Guest article by Daniele Scarpi and Eleonora Pantano.
In recent years, the surge in automation, artificial intelligence (AI), and machine-learning algorithms has profoundly reshaped retail services and, consequently, consumer experiences.
For instance, Alibaba’s FlyZoo Hotel relies on AI robots for everything from check-in to room management, Hilton is testing robotic receptionists, and beauty brands employ AI for personalized recommendations and dedicated customer service online and in-store.
However, this technological progress also raises ethical concerns, prompting national governments and intergovernmental bodies (e.g., the European Commission) to propose new regulatory frameworks addressing AI risks. Despite data protection laws like the GDPR, the actual use of AI remains largely unregulated, highlighting the need for comprehensive guidelines.
Corporate Digital Responsibility (CDR) emerges as a crucial concept to address these unique challenges, going beyond legal obligations. It encompasses shared values guiding an organization’s use of digital technology, pushing for ethical and responsible actions.
Yet the ethical application of AI, and the development of AI whose moral intelligence is still largely underdeveloped, pose ongoing challenges. For instance, some companies engage in unethical practices such as “machine washing” (presenting their AI practices as more ethical than they actually are). Thus, scholars and practitioners are actively exploring the role of CDR in service automation, especially in retail. Accordingly, we explore the role of CDR in shaping Artificial Intelligence Responsibility in Retail Service Automation (AIRRSA) (Organizational Dynamics, 2024). As AI applications become widespread in retail, AIRRSA becomes imperative.
CDR and AIRRSA differ: CDR primarily focuses on appropriate data management, encompassing both legal and ethical aspects, whereas AIRRSA specifically targets the ethical implications of AI algorithms and systems in retail. Thus, AIRRSA concerns only the ethical use of AI technologies in the retail industry and the related automation. The two concepts are linked because the responsible handling of data is fundamental for training and developing ethical AI systems: both CDR and AIRRSA are rooted in responsible data management.
Corporate Digital Responsibility (CDR) and AI types
CDR is a framework that emphasizes appropriate data management, encompassing legal and ethical aspects. In the context of AI in retail, CDR becomes a critical tool to ensure the responsible handling of consumer data, articulated along three dimensions: data quality and acquisition, data storage and access, and data responsibility. These considerations extend data equity: the data accessed should be of high quality, derived from transparent acquisition methods, and stored securely, covering the entire data lifecycle, from acquisition to storage, with a commitment to ethical and reliable data management practices.
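To make these dimensions concrete, here is a minimal sketch of how a retailer might record them for each consumer dataset it holds, so that quality, acquisition, and storage can be audited across the lifecycle. The field names and example values are illustrative assumptions, not a schema proposed in the article.

```python
# Illustrative "data-responsibility register" entry covering the three CDR
# dimensions discussed above: quality/acquisition, storage/access, responsibility.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ConsumerDatasetRecord:
    name: str
    acquisition_method: str          # how the data was collected (transparency)
    consent_obtained: bool           # legal/ethical basis for processing
    quality_checked: bool            # e.g., deduplicated, validated, representative
    storage_location: str            # where the data lives
    encrypted_at_rest: bool          # storage security
    access_roles: list[str] = field(default_factory=list)  # who may use the data
    retention_until: date | None = None                     # end of the lifecycle

# Hypothetical entry for a loyalty-card dataset.
loyalty_data = ConsumerDatasetRecord(
    name="loyalty-card-purchases",
    acquisition_method="opt-in loyalty programme sign-up",
    consent_obtained=True,
    quality_checked=True,
    storage_location="eu-west data centre",
    encrypted_at_rest=True,
    access_roles=["crm-analytics", "personalisation-engine"],
    retention_until=date(2026, 12, 31),
)
print(loyalty_data)
```

A register of this kind does not make data handling ethical by itself, but it makes the acquisition, storage, and access choices visible and auditable.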
Furthermore, as we delve into the various types of AI, it becomes evident that each requires different considerations and solutions. In particular, different AI types raise different privacy and ethical concerns:
- Verbal-Linguistic Intelligence: it allows AI to engage with consumers through language. These systems must embed ethical norms that both consumers and retailers share. CDR norms can be developed and implemented to ensure transparent interactions and security for consumers during their shopping experience.
- Logic-Mathematical Intelligence: it excels at processing and analyzing data. The concern here lies in potential automated decision-making without consumer knowledge or consent. Thus, CDR should evolve, ensuring a balance between retailer advantages and consumer privacy.
- Visual-Spatial Intelligence: including facial recognition technologies, it raises significant privacy concerns. From tracking emotions to potential discrimination, retailers must implement specific regulations for transparent visual data collection. CDR plays a crucial role in limiting access to this data and avoiding emotional manipulation.
- Social Intelligence: it allows AI to respond to human social cues, relying on facial expressions, tone of voice, and behavioral patterns. To avoid deceptive practices and emotional manipulation, retailers need to implement regulations that uphold fairness and ethical use of social cues gathered during interactions.
- Processing-Speed Intelligence: it allows AI to offer efficiency but may perpetuate biases if trained on biased data. CDR applications need to include best practices for identifying and mitigating biases, ensuring fair outcomes for all consumers (a minimal example of such a bias check is sketched after this list).
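As a minimal sketch of the kind of bias audit a CDR programme might require before an AI system is deployed, the example below compares how often different consumer groups receive a positive automated outcome (e.g., a personalised offer). The data, the group labels, and the 80% threshold are illustrative assumptions, not part of the original framework.

```python
# Simple "selection rate" audit: does any consumer group receive positive
# outcomes far less often than the best-served group?
from collections import defaultdict

def selection_rates(decisions, groups):
    """Share of positive outcomes (e.g., an offer granted) per consumer group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def passes_fairness_check(decisions, groups, min_ratio=0.8):
    """Flag the system if a group's rate falls below min_ratio of the best-served group."""
    rates = selection_rates(decisions, groups)
    worst, best = min(rates.values()), max(rates.values())
    return best == 0 or (worst / best) >= min_ratio

# Hypothetical audit data: 1 = the consumer received a personalised offer.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups    = ["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"]

print(selection_rates(decisions, groups))
print("Fair enough to deploy?", passes_fairness_check(decisions, groups))
```

Such a check is only a first step; in practice bias mitigation also requires scrutinising the training data and the features the model relies on.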
Value from AIRRSA: responsible data management
The value derived from AIRRSA lies in legal compliance, a positive reputation, and risk mitigation. Responsible data management builds trust, enhances reputation, and minimizes the risk of data breaches, fostering a competitive advantage for retailers. Value in AIRRSA thus stems from data quality, transparent data acquisition, and secure data storage, which ultimately translate into data responsibility. Data responsibility, in turn, extends data equity across the entire data lifecycle, from acquisition to storage, with a commitment to ethical and reliable data management practices, creating value and fulfilling a societal role that comprises retailers’ ethics, fairness, and privacy protection.
Challenges for Different Stakeholders
As AI continues to transform the retail industry, various stakeholders face challenges that CDR can help mitigate:
- Scholars need to collaborate across disciplines to develop a unified theory of ethical AI. Understanding consumers’ role in ethical AI and balancing AI advances against their risks are crucial research challenges.
- Managers must explore the profitability of integrating ethical AI, introduce ethical governance mechanisms for AI, and foster constant dialogues with stakeholders to optimize AI usage for all parties involved.
- Policy-makers face the challenge of crafting new regulations to ensure AI benefits consumers without causing harm. Defining maximum tolerable risks, minimum advantages for consumers, and legal responsibilities for AI errors are critical aspects.
- Developers must introduce privacy safeguards when coding AI algorithms, ensure human oversight for critical outcomes, and implement mechanisms for system recovery in case of errors (a minimal sketch of such safeguards follows this list).
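Below is a minimal sketch of two developer-side safeguards of the kind mentioned above: pseudonymising consumer identifiers before they reach an AI pipeline, and routing low-confidence, high-stakes decisions to a human reviewer. The function names, the keyed-hash approach, and the confidence threshold are illustrative assumptions rather than prescriptions from the article.

```python
# Two illustrative safeguards: pseudonymisation of consumer identifiers and a
# human-in-the-loop gate for decisions the system is not confident about.
import hashlib
import hmac

SECRET_SALT = b"rotate-me-regularly"  # hypothetical secret, kept outside the code base in practice

def pseudonymise(consumer_id: str) -> str:
    """Replace a raw consumer identifier with a keyed hash before training or scoring."""
    return hmac.new(SECRET_SALT, consumer_id.encode(), hashlib.sha256).hexdigest()[:16]

def decide_with_oversight(score: float, threshold: float = 0.9) -> str:
    """Automate only high-confidence outcomes; defer the rest to a human reviewer."""
    return "auto-approve" if score >= threshold else "escalate-to-human-review"

# Usage: the model never sees the raw identifier, and borderline cases are escalated.
record = {"consumer": pseudonymise("customer-00042"), "confidence": 0.72}
print(record, decide_with_oversight(record["confidence"]))
```

The design choice here is deliberately conservative: identifiers are never stored in clear text within the AI pipeline, and any decision below the confidence threshold is handed back to a person rather than automated.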
Looking Ahead: Ethical AI in Retail
As we navigate the evolving landscape of AI in retail, CDR and AIRRSA become paramount. They are not merely compliance measures but a guiding framework to ensure the ethical use of AI, benefiting consumers and retailers alike.
Summarizing, responsible AI in retail is a collective effort that involves scholars, managers, policy-makers, and developers. By embracing CDR and AIRRSA principles and understanding the nuances of different AI intelligences, we can build a future where AI enhances the retail experience ethically, delivering value to both consumers and retailers. The journey toward ethical (moral?) AI in retail is ongoing, but with CDR and AIRRSA as our compass, we can steer toward a responsible and brighter future.
Daniele Scarpi
Associate Professor of Marketing,
Department of Management,
University of Bologna
Eleonora Pantano
Associate Professor in Retail and Marketing Technology,
School of Management,
University of Bristol Business School