Guest article by Hsin-Hsuan Meg Lee and Lorena Blasco-Arcas.*
Imagine a global beauty brand that launches a campaign using a virtual influencer, “Eva,” created through advanced AI technology. Initially, Eva successfully drives the brand’s engagement across various demographics by delivering personalized content that resonates with diverse consumer groups. However, complexities arise as Eva becomes integral to the brand’s marketing strategy. The AI algorithms driving Eva begin to generate content that inadvertently emphasizes certain beauty standards over others, alienating significant segments of the target audience and sparking backlash on social media platforms. This unintended consequence reveals inconsistencies and biases in the AI’s programming, leading to public distrust.
As AI continues to transform business landscapes by enhancing operational efficiency and strategic decision-making, its broader social implications become increasingly relevant. Scholarly studies, such as those by Makarius et al. (2020) and Holmström (2022), provide deep insights into AI capabilities but often overlook the complexities of integrating technology with humanity. Recent scholarly work emphasizes the importance of responsible system designs that benefit businesses and society (Arrieta et al., 2020; Kolbjørnsrud, 2024; Lepri et al., 2021).
Amidst increasing demands for AI systems that are explainable, interpretable, and transparent (Ashok et al., 2022), it is essential to emphasize the principles of responsibility, resilience, and respectfulness when strategizing AI adoption (Figure 1). These principles, rooted in the philosophical tenets of beneficence, nonmaleficence, justice, and autonomy (Floridi et al., 2018), often face challenges in practical application but are crucial for aligning technological advancements with human values. We adopt the viewpoint established by de Ruyter et al. (2022), which builds on the stewardship theory of Hernandez (2008). This theory underscores the importance of balancing individual and organizational objectives for the collective benefit of society. It is particularly pertinent when considering the far-reaching effects of AI and the dynamics between different system levels (individual, organizational, and societal) required to achieve beneficial collective outcomes.
These principles can be realized and practiced at the individual, organizational, and societal levels, illustrating the integrated impact that the use of AI can have across all three. Integrating AI across these levels presents a complex interplay of benefits, risks, and responsibilities. It is essential to explore both the conflicts and the synergies that emerge from this integration to ensure that AI technologies are deployed in a manner that benefits all stakeholders.
At the individual level, AI has the potential to enhance personal convenience and efficiency, from personalized recommendations in shopping to adaptive learning environments in education. However, these benefits often come at the cost of privacy and personal autonomy, creating conflicts when organizational goals for data utilization clash with individuals’ rights to privacy.
At the organizational level, companies seek to leverage AI for operational efficiency and competitive advantage. This drive can lead to synergies, such as improved employee productivity through automation. However, it may conflict with societal ethical standards when efficiency-driven practices lead to job displacement or when AI decision-making systems, designed to maximize profits, inadvertently reinforce biases.
At the societal level, AI can support large-scale public benefits, such as enhancing public healthcare systems or improving urban planning through data analysis. Yet, societal goals for equitable AI use can be at odds with organizational priorities, particularly when the pursuit of profit overlooks broader social implications like surveillance or socioeconomic disparities.
Aligning AI implementation with humanity: Soul, Head, Heart, Hand
Turning the principles into practice, we build on the framework proposed by Laasch et al. (2023), which identifies four dimensions through which individuals and organizations can enact the principles of responsibility, respectfulness, and resilience: Soul, Head, Heart, and Hand. These dimensions operationalize the core principles into actionable guidelines that respect and integrate the complexities of AI within responsible boundaries, ensuring AI’s ethical and societal alignment. Table 1 summarizes each dimension’s focus and exemplary managerial considerations for implementation.
The Soul dimension emphasizes a principled commitment to core values in AI development. This dimension ensures that AI systems not only adhere to ethical norms but actively enhance societal well-being, embodying the principle of responsibility by advocating for systems that align with fundamental human values. Organizations should integrate value-based governance systems that not only comply with regulations but also proactively champion ethical practices.
The Head dimension addresses the intellectual requirements necessary for AI implementation, including strategic alignment, resource availability, and knowledge management. It reflects the resilience principle by preparing organizations to be “AI-ready,” foreseeing and managing ethical implications such as data privacy and algorithmic bias.
The Heart dimension stresses the importance of emotional intelligence in AI contexts (Huang et al., 2019). This dimension is crucial for developing empathetic interactions between humans and AI systems, thereby influencing trust and psychological well-being. It embodies the respectfulness principle by fostering an inclusive corporate culture that values emotional connections and diverse perspectives.
The Hand dimension focuses on the practical application and interaction of AI within organizational settings, assessing how AI is integrated into workflows and its impact on employment dynamics. This dimension ensures that AI deployments are managed to optimize both operational effectiveness and ethical considerations, supporting the core principles of responsibility and resilience.
Table 1. Dimensions, focus, and exemplary managerial considerations.

| Dimension | Focus | Exemplary managerial considerations |
| --- | --- | --- |
| Soul | Commitment to core values in AI implementation | Cultivate a culture that fosters ethical reflection and dialogue about AI implications at all levels, ensuring that AI strategies are consistently reviewed for ethical alignment. Actively shape industry standards and governmental policies to promote ethical practices in AI, reflecting a commitment to societal well-being and responsible innovation. |
| Head | Intellectual requirements of AI implementation | Ensure that AI strategies are integrated with business objectives while also meeting ethical standards, aiming for resilience in adapting to new challenges and technologies. Implement ongoing educational initiatives that enhance employees’ and stakeholders’ understanding of AI ethics and technology, promoting an organization-wide uplift in AI competence and ethical awareness. |
| Heart | Developing empathetic AI-human interactions | Develop AI systems that respect and reflect the diversity of users, incorporating varied cultural and emotional perspectives to build genuinely inclusive systems. Involve community feedback in the AI development process to understand and address specific emotional and social needs, ensuring that AI solutions foster societal harmony and respectfulness. |
| Hand | Practical application of AI | Conduct ethical audits regularly and ensure compliance with internal guidelines and external regulations, enhancing transparency and accountability in AI applications. Perform comprehensive impact assessments to understand the long-term effects of AI on employment, society, and the environment, aiming to make AI deployments not only effective but also socially responsible and resilient. |
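To make the regular ethical audits suggested under the Hand dimension more concrete, the sketch below illustrates one possible building block: a demographic-parity check on how often an AI system selects or features content for different user segments. This is a minimal illustration only; the audit records, segment labels, and tolerance threshold are hypothetical assumptions introduced here for the example, not part of any specific auditing standard or of the framework itself.

```python
from collections import defaultdict

def selection_rates(records, group_key="group", selected_key="selected"):
    """Compute per-group selection rates from audit records.

    Each record is a dict such as {"group": "segment_a", "selected": True},
    indicating whether the AI system featured content for that user.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += int(bool(record[selected_key]))
    return {group: positives[group] / totals[group] for group in totals}

def demographic_parity_gap(rates):
    """Largest difference in selection rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

if __name__ == "__main__":
    # Hypothetical audit log of system decisions across two user segments.
    audit_log = [
        {"group": "segment_a", "selected": True},
        {"group": "segment_a", "selected": True},
        {"group": "segment_a", "selected": False},
        {"group": "segment_b", "selected": True},
        {"group": "segment_b", "selected": False},
        {"group": "segment_b", "selected": False},
    ]
    rates = selection_rates(audit_log)
    gap = demographic_parity_gap(rates)
    THRESHOLD = 0.2  # illustrative tolerance; a real audit would set this with stakeholders
    print(f"Selection rates by segment: {rates}")
    verdict = "within" if gap <= THRESHOLD else "exceeds"
    print(f"Demographic parity gap: {gap:.2f} ({verdict} the {THRESHOLD} tolerance)")
```

In practice, a check of this kind would be only one item in a broader audit that also covers data provenance, consent, documentation, and the longer-term impact assessments described in the table above.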
Together, the Soul, Head, Heart, and Hand dimensions form a comprehensive framework that guides individuals, organizations, and societies in implementing AI responsibly. By integrating these dimensions, organizations can ensure that their AI strategies are not only effective but also aligned with broader ethical standards and contribute positively to society.

By grounding AI development and implementation in these principles, organizations can ensure that their technology progresses ethically, beneficially, and respectfully, contributing to a more equitable and sustainable future. The framework guides the strategic deployment of AI technologies and shapes the environments within which they operate, aligning development with both human values and technological advancements. Through thoughtful integration of these principles, organizations can lead the way in demonstrating that technology can indeed advance in harmony with humanity.
Hsin-Hsuan Meg Lee
Associate Professor of Marketing
ESCP Business School, London Campus
Lorena Blasco-Arcas
Associate Professor of Marketing
ESCP Business School, Madrid Campus
* This article was developed based on discussions and exchanges with members of the TRACIS research center**. We thank Markus Bick, Chuanwen Dong, Chi Hoang, Oliver Laasch, Vitor Lima, Laetitia Mimoun, Maximilian Weis, Erik Hermann, and Raga Teja Sudhams Kanaparthi for their input.
** The TRACIS (Transformative Research on AI for Companies, Individuals, and Society) research center at ESCP Business School aims to investigate and understand the transformative impact of disruptive technologies, particularly artificial intelligence (AI) and AI-enabled service robots, on society, organizations, and individuals. Its members are experts in areas such as marketing, organizational behavior, management, and information systems. The center integrates cross-disciplinary insights to advance knowledge on responsible AI and service robot implementation, guide public policy, influence teaching, and disseminate findings to the broader public. By focusing on responsibility, resilience, and respectfulness, it aims to harmonize business objectives with societal needs, enhance sustainable growth, and promote the equitable benefits of AI and service robots. As such, the center also advocates for the human-centric implementation of these technologies.
References
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., … & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
Ashok, M., Madan, R., Joha, A., & Sivarajah, U. (2022). Ethical framework for Artificial Intelligence and Digital technologies. International Journal of Information Management, 62, 102433.
de Ruyter, K., Keeling, D. I., Plangger, K., Montecchi, M., Scott, M. L., & Dahl, D. W. (2022). Reimagining marketing strategy: driving the debate on grand challenges. Journal of the Academy of Marketing Science, 50(1), 13-21.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., et al. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.
Hernandez, M. (2008). Promoting stewardship behavior in organizations: A leadership model. Journal of Business Ethics, 80, 121-128.
Holmström, J. (2022). From AI to digital transformation: The AI readiness framework. Business Horizons, 65(3), 329-339.
Huang, M. H., Rust, R., & Maksimovic, V. (2019). The feeling economy: Managing in the next generation of artificial intelligence (AI). California Management Review, 61(4), 43-65.
Kolbjørnsrud, V. (2024). Designing the Intelligent Organization: Six Principles for Human-AI Collaboration. California Management Review, 66(2), 44-64.
Laasch, O., Moosmayer, D. C., & Antonacopoulou, E. P. (2023). The interdisciplinary responsible management competence framework: an integrative review of ethics, responsibility, and sustainability competences. Journal of Business Ethics, 187(4), 733-757.
Lepri, B., Oliver, N., & Pentland, A. (2021). Ethical machines: The human-centric use of artificial intelligence. iScience, 24(3).
Makarius, E. E., Mukherjee, D., Fox, J. D., & Fox, A. K. (2020). Rising with the machines: A sociotechnical framework for bringing artificial intelligence into the organization. Journal of Business Research, 120, 262-273.
Illustration generated in Bing.