A guest article by Werner Kunz, Jochen Wirtz, Nicole Hartley, and James Tarbit. 

The age of AI is upon us, changing our lives in ways we could never have imagined. Through emerging advances in AI technology, we are seeing an array of new applications that are transforming industries and improving our daily routines. Examples of these AI applications include: 

  • Personal assistants (e.g., Amazon’s Alexa, Apple’s Siri, Replika) that can develop empathetic relationships with users or introduce new purchasing habits into our routines.
  • Medical diagnosis systems and AI assistants (e.g., Ada or Kareo) that can identify diseases and recommend treatments.
  • AI-powered text-to-image algorithms (e.g., DALL-E and Stable Diffusion) that can generate images from text prompts.
  • AI-powered large language models (e.g., ChatGPT or Jasper) that can write content and program code in response to user requests.

As AI and digital technologies based on big data become increasingly sophisticated and deliver significant improvements in service quality and productivity, it is crucial to consider the ethical implications of these technologies. For instance, AI systems that make autonomous decisions about individuals, such as loan approvals or insurance policy decisions, can produce biased outcomes if the data and algorithms used to train them are not designed to be fair. This can lead to situations where AI systems make unethical or unfair decisions without anyone being able to intervene. Further ethical issues include coerced data disclosure, dehumanization and threats to human dignity, social deprivation, disempowerment, and social engineering. Despite these concerns, there is relatively little research on how service organizations can navigate the ethical dilemmas raised by digital technology.
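To make the loan example concrete, here is a minimal sketch in Python (entirely synthetic data and a deliberately naive model, not drawn from any real lender or from our paper) of how a system trained on historically biased approval decisions reproduces that bias, and how a simple audit can surface the gap:

```python
# Minimal sketch: synthetic loan data and a deliberately naive model.
import random

random.seed(42)

# Historical decisions favored group "A" at equal credit scores.
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    score = random.gauss(650, 50)
    bias = 25 if group == "A" else -25        # past human bias, not merit
    history.append((group, score, score + bias > 660))

# Naive "training": learn one approval threshold per group from history,
# which silently encodes the historical bias as if it were signal.
def learned_threshold(grp):
    return min(s for g, s, approved in history if g == grp and approved)

model = {g: learned_threshold(g) for g in ("A", "B")}

# Fairness audit on fresh, statistically identical applicant pools.
applicants = [(random.choice(["A", "B"]), random.gauss(650, 50))
              for _ in range(1000)]
for g in ("A", "B"):
    scores = [s for ag, s in applicants if ag == g]
    rate = sum(s > model[g] for s in scores) / len(scores)
    print(f"group {g}: approval rate {rate:.2f} (threshold {model[g]:.0f})")
```

On statistically identical applicant pools, the learned per-group thresholds produce markedly different approval rates, which is precisely the kind of unfair outcome that can persist unnoticed unless someone runs an explicit fairness audit.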

The Concept of CDR and Its Relevance for Service

As data combined with digital technologies is used to make service decisions, serve customers, and generate revenue, these behaviors and decisions need to be accountable to moral norms and ethical considerations. This is the focus of corporate digital responsibility (CDR), which, until very recently, had not been explored in a service context. CDR refers to “the principles underpinning a service firm’s ethical, fair, and protective use of data and technology when engaging with customers within their digital service ecosystem” (Wirtz et al., 2023).

CDR is important for service markets as service firms tend to have closer relationships with customers and more touchpoints, and they process more consumer data than goods companies. Services are also easier to digitize than physical goods, intensifying the likelihood of CDR issues occurring. The increased digitization of service offerings results in an expanding stream of data and the integration of various data types into service value chains. Service firms use tools like cookies, tracking pixels, and facial recognition to capture and integrate data from every customer and interaction. AI is then used to process and analyze the data and to train models that make predictions and optimize customer experiences.
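As a rough illustration of this capture-and-integrate step (the touchpoints, field names, and identifiers below are hypothetical), the following sketch merges interaction events from several channels into a single customer profile of the kind that downstream AI models are trained on:

```python
# Minimal sketch: hypothetical touchpoints and field names.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    events: list = field(default_factory=list)

    def integrate(self, source: str, event: dict) -> None:
        # Every touchpoint (cookie, pixel, camera) appends to one identity.
        self.events.append({"source": source, **event})

profiles = defaultdict(CustomerProfile)

# Events as they might arrive from different capture tools.
profiles["cust-001"].integrate("web_cookie", {"page": "/pricing", "dwell_s": 42})
profiles["cust-001"].integrate("tracking_pixel", {"email_opened": True})
profiles["cust-001"].integrate("store_camera", {"visit": "2024-05-01"})

# The merged profile is what feeds model training and personalization.
print(len(profiles["cust-001"].events), "events linked to a single identity")
```

The point is that once events from separate channels are linked to one identity, the resulting profile is far richer than anything the customer consented to at any single touchpoint, which is where CDR questions begin.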

However, the complexity of these AI systems means that even their designers may not fully understand how they generate results or what harms they may cause. These practices raise ethical concerns related to surveillance capitalism, customer vulnerabilities, privacy, fairness, and other potential harms associated with the use of AI. In light of these issues, service firms need to carefully consider the implications of CDR for their relationships with customers and the potential harms associated with the use of data and digital technology in their value chains.

The Data and Technology Life-Cycle 

CDR issues fall into three categories according to the type of harm inflicted: ethical, fairness, and privacy. Ethical issues include coercion and threats to human dignity; fairness issues include biases in algorithm-based decision-making and procedural unfairness; and privacy violations include unwanted marketing and surveillance. In the extant literature, ethical, fairness, and privacy issues are discussed in isolation. However, a more integrated approach can help us better understand how data and digital technologies are connected and the issues that arise. By looking at the key literature on these topics through the lens of CDR, we can better understand the risks and concerns that consumers face when using digital technologies (for a more in-depth analysis, see Wirtz et al., 2023).

Within service value chains, CDR issues related to data and technology occur across four distinct life-cycle stages: creation, operation, refinement, and retention. Each stage poses different challenges for service firms pertaining to data and technologies, and the two cycles progress at different rates: data is constantly being captured and processed, whereas the technology cycle moves more slowly (as shown in Figure 1, where data is represented by the small, fast-moving cog and technology by the larger, slower-moving cog).

Figure 1: CDR life-cycles of technology and data

The Digital Service Ecosystem – The Origin of CDR Issues

The application of AI has led to the formation of a new digital service ecosystem, with a service firm at the center orchestrating value creation, exchange, and capture among customers, complementors, and external business partners. The ecosystem shows the service firm’s business model from two perspectives: front-end and back-end, with related flows of money, service, data, insights, and technologies (see Figure 2).

Figure 2: The digital service ecosystem model

The Front-end of the Digital Service Ecosystem

The front-end of the ecosystem is the customer-facing side, which includes the service delivery system and the customer interface. Customers allow service firms to capture their data in exchange for digitally optimized service delivery and other value-enhancing service outcomes. The value of customer data can be so high that customers are sometimes offered the service for free. Consumer data collected by service firms is not only useful for the firms’ own purposes; it can also be used by business partners on the back-end to train and optimize AI predictive models. This practice raises concerns about customer data being used without consent or being exposed to ethical risks, as a service interface may give consumers no capacity to control who sees and uses their data.

The Back-end of the Digital Service Ecosystem

The back-end of the digital service ecosystem includes complementors and external business partners. Complementors include traditional supply chain partners, who provide the services and goods that contribute to the service firm’s value chain, and digital technology partners, who provide services and technologies such as AI and robots that are trained and powered by data from the service firm.

Sharing market information and analytics can improve supply chain productivity, but it can also raise concerns about data capture, consumer privacy, and algorithmic biases. Anonymizing individual customer data may address some of these issues, but CDR concerns will still arise (see the sketch below). These practices also make it harder for customers to opt out of the ethical, privacy, or fairness risks incurred when their data is shared with external partners. Service firms and digital technology partners therefore need to discuss CDR risks and work together to mitigate potential issues.
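To illustrate why anonymization alone is insufficient, the sketch below (synthetic records and made-up attributes) shows the classic re-identification problem studied in the k-anonymity literature: removing names does not help when quasi-identifiers such as ZIP code, birth year, and gender can be joined against an outside dataset:

```python
# Minimal sketch: synthetic records and made-up attributes.

# "Anonymized" service data shared with a partner: names removed.
shared = [
    {"zip": "02125", "birth_year": 1980, "gender": "F", "spend": 4200},
    {"zip": "02125", "birth_year": 1991, "gender": "M", "spend": 310},
    {"zip": "10001", "birth_year": 1980, "gender": "F", "spend": 88},
]

# Outside data the recipient can join against (e.g., a public register).
outside = [{"name": "Jane Doe", "zip": "02125", "birth_year": 1980, "gender": "F"}]

QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")

for person in outside:
    matches = [r for r in shared
               if all(r[k] == person[k] for k in QUASI_IDENTIFIERS)]
    if len(matches) == 1:  # a unique match means the record is re-identified
        print(f"{person['name']} re-identified; spend = {matches[0]['spend']}")
```

A single unique match on the quasi-identifiers is enough to re-attach an identity to the “anonymized” record, along with everything else that record contains.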

Service firms can generate additional revenue by selling customer data and insights to external business partners. However, the sheer magnitude and complex nature of data sharing in digital service ecosystems often make it impossible to determine where data comes from and whether consumer privacy regulations are being followed. These practices have been criticized, with Zuboff calling it “surveillance capitalism” and defining it as “a new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales” (Zuboff 2015, p. 8).

As technology and AI become increasingly ingrained in our everyday lives, the harms associated with the ethical, privacy, and fairness issues of CDR will amplify over time. To mitigate these harms, service firms can implement positive CDR practices in their value chains and organizations to protect the interests of consumers. In our next SERVSIG blog series on CDR, we will discuss the challenges of practicing good CDR and potential tools and practices for achieving it.


Werner Kunz,
Professor of Marketing at the University of Massachusetts Boston, U.S.A.
Jochen Wirtz,
Vice Dean MBA Programmes and Professor of Marketing at NUS Business School, National University of Singapore, Singapore
Nicole Hartley,
MBA Director and Associate Professor at The University of Queensland, Australia
James Tarbit,
PhD Candidate at The University of Queensland, Australia

References:

  • Wirtz, J., Kunz, W. H., Hartley, N., & Tarbit, J. (2023). Corporate digital responsibility in service firms and their ecosystems. Journal of Service Research, forthcoming. https://doi.org/10.1177/10946705221130467
  • Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75-89.
