Guest article by Werner H. Kunz and Jochen Wirtz

A lot has happened since the concept of corporate digital responsibility (CDR) was introduced to the scientific community (Lobschat et al. 2021). In the past, being digitally responsible primarily meant ensuring the safe and secure management of customer data: asking for permission to collect it, securing it, and using it only for the purpose for which it was collected. Today, being digitally responsible is a far more complex task for organizations.

The advent of Gen AI has revolutionized service industries and transformed our perspective on the capabilities and dangers of digital technologies. While protecting individual customer data is more important than ever, companies face new issues arising from this radical digital services revolution. Here are five CDR issues that will become even more critical in the age of AI:

How to Live with a Black Box?

Large language models (LLMs) are incredibly capable, but they have a major flaw: they can produce incorrect, misleading, or even harmful information, a phenomenon known as “AI hallucination.” For businesses that rely heavily on AI, this is a big risk. Imagine making critical decisions based on inaccurate output from “black box” AI systems whose inner workings even their own developers don’t fully understand (Rahwan et al., 2019). This lack of transparency is a serious concern, and it may even be intentional, driven by profit motives.

In today’s environment, where trust is key for businesses, organizations can’t simply assume that AI outputs are 100% accurate. Hoping for the best is not a responsible business strategy. Investing in understanding the black box of AI is not just a prudent business move; it’s an ethical necessity. Businesses that ignore these risks are putting their reputation and their customers’ trust on the line.

What Does Intellectual Property Mean in an AI Age?

AI systems are built to collect and analyze massive amounts of data from user interactions. This data is then used to train the systems, making them more intelligent and efficient. In this process, however, there is a high risk of violating intellectual property rights by reusing copyrighted content from the Internet without explicit permission, which can harm the reputation of the original creators and the value of their work. To protect intellectual property rights, AI systems must have robust mechanisms in place to track and attribute their output to the original content creators. Anything less than this level of respect for artists and authors would be unacceptable.

Although direct infringement is a common symptom of poor CDR in the AI space, and many prominent AI companies have been caught harvesting databases without permission, it is only the tip of the iceberg when it comes to property rights. The real value of a database lies in the insights that can be generated from it. Once an AI system “understands” the patterns in a database, it no longer needs the raw data and can generate results of equal quality without infringing copyright. So do firms own the rights to insights derived from copyrighted material? Our society must find ways to protect the insights generated from data that does not belong to the company exploiting it.

The Atomic Bomb Moment of AI

AI offers incredible opportunities to transform how we work, boosting efficiency and enabling new services at low cost. Yet as powerful as AI is, it also carries major risks of misuse and abuse. Bad actors could leverage AI systems for nefarious purposes such as spam, scams, harassment, disinformation, impersonation fraud, or leaking sensitive data. These abuse cases seriously threaten people’s well-being and a company’s reputation. Compounding the risks, many advanced AI models remain opaque “black boxes” that are extremely difficult to interpret or predict. If critical AI systems run amok in ways we don’t understand, the result could be a chain reaction of harm.

It is crucial to secure and regulate AI systems to prevent such negative scenarios. While AI presents amazing potential upsides, we must stay vigilant about guarding against the downsides. Strong authentication and verification mechanisms that confirm an AI system’s identity and output are essential. In addition, human experts must monitor the systems and guide them with feedback. These systems must also be governed by strict codes of ethics and legal guidelines that define acceptable use.

What Is the Value of the Human Touch in an AI Age?

In many knowledge-based service sectors, including law, financial planning, marketing, communications, and the creative industries, Gen AI can now undertake tasks that were previously handled by humans within service firms. With many of these tasks performable by AI at near-zero marginal cost, firms feel pressure to restructure their workforce and eliminate inefficient process steps (see also the CDR calculus in Wirtz et al. 2023).

This shift has significant implications for the human service workforce and puts enormous pressure on employees to demonstrate their value. It raises the question of what the human factor will contribute to the organization of the future and how companies should treat their employees responsibly in the face of such strong AI-driven pressure to reduce costs. Can employees be upskilled to take on new tasks? Which tasks are future-proof and cannot be replaced by AI? What real value can the human touch provide? Based on this discourse, each company must determine how to act responsibly in building the organization of the future.

The Vicious Cycle of the AI Economy

Achieving the performance of today’s generative AI applications requires huge data sets and massive computing power, so a company must have substantial resources to be at the forefront of this development. As a result, only the world’s largest technology companies can seriously compete and offer their AI systems as a service.

Building in-house AI capabilities is expensive, and few firms can seriously compete with the capabilities of big tech companies. Without these companies, the successes we are currently seeing in AI would be difficult to achieve. However, working with big tech creates a strong dependency, one that grows the more we rely on AI in our routine business processes. Moreover, by using big tech’s AI systems, firms feed those systems even more data, allowing them to learn further and widen their advantage over independent providers.

This trend has already begun. For example, OpenAI introduced a GPT store that allows anyone to build a customized AI system for whatever activity they are interested in. The introduction of this store instantly destroyed the business model of many AI service providers that offered specialized solutions based on OpenAI’s LLMs. Whether companies want to contribute to making a few players in a new industry too big to fail is an important strategic and ethical question.

The questions above are just a starting point for a new area of service management in which we need to find new configurations for our companies to continue providing excellent service to our customers. This journey is a bit scary but also exciting. It’s great to be in the middle of a service revolution and to see firsthand, in real time, what’s happening.

Werner Kunz
Professor of Marketing – University of Massachusetts Boston
Director of the Digital Media Lab

Jochen Wirtz
Vice Dean, MBA Programmes, and Professor of Marketing, NUS Business School, National University of Singapore

References

Lobschat, L., Mueller, B., Eggers, F., Brandimarte, L., Diefenbach, S., Kroschke, M., & Wirtz, J. (2021). Corporate digital responsibility. Journal of Business Research, 122, 875–888. https://doi.org/10.1016/j.jbusres.2019.10.006

Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J. F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., Jennings, N. R., Kamar, E., Kloumann, I. M., Larochelle, H., Lazer, D., McElreath, R., Mislove, A., Parkes, D. C., Pentland, A., … Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477–486. https://doi.org/10.1038/s41586-019-1138-y

Wirtz, J., Kunz, W. H., Hartley, N., & Tarbit, J. (2023). Corporate Digital Responsibility in Service Firms and Their Ecosystems. Journal of Service Research, 26(2), 173–190. https://doi.org/10.1177/10946705221130467
