
A BC Consultant’s View of the Risks of Generative AI

Written by Richard Long | May 25, 2023 9:30:00 AM

These days everyone is talking about the risks of generative AI technology such as OpenAI’s ChatGPT tool. In today’s blog, we’ll look at those risks from the point of view of a business consultant who specializes in technology and IT disaster recovery. 

Related on MHA Consulting: Global Turmoil Making You Ill? Try a Dose of Risk Management

The Promise and Perils of AI 

The recent release of ChatGPT demonstrates significant progress in artificial intelligence. With its ability to craft intelligent and well-informed responses, ChatGPT can answer questions, draft documents, and generate new content on demand.

The tool caused an earthquake in the general public’s awareness of artificial intelligence.  

The weeks since ChatGPT was released have seen a proliferation of discussions of the marvels and potential benefits of generative AI. They have also seen an outpouring of articles exploring its potential risks.  

The risks attributed to this technology include everything from ruining education (by writing students’ papers for them) to eliminating jobs to enabling a technological takeover that will destroy humanity. 

This post will take a more down-to-earth approach, looking at the risks of AI from my point of view as a BC consultant whose specialty is technology and IT/DR. 

It is likely to be less dramatic than what you read elsewhere. However, it will still be pretty dramatic for the simple reason that the risks of this new technology are very real. 

The Main Risks of AI from a BC Perspective 

I see the main risks of AI from the BC perspective as lying in three areas. They are, in order of importance, data security, functionality, and risks related to the tools’ language-generation capabilities, including improved phishing attacks and the dangers associated with misinformation. 

I think the greatest and most challenging risks are those pertaining to data security and functionality. The risks related to the tools’ language-generation capabilities, while genuine, are comparatively familiar and manageable. 

Let’s look at each of these risks in turn. 

  • Data security. AI is likely to make it much harder for organizations to fulfill their legal obligation to maintain the confidentiality of customers’ and patients’ personally identifiable information and medical records. This is due to the proliferation of AI tools in widely used apps and services and the ability of those tools to rapidly scan data sets and make connections. Even companies that do not use AI tools are likely to be vulnerable, because their suppliers almost certainly do use them. The increasing use of such tools increases the number of apertures through which data can escape or be stolen. Obviously, leakage of PII and HIPAA-protected data can be very damaging to any type of organization, exposing it to reputational impacts, regulatory penalties, and legal claims. (One practical control is to screen data before it leaves the organization; see the first sketch after this list.) 
  • Functionality. The second most serious risk of AI from the BC point of view is to functionality and operations. This risk arises from organizations’ increasing reliance on AI to carry out processes and make decisions. A good comparison is to self-driving cars. Such cars might bring many benefits. However, the systems behind them are not infallible. Any mistakes they make that go unnoticed and uncorrected by their human supervisors have the potential to cause catastrophes. A parallel potential exists in other activities that rely on AI, whether it’s analyzing x-rays, performing air traffic control, or scanning for bank fraud. In the future, we may well see a spike in organizations’ self-inflicted wounds brought about by overreliance on AI. (Alternately, the damage might be caused by a glitch in the AI used by one of the organization’s suppliers, possibly without the client company’s knowledge or approval.) Such errors have the potential to cause large, costly, negative impacts on organizations, their stakeholders, and the broader community. (A human-review gate is one simple safeguard; see the second sketch after this list.) 
  • Social engineering and misinformation. This last category is a catch-all for risks that relate to the language-generating capabilities of AI tools such as ChatGPT, Bing Chat, and Google Bard. One reason I rate this below the other risks is that it represents an intensification of an existing threat rather than something new. Though real and serious, this risk is also comparatively familiar. The new tools will undoubtedly improve criminals’ ability to impersonate people by copying their writing style, referencing little-known personal facts (vacuumed up off the Internet), faking pictures of them, or even imitating their voices. Hackers are likely to try to trick friends and associates of the impersonated individual into revealing information such as credit card numbers or passwords, or into clicking on links that admit malware. Language-generation tools are also likely to increase the amount of misinformation about an organization that is in circulation. Such misinformation might be produced as the result of an innocent mistake by a system or person or be the result of a deliberate attack.  
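
To make the data security point concrete, here is a minimal sketch of screening text for PII before it is sent to an external AI service. The patterns, the redact_pii function, and the sample ticket are all illustrative assumptions, not a vetted solution; a real deployment would use a dedicated PII-detection library covering far more identifier types.

```python
import re

# Hypothetical patterns for illustration only; real PII detection needs a
# vetted library and many more identifier types than these two.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

# Example: scrub a support ticket before it is pasted into an external AI tool.
ticket = "Customer john.doe@example.com (SSN 123-45-6789) reports a billing error."
print(redact_pii(ticket))
# Customer [REDACTED-EMAIL] (SSN [REDACTED-SSN]) reports a billing error.
```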
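
Similarly, for the functionality risk, one simple safeguard is to route low-confidence AI outputs to a person rather than acting on them automatically. The sketch below assumes a hypothetical AI output consisting of a label and a confidence score; the 0.90 threshold is arbitrary and for illustration only.

```python
# A minimal human-review gate for AI-driven decisions. The threshold value and
# the (label, confidence) output shape are assumptions made for illustration.
CONFIDENCE_THRESHOLD = 0.90

def route_decision(ai_label: str, ai_confidence: float) -> str:
    """Auto-apply only high-confidence AI outputs; queue the rest for a person."""
    if ai_confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-applied: {ai_label}"
    return f"queued for human review: {ai_label} (confidence {ai_confidence:.2f})"

# Example: a fraud-screening result that falls below the threshold.
print(route_decision("flag transaction as possible fraud", 0.72))
# queued for human review: flag transaction as possible fraud (confidence 0.72)
```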

Tips on Dealing with the AI Threat 

Those are the three main risks of AI from a BC perspective, as I see them. How can you as a business continuity professional help protect your organization against these threats? A detailed discussion will have to wait. These threats are so new that no one is exactly sure how they will play out or what can be done about them.  

In the meantime, here are a few preliminary tips for the BC professional who wants to stay on top of this emerging threat: 

  • Be aware. Educate yourself. Be vigilant. 
  • Incorporate AI-related risks in your overall risk assessment (a minimal risk-register sketch follows this list).
  • Don’t trust that the developers behind the new AI technologies have everything under control. They don’t. 
  • Don’t assume that if your organization doesn’t use AI you are not exposed to its risks. Chances are there are tools and services you rely on that do use AI.  
  • Learn who has access to your data and what they are doing with it. 
  • Consider incorporating language in your contracts that addresses your suppliers’ use of AI. 
  • Consider scaling up your training efforts to warn your organization’s employees about the threats from the coming, craftier phishing attacks. 
  • Consider scaling up your media and social media monitoring to guard against the possible coming wave of misinformation. 
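
As a starting point for the risk-assessment tip above, here is a minimal sketch of how AI-related entries might be added to a simple risk register. The Risk class, the 1-to-5 scales, and the sample entries are hypothetical illustrations mirroring the three risk areas discussed earlier, not a prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One row in a simple risk register; fields and 1-5 scales are illustrative."""
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Hypothetical entries mirroring the three AI risk areas discussed above.
register = [
    Risk("PII leakage via third-party AI tools", 3, 5,
         "Vendor review; contract language; redaction before use"),
    Risk("Operational error from unchecked AI decisions", 2, 5,
         "Human review of AI outputs; manual fallback procedures"),
    Risk("AI-enhanced phishing of employees", 4, 3,
         "Expanded awareness training; phishing simulations"),
]

# Print the register sorted by score so the biggest exposures surface first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name} -> {risk.mitigation}")
```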

As the AI landscape evolves, continued awareness and adaptation will be essential to mitigate its risks. 

The Need for Vigilance 

The risks of generative AI from a business continuity perspective are significant and multifaceted. Data security poses a major challenge as AI tools make it harder to maintain confidentiality, exposing organizations to reputational and legal consequences. The use of AI in operations introduces the potential for errors that may go unnoticed and have major impacts. Additionally, the language-generation capabilities of AI tools enhance social engineering and misinformation tactics, requiring increased vigilance.  

These technologies are so new that no one knows the best strategies for mitigating the risks they bring. As the situation develops, the business continuity professional should make it a point to remain vigilant, keep informed, be skeptical of developers’ control, understand AI usage in their organization’s ecosystem, and consider implementing proactive measures such as AI-aware training and monitoring.  

Further Reading 

For more information on risk mitigation and other hot topics in BCM and IT/disaster recovery, check out these recent posts from MHA Consulting: