These days everyone is talking about the risks of generative AI technology such as OpenAI’s ChatGPT tool. In today’s blog, we’ll look at those risks from the point of view of a business consultant who specializes in technology and IT disaster recovery.
Related on MHA Consulting: Global Turmoil Making You Ill? Try a Dose of Risk Management
The tool caused an earthquake in the general public’s awareness of artificial intelligence.
The weeks since ChatGPT was released have seen a proliferation of discussions of the marvels and potential benefits of generative AI. They have also seen an outpouring of articles exploring its potential risks.
The risks attributed to this technology range from ruining education (by writing students’ papers for them) to eliminating jobs to enabling a technological takeover that destroys humanity.
This post will take a more down-to-earth approach, looking at the risks of AI from my point of view as a BC consultant whose specialty is technology and IT/DR.
My take is likely to be less dramatic than much of what you’ll read elsewhere. It will still be sobering, however, for the simple reason that the risks of this new technology are very real.
I see the main risks of AI from the BC perspective as falling into three areas. In order of importance, they are data security, functionality, and risks tied to the tools’ language-generation capabilities, including improved phishing attacks and the dangers of misinformation.
I think the greatest and most challenging risks are those pertaining to data and functionality. The dangers associated with tools such as ChatGPT, while genuine, are familiar and manageable.
Let’s look at each of these risks in turn.
Those are the three main risks of AI from a BC perspective, as I see them. How can you, as a business continuity professional, help protect your organization against these threats? A detailed discussion will have to wait. These threats are so new that no one is sure how they will play out or what can be done about them.
In the meantime, here are a few preliminary tips for the BC professional who wants to stay on top of this emerging threat:

- Remain vigilant and keep informed as the technology develops.
- Be skeptical of developers’ assurances that the risks are under control.
- Understand how AI is being used across your organization’s ecosystem.
- Consider proactive measures such as AI-aware training and monitoring.
As the AI landscape evolves, continued awareness and adaptation will be essential to mitigate its risks.
The risks of generative AI from a business continuity perspective are significant and multifaceted. Data security poses a major challenge as AI tools make it harder to maintain confidentiality, exposing organizations to reputational and legal consequences. The use of AI in operations introduces the potential for errors that may go unnoticed and have major impacts. Additionally, the language-generation capabilities of AI tools enhance social engineering and misinformation tactics, requiring increased vigilance.
These technologies are so new that no one yet knows the best strategies for mitigating the risks they bring. As the situation develops, business continuity professionals should remain vigilant, keep informed, be skeptical of developers’ assurances that the risks are under control, understand how AI is used across their organization’s ecosystem, and consider proactive measures such as AI-aware training and monitoring.
For more information on risk mitigation and other hot topics in BCM and IT/disaster recovery, check out these recent posts from MHA Consulting: