British cyber experts warn of risks using AI chatbots

2023-09-06
| wicinternet.org

share

British officials are warning organizations about integrating artificial intelligence-driven chatbots into their businesses, as research has increasingly shown that they can be tricked into performing harmful tasks, Reuters reported on Aug 30.

In a pair of blog posts, the UK's National Cyber Security Centre (NCSC) said that experts are still studying the potential security risks tied to algorithms that can generate human-like interactions, also known as large language models (LLMs).

The AI-powered tools are currently being used as chatbots, with some companies envisioning them replacing internet searches as well as customer service and sales calls.

The NCSC stated this could carry risks, especially if such models are plugged into other elements of the organizations’ business processes.

Academics and researchers have found ways to subvert chatbots by feeding them rogue commands or fool them into circumventing their built-in guardrails.

For example, if a hacker structured their specific input just right, a bank's AI chatbot might be manipulated into executing an unauthorized transaction.

The NCSC said in one of its blog posts: "Organizations building services that use LLMs need to be careful in the same way they would be if they were using a product or code library that was in beta. They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it yet. Similar caution should apply to LLMs."