How AI will Change the Cybersecurity Landscape in 2024

Last year, cybercriminals dramatically escalated the use of AI-generated cyber attacks, unleashing tactics like voice deep fakes and prompt injection to target individuals an...

Deen Hans
Deen Hans

Principal Software Engineer

Share this blog post:

Deimos Fallback Image

Last year, cybercriminals dramatically escalated the use of AI-generated cyber attacks, unleashing tactics like voice deep fakes and prompt injection to target individuals and businesses. Anticipating a significant surge in these tactics throughout 2024, especially targeting businesses lacking robust cybersecurity investments, we foresee an intensified threat landscape. Financial institutions, in particular, are highly susceptible to voice-deep fake attacks, especially phishing scams where callers pose as reputable sources to pilfer private information. At Deimos, our primary focus is collaborating with businesses tackling the escalating challenges in their industries. In this article, we highlight two of the most pressing issues we’re helping companies combat:

  1. Verifying the authenticity of communication sources
  2. Validating LLM inputs to mitigate potential threats

While the benefits of AI for organisations are undeniable, prioritising its controlled and secure implementation within the organisational framework is crucial. Establishing comprehensive policies is paramount to ensuring AI operates within its intended parameters, and sensitive customer information is not processed by Language Model Models (LLMs) hosted outside a business’s systems. Businesses must meticulously scrutinise and evaluate AI tools for productivity and feature development, ensuring data privacy maintenance and compliance with regional data protection regulations. As cybercriminals increasingly target vulnerable entities, the imperative for businesses to secure assets, private information, and overall reputation becomes even more pronounced.

Verifying the Authenticity of Communication Sources

The challenge of verifying the authenticity of communication sources has become increasingly complex. Instances of real-time voice-modulated deep fakes for impersonation, pre-recorded video deep fakes mimicking individuals, AI-enabled fake accounts orchestrating social media campaigns with artificial engagement to disrupt the public image of an organisation or individual, and sophisticated email and messaging phishing attempts utilising AI-generated text are on the rise.

Solution

Organisations must adopt robust measures to ensure the authenticity of communications. One straightforward yet effective approach involves incorporating a second medium of verification to confirm the identity of an employee or client. An example of this SMS two-factor authentication is when you receive a text message containing a one-time password used to access a network, system, or application. Despite the simplicity and efficacy of this method in thwarting potential deep fakes, it often goes overlooked by many businesses.

A broader awareness of deep fake voices and their generation methods is essential. Organisations should take proactive steps to shield themselves from inadvertently providing voice training data that could be exploited for deep fake creation. Typically, scammers collect such data through unsolicited calls, manipulating victims into answering specific questions and unwittingly providing voice samples. Verizon reports that 74% of 5199 data breaches include the human element, with people being involved either via error, privilege misuse, use of stolen credentials or social engineering. To minimise this risk, organisations are encouraged to opt for video-calling interactions whenever possible to further verify the identity of individuals within the organisation, thereby reducing the likelihood of successful impersonation. In cases where a potential deep fake interaction is suspected, it is imperative to verify the communication over a second medium, preferably in person if feasible.

Validating Large Language Model Inputs to Mitigate Potential Threats

Another cybersecurity challenge which will be critical to mitigate against in 2024, is the increasing exploitation of input validation. With more businesses leveraging advanced LLM models like OpenAI’s GPT-4, businesses must exercise caution in processing user input. Implementing moderation and employing security-focused language models are essential measures to detect malicious input and prevent prompt injection attempts.

Prompt injection attacks have gained prominence as a threat to AI-enabled applications and products. These attacks strategically aim to elicit unintended responses from large language-based tools, often by manipulating or injecting malicious content into prompts to exploit the system.

Solution

Customer-facing organisations should prioritise reinforcing their services with robust validation measures. Businesses utilising LLMs for customer interactions must ensure that their applications do not reveal internal details, such as specific training data and backend information. Models need protection against attackers attempting to coerce them into switching personas or accepting unauthorised rules and instructions beyond their intended purpose.

Developing resilient system prompts, coupled with an additional layer of user input message verification and scrutiny of LLM outputs, can significantly minimise the impact of prompt injection. By implementing these measures, organisations can bolster their defences against evolving cyber threats.

Final Thoughts

In the dynamic landscape of cybersecurity, the integration of AI into mainstream businesses has ushered in both unprecedented opportunities and challenges. The escalating use of AI-generated cyber attacks, exemplified by voice deep fakes and prompt injection, poses a significant threat to organisations, particularly those in the financial sector. As a cybersecurity firm operating in Africa, Deimos is at the forefront of addressing these challenges, collaborating with companies to fortify their defences and navigate the intricate terrain of AI security.

Looking ahead into 2024, the imperative for organisations lies in establishing controlled and secure AI adoption. Verifying communication sources, mitigating deep fake risks, and fortifying against prompt injection attacks are pivotal components of this strategy. By implementing vigilant policies, raising awareness, and embracing proactive measures, companies can safeguard their assets and maintain trust in an increasingly interconnected digital landscape.

To find out how we can help you with you security or AI requirements, click here.

Share this blog post via:

Share this blog post via:

or copy the link
https://deimos.io/post/how-ai-will-change-the-cybersecurity-landscape-in-2024
LET'S CHAT

Let one of our certified experts get in touch with you