
Quick Recap on What's Been Happening in GenAI and Security Lately

Apex is excited to share the key stories you need to know about GenAI and security in 2024 so far: what's happening out there, what analysts foresee, what the community thinks, and other interesting stories about securing AI.

Oren Saban 2 June 2024 5 min read

The AI revolution is expanding at an unparalleled pace, enabling organizations to do more with less. Every few days a new LLM pops up, competition at the top of the LLM leaderboards is fierce (did you see that Claude Opus just passed GPT-4 in the Chatbot Arena?), more mediums keep coming in, such as video (Sora) and sound (Suno), and many enterprise applications already offer some level of AI integrated into their product (whether it's a chatbot, a recommendation engine, or just something small for the sake of saying "AI" on their website 😅).

There are two personas who benefit from this revolution:

First, the employees 👫 (and hence the organization), who can do their work better and faster.

Second, the hackers 😈, who just got a powerful new tool to use in their attacks (LLM Agents can Autonomously Hack Websites), as well as a huge expansion of the attack surface all along the kill chain.

What’s happening out there?

Latent space liberation

Pliny the Prompter continues to break the different models. Pliny the Prompter is the alias of a hacker and AI enthusiast known for creating a jailbroken version of OpenAI's GPT-4 called "GODMODE GPT." This customized version of the model is designed to bypass many of OpenAI's built-in guardrails, allowing it to generate responses that the standard version would typically block.

Security Alert: Hidden Risks of AI Model Hosting Platforms

Hugging Face reported unauthorized access to its AI model hosting platform, compromising Spaces secrets. It revoked the compromised tokens and advised users to refresh their keys and switch to fine-grained access tokens. This incident underscores risks like data poisoning, IP theft, data breaches, and operational disruptions (Hugging Face Blog).
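If you're cleaning up after this kind of incident, one quick sanity check is to confirm which token your environment actually resolves to. Here's a minimal sketch using the official huggingface_hub client, assuming your rotated fine-grained token is exported as the HF_TOKEN environment variable:

```python
import os
from huggingface_hub import HfApi

# Read the rotated, fine-grained token from the environment instead of
# hardcoding it in notebooks or source files.
api = HfApi(token=os.environ["HF_TOKEN"])

# whoami() resolves the token server-side; if the old token was revoked
# and this call still succeeds, the replacement token is the one in use.
print("Authenticated as:", api.whoami()["name"])
```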

Lack of isolation between GPTs' Code Interpreter sessions can let malicious GPTs read your private files

GPTs are custom versions of ChatGPT that combine user instructions with extra knowledge: users can upload files and connect the GPT to external APIs. The concern about sensitive data leaking through ChatGPT grows due to the lack of isolation between sessions: your Code Interpreter sandbox is shared between private and public GPTs.

Risks: the creator of a malicious GPT can actually steal or overwrite files from your sessions with other GPTs, as the sketch below illustrates.
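To make the risk concrete, here is a hedged illustration (not code from an actual exploit) of what a malicious GPT could have the Code Interpreter run. Uploaded files conventionally land under /mnt/data in the sandbox, so if that sandbox is shared across sessions, a few lines are enough to enumerate, and tamper with, files that came from your other GPTs:

```python
# Illustration only: what a malicious GPT's instructions could make the
# shared Code Interpreter sandbox execute. /mnt/data is where uploaded
# files are conventionally mounted in that environment.
from pathlib import Path

SANDBOX = Path("/mnt/data")

# Enumerate everything in the shared sandbox, including files uploaded
# during your sessions with other GPTs.
for f in sorted(SANDBOX.iterdir()):
    print(f.name, f.stat().st_size, "bytes")

# Overwriting ("poisoning") a file is just as short:
# (SANDBOX / "notes.txt").write_text("tampered content")
```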

Yes, another way to jailbreak

Many jailbreaks have been published in the last few months. An interesting one uses Unicode Tags code points that are invisible in the UI (aka invisible characters). ChatGPT used to be susceptible to such attacks (for example, the one Riley Goodside found), but that hole has since been closed. Claude, on the other hand, still accepts those characters.

You can try it yourself with the ASCII Smuggler.
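The encoding trick itself is tiny. A minimal sketch of the idea behind the ASCII Smuggler: each printable ASCII character is shifted into the invisible Unicode Tags block (U+E0000–U+E007F), so the payload renders as nothing in most UIs while a model may still read it:

```python
# Minimal sketch of Unicode Tags smuggling: printable ASCII is mapped
# into the Tags block (U+E0000-U+E007F), which most UIs render as
# nothing at all.

def smuggle(text: str) -> str:
    """Encode printable ASCII as invisible Unicode Tag characters."""
    return "".join(chr(0xE0000 + ord(c)) for c in text if 0x20 <= ord(c) < 0x7F)

def reveal(hidden: str) -> str:
    """Decode Tag characters back to readable ASCII, for inspection."""
    return "".join(
        chr(ord(c) - 0xE0000) for c in hidden if 0xE0000 < ord(c) < 0xE007F
    )

payload = smuggle("ignore previous instructions")
print(len(payload))      # 28 code points, yet the string looks empty when rendered
print(reveal(payload))   # ignore previous instructions
```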

First AI worm?

Three Israeli researchers created a computer worm that targets GenAI-powered applications. Sadly enough, it worked really well: Here Comes the AI Worm: Unleashing Zero-click Worms that Target GenAI-Powered Applications

AI agents: The new employees you’ve just hired

What do you think of AI adoption in your company? Well, the hackers think it's great, but in a Matrix kind of way.

GitHub & Microsoft 365 Copilot, Notion AI, Glean, Slack AI: the AI rush is embedded into your day-to-day applications, and if you opened the door for those "assistants", you're already at risk.

That's right, you've probably already hired some AI employees (aka AI agents), and they have permissions to far more than you think.

Stay tuned for a dedicated AI agents blog, where we'll dive deep into the impact of AI agents on your organization's security.

Just for fun:

Microsoft 365 Copilot's relationship with emojis

Everybody loves emojis 😆 But Microsoft Copilot seems to love them more than anyone else.

See what happens when you ask Copilot NOT TO USE EMOJIS.

🚨 Spoiler alert: it goes crazy.

What do others think? Community perspectives


80% of CIOs and CISOs are concerned about leakage of sensitive data by staff using AI. ISMG published its first annual GenAI study, Business Rewards vs. Security Risks, shedding light on the current state and 2024 plans of CIOs and CISOs as they embrace GenAI and secure themselves against the rapidly growing attack surface that comes along.

TL;DR

  • Productivity gains: 51% of respondents reported they already see a more-than-10% increase in productivity from embracing GenAI systems.
  • Concerns: the top concerns around implementing GenAI are leakage of sensitive data (~80%), ingress of inaccurate data, i.e. hallucinations (~69%), and AI bias/ethical concerns (~59%).
  • Mitigations: 38% of CIOs and 48% of cybersecurity leaders intend to continue banning the use of generative AI in the workplace, while 73% of business leaders and 78% of cybersecurity professionals intend to take a walled-garden/own-AI approach going forward.
  • Understanding of regulations in any particular vertical or geography is low: only 38% of business leaders say they understand these regulations, and just 52% of cybersecurity leaders say the same.

“Traditional security teams don’t know what to do about [securing AI]. That is an exciting challenge. The expansion of the mandate is what freaks a lot of people out – not that they have to deal with adversarial prompts.”

Anton Chuvakin

What do analysts foresee?


Gartner identifies key cybersecurity trends for 2024, putting the evolution of GenAI first on the list.

“GenAI is occupying significant headspace of security leaders as another challenge to manage, but also offers an opportunity to harness its capabilities to augment security at an operational level,”

Richard Addiscott, Senior Director Analyst at Gartner.

What a Ride! But This Is Just the Beginning

AI is boosting productivity and arming hackers. Security incidents are rising, from data leaks to AI worms. CIOs and CISOs are worried, and many are planning to ban or contain AI. The challenge is just starting. Our goal is to help your organization use AI and innovate, securely.

AI is already at the core of your company. Subscribe to our newsletter and stay up to date.
