Recent research presented at the RSA Conference indicates that malicious prompts targeting Large Language Models (LLMs) can be detected with about 95% accuracy. This finding underscores the critical role of Nvidia technology, which supports sub-millisecond inference times, making it suitable for real-time applications as enterprises adopt generative AI.
As generative AI use rises, with Gartner predicting that over 80% of organizations will implement generative AI solutions this year, the security landscape is shifting. Upwind warns that natural language has become a new vulnerability, diverging from traditional threats that rely on exploiting code flaws. New risks, such as prompt injection and social engineering, are emerging as LLMs are integrated into enterprise processes.
Mose Hassan, VP of Research & Innovation at Upwind, stated that the nature of LLMs requires a transformation in security approaches, emphasizing the need to prevent threats that manipulate language. Upwind has created a three-stage architecture tailored to production environments, focusing on traffic identification, threat detection, and ensuring minimal latency and false positives while maintaining effectiveness.