
Companies are rushing to adopt Generative AI (GenAI). Services such as OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini, and Microsoft's Copilot are spreading rapidly among businesses and individual users, playing a crucial role in improving work efficiency and creating innovative service models.
The Korean government is also transitioning to a new security framework, N2SF, so that Generative AI can be used in the public sector, and public institutions need to adopt Generative AI to innovate their operations.
According to a Gartner report, by 2024 about 70% of enterprises were expected to have adopted Generative AI (GenAI) or to be considering adopting it.
Generative AI stands at the heart of business innovation
Generative AI is not just a simple technology but is establishing itself as an innovative tool that changes the way companies operate. According to the report "The Great Acceleration: CIO Perspectives on Generative AI" published by MIT Technology Review Insights, CIOs (Chief Information Officers) see generative AI as a pivotal factor in spreading AI across business operations.

[Figure] Enterprise applications and use cases utilizing generative AI (Source: MIT Technology Review Insights)
According to McKinsey, generative AI is expected to create an economic value of 2.6 to 4.4 trillion dollars annually.
With the adoption of AI, it is expected that more than half of the tasks currently performed by humans will be automated between 2040 and 2060. AI is anticipated to not only replace jobs but also create new opportunities.
Generative AI excels at processing unstructured data. Companies can leverage this to uncover value hidden in data that was previously hard to use. To do so, it is essential to design data infrastructure to be flexible and scalable.
While many companies are utilizing external AI like ChatGPT and Google Gemini, they also have concerns about data protection and IP (intellectual property) issues. Some companies are pursuing the development of their own AI models, showing particular interest in building customized AI using open-source models such as LLaMA and Dolly.
Concerns about AI adoption
The primary concern for companies adopting generative AI is whether they can leverage it while protecting confidential information.
Generative AI brings tremendous efficiency gains in areas such as automation, data analysis, security enhancement, and customer interaction. At the same time, however, it introduces new security threats and expands the cyber attack surface.
Not only do the AI models themselves carry risks and vulnerabilities, but the infrastructure supporting them is also expanding the attack surface. In particular, because many AI models and training datasets are released as open source, they are easily accessible not only to developers but also to malicious attackers.
AI Model Security Threats Realized
Researchers have already discovered over a hundred malicious models on open-source model sharing platforms that are capable of injecting malware into user systems.
In one case, hackers set up a fake profile impersonating the genetic testing company 23andMe to lure users into downloading malicious models. These models could steal AWS credentials and were reported and removed only after being downloaded thousands of times.
In another instance, a vulnerability was found in the ChatGPT API: researchers identified an anomalous code path in which a single HTTP request generated two responses. They warned that, if left unresolved, the flaw could lead to data leaks, denial-of-service (DoS) attacks, and privilege escalation. Separately, vulnerabilities were discovered in ChatGPT plugins that could lead to account hijacking.
While open-source licenses and cloud computing are key drivers of AI development, they also increase security risks. Besides the security issues of AI models themselves, general infrastructure security problems, such as cloud configuration vulnerabilities or improper log monitoring, also pose significant threats.
AI models themselves are a new attack surface
Big Tech and AI startups have invested significant capital and manpower to develop AI models, but there is an increasing risk of them being stolen or reverse-engineered.
AI models may contain sensitive information, and if they fall into the hands of malicious actors, important secrets could be exposed. When an AI chatbot is trained on a company's internal information, there is a high risk that confidential information will be leaked in response to external prompts.
One of the most common model theft methods is the model extraction attack. An attacker abuses the model's API, repeatedly queries the black-box model (e.g., ChatGPT), and after gathering enough input-output pairs can train a substitute model that effectively reverse-engineers the original.
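To make the mechanics concrete, here is a minimal sketch of such an attack; the endpoint, API key, and probe inputs are hypothetical, and a real extraction attack would use thousands of carefully chosen queries.

```python
# Hypothetical sketch of a model extraction attack (endpoint, key, and probes are made up).
import requests

API_URL = "https://api.example.com/v1/classify"  # hypothetical black-box model API
API_KEY = "leaked-or-over-privileged-key"

def query_black_box(text: str) -> str:
    """Send one input to the target model and return its prediction."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"input": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["label"]

# 1. Harvest a training set from the victim model's own answers.
probe_inputs = ["probe sentence 1", "probe sentence 2"]  # in practice, thousands of diverse probes
harvested = [(x, query_black_box(x)) for x in probe_inputs]

# 2. Train a local surrogate model on the harvested (input, output) pairs.
#    With enough queries, the surrogate approximates the original model's behavior,
#    which is what "reverse-engineering a black-box model" amounts to in practice.
```

Rate limiting, query-pattern monitoring, and strict API authentication are the usual countermeasures against this harvesting step.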
Most AI systems operate on a cloud-based platform. The cloud provides scalable data storage and computing power necessary for operating AI models. As accessibility increases, the attack surface also expands. Attackers are more likely to exploit vulnerabilities like incorrect access permission settings.
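As one example of such a check, assuming model artifacts are stored in an AWS S3 bucket (the bucket name below is hypothetical), a periodic script can verify that the bucket is not left open to public access:

```python
# Minimal sketch: verify that an S3 bucket holding model artifacts blocks public access.
# The bucket name is hypothetical; the same idea applies to other cloud storage services.
import boto3
from botocore.exceptions import ClientError

def bucket_blocks_public_access(bucket: str) -> bool:
    s3 = boto3.client("s3")
    try:
        config = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    except ClientError:
        # No public-access-block configuration at all is itself a red flag.
        return False
    return all(config.get(key) for key in (
        "BlockPublicAcls", "IgnorePublicAcls", "BlockPublicPolicy", "RestrictPublicBuckets",
    ))

if not bucket_blocks_public_access("example-model-artifacts"):
    print("WARNING: model artifact bucket may be publicly accessible")
```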
Companies providing AI models typically offer services through client applications such as AI chatbots. If the API lets clients choose which model to use, attackers may try to abuse that parameter to reach private or internal models.
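A basic defense, sketched below with hypothetical model names, is to validate the requested model against a server-side allow list rather than trusting whatever the client sends:

```python
# Sketch of server-side allow-listing for a "model" parameter (model names are hypothetical).
ALLOWED_MODELS = {"assistant-small", "assistant-large"}   # models this client may use
PRIVATE_MODELS = {"internal-finance-model"}               # must never be reachable externally

def resolve_model(requested: str) -> str:
    """Return the model to serve, rejecting anything outside the allow list."""
    if requested not in ALLOWED_MODELS:
        raise PermissionError(f"model '{requested}' is not permitted for this client")
    return requested

# resolve_model("internal-finance-model") raises PermissionError, even though the model exists.
```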
Hackers can also use data poisoning techniques that trick AI models into learning from manipulated data, so that the model ends up applying the wrong security policy. This disrupts AI-based security systems and prevents them from detecting attacks.
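The toy sketch below illustrates the simplest form of this, label-flipping data poisoning, on a hypothetical intrusion-detection training set; a small fraction of attack samples are relabeled as benign before training, so the resulting detector misses them.

```python
# Toy illustration of label-flipping data poisoning (dataset and labels are hypothetical).
import random

# (features, label) pairs for a hypothetical intrusion detector: 1 = attack, 0 = benign
training_set = [([0.9, 0.7], 1), ([0.1, 0.2], 0), ([0.8, 0.9], 1), ([0.2, 0.1], 0)]

def poison_labels(dataset, flip_ratio=0.25, rng=random.Random(0)):
    """Relabel a fraction of attack samples as benign, as an attacker with write
    access to the training pipeline might do."""
    poisoned = []
    for features, label in dataset:
        if label == 1 and rng.random() < flip_ratio:
            label = 0  # this attack now looks like normal traffic to the model
        poisoned.append((features, label))
    return poisoned

poisoned_set = poison_labels(training_set)
# A detector trained on poisoned_set learns to treat some attack patterns as benign,
# which is why training data integrity and provenance controls matter.
```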
Security measures for AI adoption by enterprises
AI protection goes beyond simple functional security issues to include complex elements such as model protection, supply chain safety, and control of excessive autonomy.
Companies must prioritize security when developing or implementing AI. Continuous red team activities or proactive security testing are necessary to minimize threats.
When adopting AI, granting it excessive privileges increases security risk.
For example, what if an AI assistant could access every file on OneDrive just to summarize Microsoft Teams meeting notes? Such over-broad access greatly increases the likelihood that attackers could abuse the assistant to steal confidential data.
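One mitigation is to enforce least privilege in the application layer as well. The sketch below assumes a hypothetical assistant that only needs files from a dedicated meeting-notes folder and rejects every other path; real deployments should also narrow the OAuth scopes and API permissions granted to the assistant itself.

```python
# Sketch of a least-privilege file access check for an AI assistant.
# The folder layout is hypothetical; paths are normalized so ".." tricks cannot escape the root.
import posixpath
from pathlib import PurePosixPath

ALLOWED_ROOT = PurePosixPath("/drive/meeting-notes")  # the only folder the assistant needs

def assistant_may_read(path: str) -> bool:
    """Allow access only to files inside the meeting-notes folder."""
    candidate = PurePosixPath(posixpath.normpath(path))
    return ALLOWED_ROOT in candidate.parents

print(assistant_may_read("/drive/meeting-notes/2024-standup.docx"))  # True
print(assistant_may_read("/drive/finance/salaries.xlsx"))            # False
```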
Companies should assess generative AI security both at adoption and on a regular schedule, for example quarterly security tests of APIs and models. If open-source models are used, keep running malware detection and in-depth analysis on them. Do not overlook verifying the cloud environments, containers, and network configurations in which AI models operate.
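Open-source checkpoints deserve particular care because many are serialized with Python pickle, which can execute arbitrary code when loaded. The sketch below is a deliberately simplified pre-load check (the file name is hypothetical, and dedicated scanners go much further) that flags pickles importing suspicious modules:

```python
# Simplified malware pre-check for a pickle-serialized model file (file name is hypothetical).
# Pickle can run arbitrary code on load, so downloaded checkpoints should be scanned first.
import pickletools

SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "socket", "builtins"}

def suspicious_pickle_imports(path: str) -> list[str]:
    """Return GLOBAL opcodes in the pickle that reference risky modules."""
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL" and isinstance(arg, str):
            module = arg.split(" ", 1)[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(arg)
        elif opcode.name == "STACK_GLOBAL":
            findings.append("STACK_GLOBAL (import resolved at load time; inspect manually)")
    return findings

hits = suspicious_pickle_imports("downloaded_model.pkl")
if hits:
    print("Do not load this model:", hits)
```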
Security Methods
✔ Implement an AI Trust, Risk and Security Management (AI TRiSM) Framework
📍 Validate the data AI processes, and continuously evaluate the security of AI models
📍 Enhance features for detecting anomalies in AI model behavior (content anomaly detection; see the sketch after this list)
✔ Strengthen AI Model and API Security
📍 Apply encryption and access control to data processed by AI models
📍 Conduct API vulnerability analysis and security testing
✔ Strengthen Cloud-based AI Security
📍 Apply cloud security policies when AI models operate in the cloud
📍 Ensure thorough access management for data repositories used by AI
✔ Establish AI Usage Policies and Governance
📍 Clearly define how AI interacts with external systems
📍 Enhance security awareness training for AI users
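As a concrete, deliberately simple illustration of the content anomaly detection item above, the sketch below screens model responses for strings that should never appear in output, such as credential-like patterns; the patterns and the sample response are assumptions made for the example.

```python
# Minimal content anomaly check on model output (patterns chosen for illustration only).
import re

SUSPICIOUS_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),          # AWS access key IDs start with AKIA
    "Private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "Password assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def detect_output_anomalies(model_output: str) -> list[str]:
    """Return the names of suspicious patterns found in a model response."""
    return [name for name, pattern in SUSPICIOUS_PATTERNS.items() if pattern.search(model_output)]

response = "Sure! The deploy script uses password=hunter2 and key AKIAABCDEFGHIJKLMNOP."
findings = detect_output_anomalies(response)
if findings:
    # In production, block or redact the response and raise an alert instead of printing.
    print("Anomalous content detected:", findings)
```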