
AI Era and Breaches
Today, many companies hope to drive innovation through the adoption of Artificial Intelligence (AI). High expectations for the positive changes AI will bring, such as improved productivity and new business models, are natural. What is often overlooked, however, is that adopting AI also introduces new security risks.
As recent corporate breaches have shown, a paradigm shift that existing security systems struggle to address has already begun. The final investigation report on the breach at Company S, released on July 4th by the Ministry of Science and ICT, illustrates this change clearly.
According to the joint public-private investigation team, the breach exposed 25 types of information, including USIM identification data, amounting to approximately 9.82 GB and 26.96 million records. More than a simple personal data leak, this is a serious incident that undermines trust in services built on the nation's telecommunications network. More concerning still, the investigation found that the sophistication and stealth of the intrusion make such attacks difficult to address with existing inspection systems alone.
Cyber attacks are thus becoming more precise, and the costs they inflict are rising. According to IBM's Cost of a Data Breach (CODB) report, the global average cost of a data breach in 2023 was USD 4.45 million, a 15% increase over three years. Companies need to prepare quickly against these new security threats.
Advancement of Attacks in the AI Era
AI Attack Cases
The rise of generative AI has brought innovation to the corporate work environment, but it also exposes far more assets and connection points (such as APIs, models, and training data) to the outside than traditional IT infrastructure.
The most frequent incidents occur as employees use generative AI in their work, leaking trade secrets or mishandling customer information and exposing sensitive data. Inadvertently pasting a sensitive project report or customer data into an AI model for summarization or analysis risks that data leaking externally or being absorbed into the model's training set.
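As a minimal illustration of one common mitigation, the sketch below masks obvious identifiers before a prompt ever leaves the company. The regex patterns and the send_to_llm stub are hypothetical placeholders, not any specific product's API; a real deployment would use a proper DLP/PII engine.

```python
import re

# Hypothetical redaction patterns; a real deployment would use a
# dedicated DLP/PII engine rather than a handful of regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{2,3}-\d{3,4}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

def send_to_llm(prompt: str) -> None:
    # Stub standing in for whichever generative-AI API is in use.
    print("outbound prompt:", redact(prompt))

send_to_llm("Contact kim@example.com or 010-1234-5678 about the Q3 report.")
```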
Moreover, a company's own AI-based security systems are becoming attack targets. Attackers can manipulate communication between AI components or feed inputs that confuse a security system's AI, potentially disabling normal defense functions. In other words, AI's learning ability and autonomy can themselves be misused to maximize the scale of a breach.
Ultimately, AI is a double-edged sword: a powerful tool for companies and a dangerous liability at the same time. In such an environment, even a minor exposure can have catastrophic consequences, so businesses must recognize the attack surface that expands alongside their AI adoption.
3 Reasons AI Environments Complicate Security
Compared with existing IT environments, AI environments are far more complex and vulnerable for the following structural reasons.
① Complex Connection Structure Between Assets
The scope of management, spanning AI models, APIs, cloud environments, various data stores, and the numerous connection points between them, is far broader than before. This complexity makes it difficult for security teams to understand and manage vulnerabilities across the system as a whole.
② Omitted Security Checks
Many companies race to adopt AI in step with its rapid technological advancement. In the rush to implement features at the Proof of Concept (PoC) stage and move them into production, sufficient security checks are often omitted or deprioritized. As features are added quickly, basic authentication and access-control settings can be left inadequate and security patches skipped, violating fundamental security principles and leaving serious vulnerabilities.
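As a rough illustration, a check like the following can catch one of the most common PoC leftovers: an endpoint that answers without any credentials. The endpoint list is a hypothetical example, not a real service.

```python
import urllib.request
import urllib.error

# Hypothetical internal endpoints that should all require authentication.
ENDPOINTS = [
    "http://ai-poc.internal.example/api/models",
    "http://ai-poc.internal.example/api/predict",
]

for url in ENDPOINTS:
    try:
        # Deliberately send no credentials; 401/403 is the healthy outcome.
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"WARNING: {url} answered {resp.status} without auth")
    except urllib.error.HTTPError as err:
        if err.code in (401, 403):
            print(f"OK: {url} rejects unauthenticated requests ({err.code})")
        else:
            print(f"CHECK: {url} returned {err.code}")
    except urllib.error.URLError as err:
        print(f"UNREACHABLE: {url} ({err.reason})")
```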
③ Increase in Security Blind Spots
AI systems contain far more components than traditional IT infrastructure, and the assets added through AI adoption are easily missed by existing security systems. Traditional inspections that focus only on the production environment capture just a portion of externally exposed assets, while development environments, staging systems, and automatically generated files sit in the blind spot. Moreover, because AI teams frequently test external APIs and open-source tools, it can be hard even to ascertain who is using what. One lightweight way to start shrinking this blind spot is sketched below.
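A hedged example of such discovery: querying the public crt.sh certificate-transparency search for a domain and flagging hostnames that look like development or staging systems. example.com and the keyword list are placeholders; real asset discovery would combine several sources.

```python
import json
import urllib.parse
import urllib.request

DOMAIN = "example.com"  # placeholder; substitute your own domain
SUSPECT_KEYWORDS = ("dev", "staging", "test", "poc")  # illustrative only

# crt.sh exposes issued-certificate records as JSON for a domain query.
url = f"https://crt.sh/?q={urllib.parse.quote('%.' + DOMAIN)}&output=json"
with urllib.request.urlopen(url, timeout=30) as resp:
    records = json.load(resp)

hostnames = set()
for record in records:
    # name_value may hold several newline-separated hostnames.
    hostnames.update(record.get("name_value", "").splitlines())

for host in sorted(hostnames):
    if any(key in host for key in SUSPECT_KEYWORDS):
        print("possible blind spot:", host)
```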
Thus, while AI environments excel in flexibility and scalability, their fragmented and ephemeral assets make security control difficult and give attackers openings. Beyond merely protecting systems, the first step in AI security is accurately identifying and managing what assets the company holds and where they are exposed.
Basic Strategies Against AI Hacking
Maintaining the Attacker's Perspective for Incident Prevention
Many companies believe they are secure because they run inspections once or twice a year and deploy security solutions such as SIEM and EDR. Attackers, however, probe for gaps 365 days a year. From malware infection to privilege escalation, internal penetration, and data exfiltration, the entire process can progress stealthily over months or even years.
Ultimately, what matters is whether you can respond proactively before a vulnerability is actually exploited. Making that possible in practice requires not merely smart technology but a persistent response process: a strategy that continuously analyzes systems from the attacker's perspective, penetrates them directly to uncover blind spots, and strengthens security accordingly.
Key Strategies to Prevent AI Hacking
Comprehensive Endpoint and Asset Detection
Continuous security improvement starts with an integrated review of existing assets: knowing exactly how internal assets are structured and what threats they may be exposed to.
Accurate Asset Identification: As AI adoption multiplies assets, security teams must decide where to look first. After identifying assets by IP and domain, filter genuine threats through a careful assessment of each asset's reliability, so that resources concentrate on real risks.
Vulnerable Asset Map Construction: Visually map the connections between assets to predict how risk could spread from a vulnerable asset; a minimal sketch of such a map follows this list.
Initial Threat Modeling: Analyze the structure and data flows of AI systems before deployment and derive potential threat scenarios. This goes beyond understanding the system; it predicts where penetration is possible and forms the basis for a response.
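A minimal sketch of such an asset map, assuming a hand-maintained edge list rather than any particular discovery tool: it models assets and their connections as a directed graph and lists everything reachable from a compromised node.

```python
from collections import defaultdict, deque

# Hypothetical asset inventory: an edge means "can reach / feeds data to".
EDGES = [
    ("public-api", "model-server"),
    ("model-server", "feature-store"),
    ("feature-store", "customer-db"),
    ("staging-api", "model-server"),
]

graph = defaultdict(list)
for src, dst in EDGES:
    graph[src].append(dst)

def blast_radius(start: str) -> set[str]:
    """Return every asset reachable from a compromised starting asset."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# If the forgotten staging API is breached, what else is at risk?
print(blast_radius("staging-api"))  # {'model-server', 'feature-store', 'customer-db'}
```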
Visualization of Vulnerabilities and Issue Response
When a vulnerability is found inside the company, response slows and threats accumulate if security staff do not all learn of it at the same time. Technical fixes matter, but without a clear record of who took what action and when, the same issues recur or get overlooked.
AI-Specific Vulnerability Identification: Check for realistic threats such as authentication bypass, session hijacking, and malicious data injection into models, uncovering hidden high-risk vulnerabilities. Classify discovered vulnerabilities by risk level (critical, high, medium, low) to set priorities and drive response, as in the sketch after this list.
Status Sharing Report: Share reports on the most vulnerable assets, frequently found vulnerability types, and similar metrics to inform prioritization and response.
Establish an Internal Communication System: Relevant staff should share security issues in real time, accumulate history, and maintain a channel that keeps incidents from slipping through the cracks.
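A hedged sketch of that triage step, with made-up findings and a simple severity ordering; a real program would pull findings from a scanner and score them with CVSS or similar.

```python
from dataclasses import dataclass

SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Finding:
    asset: str
    issue: str
    severity: str  # one of SEVERITY_ORDER's keys

# Hypothetical findings of the AI-specific kinds named above.
findings = [
    Finding("model-api", "authentication bypass on admin route", "critical"),
    Finding("chat-frontend", "session token not rotated", "high"),
    Finding("training-pipeline", "unvalidated external dataset (poisoning risk)", "high"),
    Finding("docs-site", "verbose error messages", "low"),
]

# Order the queue so critical items surface first in the shared report.
for f in sorted(findings, key=lambda f: SEVERITY_ORDER[f.severity]):
    print(f"[{f.severity.upper():8}] {f.asset}: {f.issue}")
```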
Conduct Penetration Testing from an Attacker's Perspective
Automated vulnerability scanning and internal audits can miss complex attack scenarios. Social engineering, insider threats, and multi-stage evasive attacks in particular are hard to counter with fragmented, point-in-time checks. What is needed is to view the system as a hacker does and realistically simulate the vulnerable paths.
White Hat Hacker-Based Red Team Testing: Run simulated attacks that replicate the techniques real intruders use, testing the entire chain from external penetration → privilege escalation → internal spread → data exfiltration as if it were a live incident.
Scenario-Based Evaluation: Use the MITRE ATT&CK framework to map the attack tactics and techniques relevant to AI systems and design custom scenarios tailored to the corporate environment; a minimal mapping appears after this list.
Develop Response Strategies: Based on test results, improve not only individual vulnerability fixes but also the prevention and response system so the same issues do not recur.
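As a minimal sketch of scenario design, the mapping below pairs each stage of the kill chain described above with a plausible ATT&CK technique ID. The technique choices are illustrative; a real exercise would select them from the framework for the specific environment.

```python
# Illustrative mapping of the red-team chain above onto MITRE ATT&CK
# technique IDs. These particular picks are examples, not a prescription.
SCENARIO = [
    ("external penetration", "T1566", "Phishing"),
    ("initial access",       "T1078", "Valid Accounts"),
    ("privilege escalation", "T1068", "Exploitation for Privilege Escalation"),
    ("internal spread",      "T1021", "Remote Services"),
    ("data exfiltration",    "T1041", "Exfiltration Over C2 Channel"),
]

for stage, tech_id, tech_name in SCENARIO:
    print(f"{stage:22} -> {tech_id} ({tech_name})")
```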
The more advanced IT environment that AI brings demands smarter security.
Preventing further breaches requires a system that looks through the attacker's eyes and continuously checks and improves.
It is time to begin proactive security that fits the AI era.