Addressing AI-Driven Threats to Critical Infrastructure

The advent of AI is not only transforming the nature of cybersecurity threats but also opening potential new opportunities for Taiwan.

With its critical role in the global supply chain and its international standing persistently challenged by a hostile authoritarian neighbor, Taiwan has become an increasingly frequent target of cyber attacks in recent years. Many of these assaults involve attempts to compromise critical infrastructure by malign state actors – principally China. 

A report issued by the National Security Bureau (NSB) in early January showed that cyber attacks on Taiwan’s Government Service Network, upon which both central and local government agencies depend, reached a daily average of 2.4 million in 2024 – double the figure recorded in 2023. Most of these attacks were timed to coincide with Beijing’s military drills off the coast of Taiwan.

Many of the attempted breaches were Distributed Denial-of-Service (DDoS) attacks, which flood a target with traffic in an effort to knock it offline – in this case, the websites of Taiwan’s government departments. The main targets were the telecommunications, defense, and transportation sectors, the report concluded.
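A DDoS flood is typically blunted at the network edge by, among other measures, per-client rate limiting. The sketch below is a minimal sliding-window limiter of the kind such defenses build on; the class name, limits, and window size are illustrative assumptions, not any particular vendor’s implementation.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `max_requests` per `window` seconds for each client IP."""

    def __init__(self, max_requests=100, window=60.0):
        self.max_requests = max_requests
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Discard timestamps that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over the limit: drop or challenge the request
        q.append(now)
        return True
```

In a real deployment this logic runs in scrubbing appliances or CDN edge nodes, keyed on more than the source IP, since large botnets spread traffic across many addresses precisely to stay under per-client thresholds.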

The NSB findings followed a November 2024 report showing that Taiwan experienced the highest number of cyberattacks among Asia-Pacific countries during the third quarter of the year. The data, which indicated that in addition to government agencies and the military, private sector manufacturing and hardware vendors were also prominent targets, was provided by American-Israeli cybersecurity provider Check Point. Some sources go further, suggesting that Taiwan has suffered more cyber attacks than any other country.

A further disturbing element in the equation has been an uptick in the use of AI to facilitate cyber attacks. A 2024 research collaboration from Microsoft and OpenAI revealed that AI has helped threat actors increase the speed and scale of their activities.

At present, the impact is mainly seen in generative AI, which is used to enhance existing methods and tools employed by hackers, including social engineering activities and deepfakes. Regarding the former, the Microsoft-OpenAI analysis of Large Language Models (LLMs) such as ChatGPT and Gemini “revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape.”

Aside from LLMs, which can be leveraged to craft more convincing scripts and refine targeting techniques in phishing email scams, AI-driven natural language generation (NLG) tools are increasingly being used to mimic human communication, making fraudulent messages more persuasive and harder to detect. A further use of AI in social engineering is to enable behavioral analysis and psychological profiling through application programming interfaces (APIs) that analyze a user’s digital footprint.
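Part of what makes AI-polished lures harder to detect is that traditional email defenses lean on simple keyword and link heuristics. The toy scorer below illustrates that kind of rule-based check; the indicator lists, weights, and function name are illustrative assumptions, not a production detector.

```python
import re

# Illustrative indicator lists -- a real filter would combine many more
# signals, typically with trained models rather than fixed keywords.
URGENCY_PHRASES = ("verify your account", "immediately", "suspended", "act now")
SUSPICIOUS_TLDS = (".zip", ".top", ".xyz")

def phishing_score(message: str) -> int:
    """Crude additive score: higher means more phishing-like."""
    text = message.lower()
    score = 0
    # Urgency language is a classic social-engineering tell.
    score += sum(2 for phrase in URGENCY_PHRASES if phrase in text)
    # Links whose host ends in a commonly abused top-level domain.
    for host in re.findall(r"https?://([^\s/]+)", text):
        if host.endswith(SUSPICIOUS_TLDS):
            score += 3
    # Requests for secrets add a little more weight.
    if "password" in text or "credential" in text:
        score += 1
    return score
```

Fluent NLG output can avoid the stock urgency phrases entirely while carrying the same payload, which is why such static rules degrade as attackers adopt generative tools.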

While the Microsoft and OpenAI investigation did not turn up “particularly novel or unique AI-enabled attack or abuse techniques,” the report noted the necessity for continued scrutiny of this rapidly developing field.

For most observers, LLMs and related technology, which rely on machine learning and neural networks, will soon be outmoded as an effective tool for cyber attackers and, in turn, cybersecurity defenders. Instead, the focus will increasingly be on artificial general intelligence (AGI).

“LLMs are good for efficient pattern recognition – the very basics of AI,” says Jordi Vallejo, head of engineering at Inpixon, a U.S.-headquartered company specializing in Real-Time Location Systems (RTLS) and Industrial Internet of Things (IIoT). “But this is just 1% of what AI will be in the future.”

Among the challenges that AGI seeks to resolve are a lack of understanding by LLMs of the data they are generating and organizing, an inability to generalize between different domains, poor long-term memory (meaning they fail to discover connections with older data), and a lack of causal reasoning.

In this sense, LLMs “lack some of the basic elements that we might consider necessary to call something AI,” says Inpixon’s Vallejo. Another issue is “an absence of self-improvement.” Vallejo observes that large language models such as Meta’s Llama have already read the entire contents of the internet. “What we really need to get to is AGI that directly interacts with embodiments and improves suggestions from its own observations.”

Currently based in Taipei, Vallejo is actively analyzing projects to establish AI data centers in Australia and Paraguay, aiming to later replicate the model elsewhere. The centers are equipped to handle the intense computational demands of AI workloads. In both countries, state support plays a role. 

“All governments are very eager to have their own AI facilities,” says Vallejo. “Some of this is hype, but they want to train their models inside the country because some of this is critical information.”

With the threats that it currently faces and its advantages in hardware and electronics, Taiwan should consider following such a path, says Vallejo. “Instead of just selling the electronics that are used in data centers, they could have [data centers] here in Taiwan, providing AI services for other countries,” he says.

Having raised this possibility in discussions with Taiwanese officials, Vallejo also notes that sustainable solutions such as geothermal power could be harnessed to help provide the required energy. The computing power of the centers could also be sold to third parties. Acknowledging the risk to the return on investment (ROI) of such an undertaking, he suggests that a subscription model is a good option.

“If the amount of services you can provide in a few years is just 10% [of capacity] and your investment becomes obsolete, then you need to be careful,” he says. “But if Nvidia, for example, were to lease the equipment, so you get everything renewed every few years, it would help both the company and centers calculate ROI.”

Most importantly, Taiwan would be in control of the language models being used, Vallejo says, ensuring greater cybersecurity for the country. “For government or military data, you obviously can’t outsource it,” he says. However, “with Nvidia and TSMC doing all the machines and hardware required, you can always be on the edge.”

Others see obstacles to such developments. “If you want to make Taiwan a data hub, the question is ‘Why?’” says Benson Wu (吳明蔚), CEO and cofounder of Cycraft, a cybersecurity company that focuses on integrating autonomous AI technology. “And if you’re asking that, then they [who create the data] will reject such an idea.”

For AI organizations such as OpenAI, which is best known for the ChatGPT chatbot, there will be little incentive for Taiwan to host their data centers. “Even if it were possible, you couldn’t touch their data,” says Bright Wu, a cybersecurity committee member at SEMI Taiwan – the local branch of the leading international microelectronics industry association. “So, you wouldn’t learn anything from it.”

While this statement is true, Vallejo argues that data is not the only thing to be gleaned from running such centers. “You would also learn about the type of resources and learning curve of newer models, which is key for LLMs, and changes of architecture required for new models,” he says.

From the perspective of supply chains, hardware, and upstream key components, Taiwan is the ideal location, says Julian Chu, an industry consultant and director at the semi-governmental Market Intelligence & Consulting Institute (MIC). “You always want to get data closer to the center or vice-versa,” says Chu. However, there are barriers to this, he says, such as the European Union’s General Data Protection Regulation, which restricts transfers of EU personal data outside the union.

Taiwan’s precarious geopolitical status is also a factor, says Chu. “On top of cyberattacks, we need to consider physical warfare,” he says. “This is an obvious concern to other countries.”  

Either way, with its location “on the cyber frontline,” as SEMI’s Wu calls it, most observers agree that Taiwan can play a valuable role as a host and provider of AI data-derived cyber threat intelligence. 

“As this data comes from here [the cyber frontline], it makes sense for other countries to leverage that,” says Wu. “In a sense, we manufacture cyberthreat intelligence – it’s one of our natural resources.”

Cycraft’s solutions include a threat exposure management platform called XCockpit, which helps reduce clients’ attack surface – the number of vulnerable points, or attack vectors, that can be exploited by unauthorized users. The company’s technology has been employed by government agencies, including the Ministry of Economic Affairs (MOEA), following high-profile data breaches and system failures for which the MOEA was the competent authority. One such incident occurred in 2020 when the state-owned CPC Corporation, Taiwan (the former Chinese Petroleum Corp.) experienced a major cyber attack that compromised over 7,500 computers.

Cycraft has helped “empower auditors to be scalable in assessing compliance gaps, providing compliance recommendations and detecting early warnings of known exploited vulnerabilities (KEVs),” says Wu. 

Rather than promoting a single product or a one-size-fits-all approach, Wu emphasizes the importance of a constantly monitored and updated checklist of KEVs. While some of this can be “offloaded to current AI,” next-generation tech will inevitably be required for novel threats.
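A KEV checklist of the kind Wu describes lends itself to partial automation. The sketch below cross-references a scanner-produced asset inventory against a KEV list; the record format is simplified from the sort of fields in CISA’s public KEV catalog, and the inventory, hostnames, and field names are illustrative assumptions.

```python
# Simplified KEV records -- the real catalog is a JSON feed with many more
# fields (vendor, due date, required action, and so on).
kev_catalog = [
    {"cveID": "CVE-2021-44228", "product": "Log4j"},
    {"cveID": "CVE-2023-4966", "product": "NetScaler"},
]

# Hypothetical asset inventory: host -> CVEs reported by a vulnerability scanner.
inventory = {
    "web-01": ["CVE-2021-44228", "CVE-2020-0601"],
    "db-01": ["CVE-2019-0708"],
}

def kev_exposures(inventory, kev_catalog):
    """Return {host: [CVE, ...]} for vulnerabilities that appear on the KEV list."""
    kev_ids = {entry["cveID"] for entry in kev_catalog}
    return {
        host: sorted(set(cves) & kev_ids)
        for host, cves in inventory.items()
        if set(cves) & kev_ids
    }
```

The hard part in practice is not this join but keeping both sides current: the catalog changes as new exploitation is confirmed, and the inventory changes with every deployed device – which is where Wu sees AI helping with scale.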

“It’s a super-long list, whether it’s hardware, software, people, or user accounts,” says Wu. “The critical infrastructure is a very complex environment because the whole nation is using it.”

Wu highlights eight areas identified in the Executive Yuan’s Guidance on National Critical Infrastructure Security Defense: energy, water resources, telecommunications, transportation, banking and finance, emergency aid and hospitals, central and local governments, and high-tech parks.

In terms of cybersecurity, the physical devices, stations, satellites, and cables of telecommunications operators are key elements needing protection, says Wu. However, it doesn’t stop there. “What about the facilities that are being distributed to end-user sites?” he asks. “In your home, you have a home router that interacts with the telco (telecommunications) provider. Who takes care of that? The answer is nobody.”

With so many points that nobody is responsible for, adversaries can easily find and target weak links, says Wu. The only way to combat this shortcoming is to regularly review cyber assets, he says, and that’s where AI comes in. “It helps you do the things on your list that you only have 24 hours in the day and limited people for,” Wu says. “But for AI to work, it needs to know what you want to do.”

More fundamentally, says Wu, AI needs to know that you want to do anything at all. He suggests that far too many telco professionals maintain a reactive stance, based on the mistaken belief that external signs of a system’s health obviate the need for rigorous monitoring. 

“If the boss doesn’t think there’s anything wrong, there’s no need for AI,” says Wu. “He might say: ‘It’s not broken, there’s no alert, it’s been up and running for 30 years already. Why would we think there’s malware in that station?’ If this mindset doesn’t change, then there’s no need for AI.”

The need for a paradigm shift is a recurring theme in conversations with AI analysts and industry professionals. Wu affirms the need for vigilance across the board. He points out that operators are always a step behind hackers in identifying vulnerabilities. With telcos paying little attention to AI, Wu warns, this gap is only set to widen. Part of the problem is a reactive attitude by the government, based on an assumption that all is well.

“If something goes wrong, they blame and fine the operators,” says Wu. “But when we talk about resilience, it’s not only about protection, but how to recover and mitigate losses as soon as possible.”

He cites the release of the Cybersecurity Framework 2.0 by the National Institute of Standards and Technology in the United States last year as an example of the change in focus. As the first update to the framework in a decade, the CSF 2.0 has expanded beyond its original remit of critical infrastructure to include all organizations, with an added emphasis on governance. 

In contrast, despite minor amendments in 2023, Taiwan’s Cyber Security Management Act still does not cover private enterprises. Indeed, there are still no formalized cybersecurity requirements for nongovernmental entities. Likewise, for all the talk of building resilience, the 2025-2028 national science and technology development plan, which will be based on discussions at the 12th National Science and Technology Conference held in December 2024, “still only covers cyber defense technology,” Wu notes. “They are still looking at how to establish a risk management framework and principles.”

MIC’s Chu also stresses the need to learn from and align with international norms. He cites ISO 42001 (the world’s first AI management system standard, which was introduced in 2023), the EU’s AI Act, and various federal and state guidelines in the United States as influences on the draft of an AI Basic Act by the National Science and Technology Council last year. However, he advocates an approach that recognizes Taiwan’s unique circumstances.  

“We’re talking about AI sovereignty,” says Chu. “We need our own culture, our own language, and our own say in the AI era.”