Evaluating and Securing Power Systems against Vulnerabilities Introduced by Large Language Models

Abstract

An exciting new direction for improving operational efficiency and decision-making is the use of Large Language Modeling (LLMs) to contemporary power systems. Nevertheless, there may be unanticipated security risks associated with this move. Using LLMs to power networks may pose certain risks, which this paper examines. It stresses the need of doing research and developing remedies immediately. It is a challenging but vital job to secure large language models in a power monitoring context. Through the implementation of thorough security measures, the promotion of a security-conscious culture, and the continuous monitoring of new threats while technologies, we may maximize the benefits of LLMs while minimizing their hazards. It is our duty as information security experts to pioneer this new field and make sure that our security protocols adapt to the increasing sophistication of our AI systems. Security flaws in LLM that allow rapid injection attacks are among the most critical ones. These types of attacks take advantage of LLMs' fundamental features by deliberately feeding them data that will cause them to operate in an unexpected way or leak private information. Many LLMs have seen extensive usage with the introduction of commercially accessible systems like ChatGPT. One crucial area of cybersecurity that is seeing a rise in the use of LLMs is power monitoring systems. It is critical to safeguard these systems against cyber-attacks since they are essential for the stability of society and the nation's energy supply. In order to keep these systems resilient and reliable, it is essential to detect unexpected vulnerabilities, especially zero-day attacks. One potential way to improve these detection capabilities is via LLMs. It integrates power-system standards and threat intelligence with traditional anomaly detection, LLM-assisted reasoning over code/configs/logs, and protocol-aware telemetry. In order to uncover undisclosed power monitoring system weaknesses, we used models with LSTM with GRU. Since these models are masters of sequential data analysis, they are ideal for this job. Sequential data includes things like sensor readings, system logs, and network traffic. By identifying unusual activity that differs from typical system operation, LSTMs and GRUs are able to discover new, "zero-day" vulnerabilities, in contrast to conventional security technologies that depend on predetermined attack signatures. Here, we built an LLM model using TinyLlama Chat 1.1. The LLM takes the processed packet data and the extracted context as input and outputs a user-friendly summary of the packet file. Through the use of machine learning models, the program provides a concise, well-organized, and straightforward overview of the network's operations.

Authors

  • Manpo Li Northeast Branch of State Grid Corporation of China, Shenyang 110180, Liaoning, China
  • Xuerui Yang Northeast Branch of State Grid Corporation of China, Shenyang 110180, Liaoning, China
  • Xiaochen Yang Northeast Branch of State Grid Corporation of China, Shenyang 110180, Liaoning, China
  • Shugui Zhang Sichuan Energy Internet Research InstituteTsinghua University, Chengdu 610218, Sichuan, China

DOI:

https://doi.org/10.31449/inf.v50i11.9604

Downloads

Published

04/23/2026

How to Cite

Li, M., Yang, X., Yang, X., & Zhang, S. (2026). Evaluating and Securing Power Systems against Vulnerabilities Introduced by Large Language Models. Informatica, 50(11). https://doi.org/10.31449/inf.v50i11.9604