Enhancing Test Suite Optimization in Software Engineering through a Hybrid Whale Optimization and LSTM Approach

Shuqing Qiao

Abstract


Metaheuristic algorithms are commonly used to solve complicated optimization problems, but they frequently suffer from drawbacks such as premature convergence, poor exploration in the early phases of the search, and insufficient adaptation of search operators. To address these issues, this study presents a novel hybrid approach, the Whale Optimization Algorithm with Long Short-Term Memory (WOA-LSTM), which is intended to improve exploration and adaptive learning. The primary goal is to overcome the performance limits of traditional WOA by providing a more robust search strategy for constrained combinatorial test-generation tasks. The proposed WOA-LSTM combines the standard WOA with an LSTM-based probability table that dynamically selects search operators. Data normalization with Z-score standardization provided stable feature scaling for effective LSTM training. In the early iteration stages, low-performing operators are given a chance to re-enter the search, preventing premature stagnation. Furthermore, the model adaptively selects among the three WOA mechanisms (random generation, spiral shape, and shrinkage) based on their historical effectiveness, allowing for a balance of exploration and exploitation. The experimental results show that WOA-LSTM outperforms other algorithms, with an accuracy of 99.3% and a recall of 99.1%. Comparative examination shows that the suggested method outperforms traditional WOA while remaining competitive with other sophisticated metaheuristic algorithms. Furthermore, statistical validation with t-tests demonstrates WOA-LSTM's considerable superiority over existing techniques, ensuring the proposed model's performance is reliable, resilient, and generalizable across several experimental runs. Finally, WOA-LSTM presents a highly effective optimization framework that overcomes WOA's drawbacks while also pointing the way forward for future optimization research.
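The abstract describes two mechanisms that can be sketched in code: Z-score standardization for stable feature scaling, and a probability table over the three WOA search operators that is updated from each operator's historical effectiveness, with a probability floor so that low-performing operators can re-enter the search early on. The sketch below is a minimal illustration of these ideas, not the authors' implementation; all names (`OperatorTable`, `z_score`, the `floor` parameter) are hypothetical, and the LSTM that drives the paper's probability table is replaced here by a simple score-based update for brevity.

```python
import random


def z_score(values):
    """Z-score standardization: (x - mean) / std, for stable feature scaling.

    Illustrative helper, not the paper's code."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((x - mean) ** 2 for x in values) / n) ** 0.5
    return [(x - mean) / std for x in values]


class OperatorTable:
    """Hypothetical probability table over the three WOA mechanisms
    (random generation, spiral shape, shrinkage).  Each operator keeps
    a minimum selection probability (`floor`) so that early low
    performers can re-enter the search instead of being starved out."""

    OPERATORS = ("random_generation", "spiral", "shrinkage")

    def __init__(self, floor=0.1):
        self.floor = floor
        self.scores = {op: 1.0 for op in self.OPERATORS}  # historical effectiveness

    def probabilities(self):
        total = sum(self.scores.values())
        probs = {op: s / total for op, s in self.scores.items()}
        # Apply the re-entry floor, then renormalize so probabilities sum to 1.
        probs = {op: max(p, self.floor) for op, p in probs.items()}
        z = sum(probs.values())
        return {op: p / z for op, p in probs.items()}

    def select(self, rng=random):
        """Roulette-wheel selection of the next search operator."""
        r = rng.random()
        cum = 0.0
        for op, p in self.probabilities().items():
            cum += p
            if r <= cum:
                return op
        return self.OPERATORS[-1]

    def reward(self, op, improvement):
        """Credit an operator with the fitness improvement it produced."""
        self.scores[op] += max(improvement, 0.0)
```

In the paper's full method, the score update would be replaced by an LSTM that predicts operator probabilities from the search history; the floor-and-renormalize step mirrors the stated idea of letting weak operators back into the search during early iterations.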



DOI: https://doi.org/10.31449/inf.v49i19.9646

This work is licensed under a Creative Commons Attribution 3.0 License.