XploitSQL: Advancing Adversarial SQL Injection Attack Generation with Language Models and Reinforcement Learning
Abstract
SQL injection (SQLi) compromises database-driven applications by allowing attackers to insert malicious SQL commands through input fields, potentially leading to unauthorized access, data manipulation, or full system compromise. In recent years, alongside the development of various rule-based Web Application Firewalls (WAFs) aimed at mitigating SQLi attacks, the use of machine learning and deep learning techniques to address this problem has also grown markedly. Despite substantial progress in these studies, detecting and mitigating SQLi attacks remains a significant challenge, and a key factor limiting the breadth of existing SQLi detection solutions is the absence of a comprehensive testing methodology. In this work, we introduce XploitSQL, an approach that advances adversarial SQL injection generation by combining language models with reinforcement learning. Our model is trained to produce evasive SQLi samples, which can be used to harden SQLi detection models and to enable more comprehensive detection strategies. To assess the efficacy of the proposed method, we evaluated state-of-the-art SQL injection detection models in conjunction with commercially available web-based firewalls. Across all tested detection models, detection rates declined when faced with the evasive samples generated by XploitSQL. Furthermore, our model outperforms existing methods for generating attack samples. © 2024 ACM.
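As a minimal illustration of the kind of evasion the abstract refers to (not the paper's actual method), the sketch below shows a hypothetical rule-based filter, of the sort a naive WAF signature might use, being bypassed by a trivially obfuscated payload; the rule, payloads, and function names are invented for this example:

```python
import re

# Hypothetical toy rule: block inputs containing the literal token
# "UNION SELECT" separated by whitespace (case-insensitive).
NAIVE_RULE = re.compile(r"UNION\s+SELECT", re.IGNORECASE)

def naive_waf_blocks(payload: str) -> bool:
    """Return True if the toy rule-based filter would block this input."""
    return bool(NAIVE_RULE.search(payload))

# A plain injection payload is caught by the rule...
plain = "1' UNION SELECT username, password FROM users--"

# ...but inline-comment obfuscation slips past it: many SQL parsers treat
# /**/ as whitespace, while the regex above only matches \s characters.
evasive = "1' UNION/**/SELECT username, password FROM users--"

print(naive_waf_blocks(plain))    # True  (blocked)
print(naive_waf_blocks(evasive))  # False (evades the toy rule)
```

Real WAFs use far richer rule sets, which is why the paper resorts to learned generation rather than hand-crafted transformations like this one.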

