Can Large Language Models Automate Phishing Warning Explanations? A Controlled Experiment on Effectiveness and User Perception
arXiv:2507.07916v2 Announce Type: replace
Abstract: Phishing has become a prominent risk in modern cybersecurity, often used to bypass technological defences by exploiting predictable human behaviour. Warning dialogues are a standard mitigation measure, but their static content and lack of explanatory clarity limit their effectiveness. In this paper, we report on our research assessing the capacity of Large Language Models (LLMs) to generate clear, concise, and scalable explanations for phishing warnings. We carried out a large-scale between-subjects user study (N = 750) comparing warning dialogues supplemented with manually crafted explanations against those supplemented with explanations generated by two LLMs, Claude 3.5 Sonnet and Llama 3.3 70B. We investigated two explanatory styles (feature-based and counterfactual) and their effects on behavioural metrics (click-through rate) and perceptual outcomes (e.g., trust, risk, clarity). The results provide empirical evidence that LLM-generated explanations achieve a level of protection statistically comparable to expert-crafted messages, effectively automating a high-cost task. While Claude 3.5 Sonnet showed a trend towards reducing click-through rates compared to the manual baselines, Llama 3.3, despite being perceived as clearer, did not yield the same behavioural benefits. Feature-based explanations were more effective for genuine phishing attempts, whereas counterfactual explanations reduced false-positive rates. Other variables, such as workload, gender, and prior familiarity with warning dialogues, significantly moderated the effectiveness of warnings. These results indicate that LLMs can be used to automatically generate explanations for phishing warnings, and that such solutions are scalable, adaptive, and consistent with human-centred values.
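The abstract does not reproduce the authors' prompts, so the following minimal Python sketch is only an illustration of how the two explanatory styles might be elicited from an LLM. The template wording, the `build_warning_prompt` helper, the example features, and the `call_llm` stub are all hypothetical assumptions, not the study's materials.

```python
# Illustrative sketch (not from the paper): generating the two explanation
# styles compared in the study. The prompt templates and helpers below are
# assumptions; `call_llm` is a stand-in for any chat-completion API
# (e.g. Claude 3.5 Sonnet or Llama 3.3 70B).

STYLE_TEMPLATES = {
    # Feature-based: point at concrete indicators detected in the message.
    "feature": (
        "You are writing the body of a phishing warning dialogue. "
        "In at most three short sentences, explain to a non-expert why this "
        "message is likely phishing, citing these detected features: "
        "{features}. Message: {message}"
    ),
    # Counterfactual: describe what would have to change for the message
    # to look legitimate.
    "counterfactual": (
        "You are writing the body of a phishing warning dialogue. "
        "In at most three short sentences, explain what this message would "
        "need to look like to be legitimate, contrasting it with these "
        "detected features: {features}. Message: {message}"
    ),
}

def build_warning_prompt(style: str, message: str, features: list[str]) -> str:
    """Fill the chosen style template with the flagged message and features."""
    return STYLE_TEMPLATES[style].format(
        features="; ".join(features), message=message
    )

def call_llm(prompt: str) -> str:
    """Stand-in: wire up your preferred LLM API here."""
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_warning_prompt(
        "feature",
        message="Your account is locked. Verify at http://paypa1-secure.example.com",
        features=[
            "look-alike domain (paypa1 vs paypal)",
            "urgency cue",
            "credential request",
        ],
    )
    print(prompt)  # explanation = call_llm(prompt)
```

Under this sketch, only the chosen template differs between conditions, so any behavioural difference between feature-based and counterfactual warnings can be attributed to the explanation style rather than to the generation pipeline.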