GateBreaker: Gate-Guided Attacks on Mixture-of-Expert LLMs

Lichao Wu, Sasha Behrouzi, Mohamadreza Rostami, Stjepan Picek, Ahmad-Reza Sadeghi

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

Abstract

Mixture-of-Experts (MoE) architectures have advanced the scaling of Large Language Models (LLMs) by activating only a sparse subset of parameters per input, enabling state-of-the-art performance at reduced computational cost. As these models are increasingly deployed in critical domains, understanding and strengthening their alignment mechanisms is essential to prevent harmful outputs. However, existing LLM safety research has focused almost exclusively on dense architectures, leaving the unique safety properties of MoEs largely unexamined. The modular, sparsely activated design of MoEs suggests that safety mechanisms may operate differently than in dense models, raising questions about their robustness.
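For readers unfamiliar with sparse routing, the following minimal PyTorch sketch (illustrative only, not drawn from the paper; all names and sizes are arbitrary) shows how a top-k gate scores experts per token, runs only the selected experts, and mixes their outputs with renormalized gate weights.

```python
# Minimal top-k MoE routing sketch (illustrative; names/sizes are arbitrary).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=128, n_experts=8, k=2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):                       # x: (tokens, d_model)
        logits = self.gate(x)                   # (tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # renormalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):              # sparse dispatch: k experts per token
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot+1] * expert(x[mask])
        return out, idx                         # idx exposes per-token routing decisions

tokens = torch.randn(4, 64)
mixed, routed = TopKMoE()(tokens)
print(routed)  # which experts each token was routed to
```

The per-token routing indices returned here are exactly the signal a gate-guided attack can observe and exploit.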

In this paper, we present GateBreaker, the first training-free, lightweight, and architecture-agnostic attack framework that compromises the safety alignment of modern MoE LLMs at inference time. GateBreaker operates in three stages: (i) gate-level profiling, which identifies safety experts that are disproportionately routed to on harmful inputs, (ii) expert-level localization, which pinpoints the safety structure within those experts, and (iii) targeted safety removal, which disables the identified structure to break the safety alignment. Our study shows that MoE safety concentrates within a small subset of neurons coordinated by sparse routing. Selectively disabling these neurons, approximately 3% of the neurons in the targeted expert layers, raises the average attack success rate (ASR) from 7.4% to 64.9% across eight of the latest aligned MoE LLMs, with limited utility degradation. These safety neurons transfer across models within the same family, raising ASR from 17.9% to 67.7% in a one-shot transfer attack. Furthermore, GateBreaker generalizes to five MoE vision-language models (VLMs), reaching 60.9% ASR on unsafe image inputs.
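A hedged sketch of what stage (i), gate-level profiling, could look like in practice follows. It assumes a PyTorch MoE whose router modules can be located by name and return per-token routing logits; `model`, `tokenizer`, the prompt lists, and the "gate" name fragment are placeholders, not the paper's actual interface.

```python
# Assumption-laden sketch of gate-level profiling (not the paper's code).
import torch
from collections import Counter

def profile_routing(model, tokenizer, prompts, gate_fragment="gate", top_k=2):
    """Count how often each (layer, expert) pair is selected on `prompts`."""
    counts, hooks = Counter(), []

    def make_hook(layer_name):
        def hook(module, inputs, output):
            # Assumption: `output` holds router logits of shape (tokens, n_experts).
            experts = output.topk(top_k, dim=-1).indices.flatten().tolist()
            for e in experts:
                counts[(layer_name, e)] += 1
        return hook

    for name, module in model.named_modules():
        if gate_fragment in name:  # router module naming is model-specific
            hooks.append(module.register_forward_hook(make_hook(name)))
    with torch.no_grad():
        for p in prompts:
            model(**tokenizer(p, return_tensors="pt"))
    for h in hooks:
        h.remove()
    return counts

# Illustrative use (placeholders): experts routed far more often on harmful
# than on benign prompts are candidate "safety experts"; stages (ii)-(iii)
# then localize and disable ~3% of neurons inside them.
# harmful = profile_routing(model, tokenizer, harmful_prompts)
# benign  = profile_routing(model, tokenizer, benign_prompts)
# score = {k: harmful[k] - benign.get(k, 0) for k in harmful}
```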
Original language: English
Title of host publication: 35th USENIX Security Symposium (USENIX Security 26)
Publisher: USENIX Association
Publication status: Accepted/In press - 25 Dec 2025
Event: 35th USENIX Security Symposium - Baltimore, MD, United States
Duration: 12 Aug 2026 – 14 Aug 2026
https://www.usenix.org/conference/usenixsecurity26

Publication series

Name: USENIX SECURITY SYMPOSIUM

Conference

Conference: 35th USENIX Security Symposium
Country/Territory: United States
Period: 12/08/26 – 14/08/26
Internet address: https://www.usenix.org/conference/usenixsecurity26
