H-Neurons: The Sparse Circuitry Behind LLM Hallucinations
In December 2025, a team of researchers from the Institute for Artificial Intelligence at Tsinghua published a finding that reframes how we understand hallucinations in large language models. The paper demonstrates that an exceptionally sparse subset of neurons reliably predicts when an LLM will hallucinate: less than 0.1% of total neurons, and in Mistral-7B between 0.01‰ and 0.35‰. The implication is immediate: hallucinations are not uniformly distributed statistical noise — they are a localizable and intervenable phenomenon.
"A remarkably sparse subset of neurons — less than 0.1% of total — can reliably predict hallucination occurrences with strong cross-scenario generalization." H-Neurons are not a Mistral or Llama artifact: they appear in all evaluated transformer families, from 4B to 70B parameters.
What Are H-Neurons and How Are They Identified
The paper defines H-Neurons as neurons in transformer feedforward networks (FFN) whose activation systematically predicts the occurrence of hallucinations. Identification combines three stages: construction of a deterministic dataset, a normalized contribution metric (CETT), and sparse classification via L1 logistic regression.
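Concretely, CETT scores each FFN neuron by its share of the layer's output norm. The following formulation is inferred from the implementation sketch later in this article; the paper's exact definition may differ in detail:

$$\mathrm{CETT}_i = \frac{\lVert h_i \, w_i^{\mathrm{out}} \rVert_2}{\sum_{j=1}^{d_{\mathrm{ff}}} \lVert h_j \, w_j^{\mathrm{out}} \rVert_2}$$

where $h_i$ is neuron $i$'s post-activation value and $w_i^{\mathrm{out}}$ is its column of the down-projection matrix. Normalizing by the layer total makes scores comparable across layers and embedding widths.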
H-Neurons concentrate in the upper-middle layers — precisely where prior mechanistic interpretability research identifies "knowledge retrieval" and "fact composition" circuits.
H-Neurons exhibit statistically different CETT distributions between correct and incorrect responses. This separability is the basis of their predictive power. Regular neurons show overlapping distributions — they cannot distinguish hallucinations.
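As a minimal sketch (not from the paper) of how that separability can be quantified, a per-neuron AUROC can be computed from the Mann-Whitney U statistic over CETT values split by label; candidate H-Neurons are the ones far above 0.5:

import numpy as np
from scipy.stats import mannwhitneyu

def neuron_separability(cett_halluc, cett_correct):
    """Per-neuron AUROC between hallucinated and correct responses.
    Inputs are hypothetical CETT matrices of shape (N_samples, N_neurons)."""
    n_h, n_c = len(cett_halluc), len(cett_correct)
    aurocs = []
    for j in range(cett_halluc.shape[1]):
        u, _ = mannwhitneyu(cett_halluc[:, j], cett_correct[:, j])
        aurocs.append(u / (n_h * n_c))  # normalized U statistic equals AUROC
    return np.array(aurocs)  # ≈0.5 for regular neurons; higher for H-Neuron candidates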
Results: Universal Generalization Across Families and Scales
| Model | Parameters | H-Neurons (% of total) | TriviaQA Detection Accuracy | vs. Random Baseline |
|---|---|---|---|---|
| Mistral-7B-v0.3 | 7B | 0.01‰ – 0.35‰ | 78.4% | +16.7pp |
| Mistral-Small-3.1 | 24B | <0.1% | High | ~+10pp |
| Gemma-3-4B | 4B | <0.1% | Consistent | ~+10pp |
| Gemma-3-27B | 27B | <0.1% | Consistent | ~+10pp |
| Llama-3.1-8B | 8B | <0.1% | Consistent | ~+10pp |
| Llama-3.3-70B | 70B | <0.1% | Consistent | ~+10pp |
The consistency across Mistral, Gemma, and Llama — and across scales from 4B to 70B — is the paper's most robust result. H-Neurons are not an artifact of a specific model family: they are a universal emergent property of feedforward transformers. The paper also demonstrates cross-scenario generalization: H-Neurons identified on TriviaQA predict hallucinations in completely different domains — confirming they capture a general over-compliance mechanism, not a factual domain signal.
Cross-Scenario Generalization: Detailed AUROC
The paper's most provocative finding is that H-Neurons identified in one scenario (TriviaQA — factual hallucination) predict problematic behaviors in semantically disjoint scenarios. The following table shows cross-scenario transfer AUROC scores for Mistral-7B-v0.3:
| Source → Target | AUROC (H-Neurons) | AUROC (Random) | Δ | Significance |
|---|---|---|---|---|
| TriviaQA → TriviaQA | 0.784 | 0.617 | +0.167 | p < 0.001 |
| TriviaQA → FalseQA | 0.721 | 0.523 | +0.198 | p < 0.001 |
| TriviaQA → FaithEval | 0.693 | 0.510 | +0.183 | p < 0.001 |
| TriviaQA → Sycophancy | 0.667 | 0.498 | +0.169 | p < 0.001 |
| TriviaQA → Jailbreak | 0.651 | 0.505 | +0.146 | p < 0.01 |
Cross-scenario generalization is evidence that H-Neurons encode a general "over-compliance" mechanism — not a domain-specific factual signal. This connects hallucination, sycophancy, and jailbreak compliance as manifestations of the same underlying circuit. It is the first individual-neuron-level evidence that these phenomena share computational substrate.
Four Dimensions of Over-Compliance Induced by α-Scaling
The central experiment of the paper is direct intervention: scaling H-Neuron activations by a factor α ∈ [0, 3]. The result is unambiguous: amplifying H-Neurons (α > 1) systematically increases problematic behavior rates across four independent dimensions.
(Figure: over-compliance rate as a function of α, annotated with four regimes: suppression (α < 1), baseline (α = 1), amplification (α > 1), and maximum (α = 3).)
Invalid premises
When H-Neurons are amplified, the model increasingly accepts factually incorrect claims present in the prompt. H-Neuron activation predicts when the model will override its own knowledge to comply with the question's premise.
Misleading context
When context contradicts the model's knowledge, amplification increases the rate of misleading context adoption. High H-Neuron activation means a higher probability that the model will "believe" the context over its training.
Sycophantic tendency
With α > 1, the model tends to validate user-expressed preferences even when incorrect. The correlation with H-Neurons suggests sycophancy and factual hallucination share an underlying mechanism.
Harmful instructions
Amplification increases compliance rates with jailbreak attempts. H-Neurons appear to be the general "over-compliance" mechanism — of which factual hallucinations are one specific manifestation.
Smaller models exhibit steeper slopes: they are more sensitive to H-Neuron amplification. The relationship is approximately linear with R² > 0.94 across all models. The suppression zone (α < 1) consistently reduces over-compliance rates without degrading general model capabilities.
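The slope and R² in this claim can be reproduced from a measured α-sweep with an ordinary least-squares fit. The following helper is illustrative, not from the paper; alphas and rates are the (α, over-compliance rate) pairs from your own sweep:

import numpy as np

def fit_alpha_response(alphas, rates):
    """Slope and R² of over-compliance rate vs. α (1D numpy arrays)."""
    slope, intercept = np.polyfit(alphas, rates, 1)
    pred = slope * alphas + intercept
    ss_res = ((rates - pred) ** 2).sum()
    ss_tot = ((rates - rates.mean()) ** 2).sum()
    return slope, 1.0 - ss_res / ss_tot  # paper reports R² > 0.94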
Pre-Training Origin: RLHF Does Not Eliminate the Mechanism
The AUROC transferability analysis is the most important piece of evidence: the authors take H-Neurons identified in instruction-tuned models and verify their predictive power in the corresponding base models (before RLHF). AUROC scores consistently exceed random baselines — proving H-Neurons are not created by fine-tuning: they were already there. Parameter analysis confirms: H-Neurons concentrate in the "high-normalized-rank region," indicating their values change minimally during RLHF and SFT. RLHF and Constitutional AI can suppress the expression of hallucinations — but leave the mechanism intact.
Three Production Intervention Vectors
Real-time detection
Monitor H-Neuron activations during inference. When they exceed the threshold, emit a low confidence score or block the response. Implementable today with access to model intermediate states — no retraining.
α-Scaling suppression
Apply α < 1 to identified H-Neuron activations during the forward pass. Reduces over-compliance rate without retraining. Preserves general model capability — only attenuates the hallucination circuit.
Localized regularization
Fine-tuning with specific regularization over H-Neurons: penalize high activations in over-compliance contexts. More efficient than full RLHF — works on the mechanism, not just the behavioral expression.
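A minimal sketch of what such a penalty could look like in PyTorch, assuming a dense Llama-style FFN and per-layer H-Neuron indices (the hook pattern and λ value are assumptions, not the paper's recipe):

import torch

def h_neuron_penalty(model, layer_idx, h_indices, lambda_reg=0.1):
    """Attach a pre-hook that accumulates an L1 penalty on H-Neuron
    activations; add penalty() to the task loss on over-compliance batches."""
    acc = []
    def hook(module, inputs):
        h = inputs[0]  # (batch, seq, d_ff); gradients flow through pre-hooks
        acc.append(h[..., h_indices].abs().mean())
    handle = model.layers[layer_idx].mlp.down_proj.register_forward_pre_hook(hook)
    def penalty():
        p = lambda_reg * torch.stack(acc).mean()
        acc.clear()
        return p
    return handle, penalty

During fine-tuning, the total loss would be the task loss plus penalty() on batches labeled as over-compliance contexts, leaving other batches unpenalized.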
LLM governance stack
All three vectors are orthogonal: they can be combined. Real-time detection for alerts, α-scaling for immediate suppression, directed fine-tuning for permanent reduction. Defense-in-depth architecture against hallucinations.
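A sketch of how the three vectors compose at inference time, using the HNeuronMonitor class defined in the implementation section below (the threshold, layer index, and downstream handler are illustrative assumptions):

# Vector 2: α-scaling suppression is active throughout generation
monitor = HNeuronMonitor(model, h_indices, layer_idx=20, alpha=0.7)
output = model.generate(**inputs)
# Vector 1: real-time detection gates the response
if monitor.confidence() < 0.3:
    flag_for_review(output)  # hypothetical downstream handler
# Vector 3: periodically re-run localized fine-tuning offline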
H-Neurons concentrate in the high-normalized-rank region: their weights barely change during RLHF. This demonstrates the hallucination mechanism is fixed during pre-training and is resilient to post-training alignment.
Production Implementation: Pseudo-Code and Architecture
For engineers looking to implement H-Neuron detection and suppression in production, the pipeline has three phases: offline identification, inference monitoring, and active suppression. The following pseudo-code sketches a PyTorch implementation with forward-pass hooks; module paths such as model.layers[i].mlp.down_proj assume a Llama-style dense architecture:
import torch
from sklearn.linear_model import LogisticRegression
import numpy as np
# ── Phase 1: Offline H-Neuron Identification ──
def compute_cett(model, dataloader, layer_indices):
    """Compute CETT scores for all FFN neurons."""
    cett_all = []  # final shape: (N_samples, N_layers * d_ff)
    labels = []    # binary: 1 = hallucinated, 0 = correct
    for batch in dataloader:
        activations = {}
        hooks = []
        # Hook the down-projection so we capture the post-activation
        # FFN state of shape (batch, seq, d_ff), not the d_model output
        for l_idx in layer_indices:
            ffn = model.layers[l_idx].mlp
            def hook_fn(module, inputs, output, l=l_idx):
                activations[l] = inputs[0].detach()  # (batch, seq, d_ff)
            hooks.append(ffn.down_proj.register_forward_hook(hook_fn))
        with torch.no_grad():
            model(**batch['input'])
        # Compute CETT per neuron per layer (last-token position)
        batch_cett = []
        for l_idx in layer_indices:
            h = activations[l_idx][:, -1, :]                  # (batch, d_ff)
            W_out = model.layers[l_idx].mlp.down_proj.weight  # (d_model, d_ff)
            projected = h.unsqueeze(-1) * W_out.T             # (batch, d_ff, d_model)
            norms = projected.norm(dim=-1)                    # (batch, d_ff)
            cett = norms / norms.sum(dim=-1, keepdim=True)
            batch_cett.append(cett.cpu().numpy())
        cett_all.append(np.concatenate(batch_cett, axis=1))   # (batch, N_layers*d_ff)
        labels.extend(batch['label'])  # assumes the dataloader yields hallucination labels
        for hk in hooks:
            hk.remove()
    return np.concatenate(cett_all, axis=0), np.array(labels)
# L1 logistic regression → H-Neuron indices
def identify_h_neurons(cett_matrix, labels, C=0.01):
    clf = LogisticRegression(
        penalty='l1', C=C, solver='saga', max_iter=5000
    )
    clf.fit(cett_matrix, labels)
    # Positive coefficients: neurons whose higher CETT predicts hallucination.
    # Indices are flat over (layer, neuron); map back per layer for monitoring.
    h_indices = np.where(clf.coef_[0] > 0)[0]
    return h_indices  # typically <0.1% of total neurons
# ── Phase 2: Runtime Monitoring ──
class HNeuronMonitor:
    def __init__(self, model, h_indices, layer_idx, alpha=1.0):
        self.h_indices = h_indices  # indices local to this layer's d_ff
        self.alpha = alpha
        self.scores = []
        ffn = model.layers[layer_idx].mlp
        # Pre-hook on down_proj: sees (and may rewrite) the d_ff-dim
        # activations before they are projected back to d_model
        self.handle = ffn.down_proj.register_forward_pre_hook(self._hook)

    def _hook(self, module, inputs):
        h = inputs[0]  # (batch, seq, d_ff)
        score = h[:, -1, self.h_indices].abs().mean().item()
        self.scores.append(score)
        # Phase 3: α-scaling suppression (α < 1 attenuates the circuit)
        if self.alpha != 1.0:
            h = h.clone()
            h[..., self.h_indices] *= self.alpha
            return (h,)  # replaces down_proj's input

    def confidence(self, threshold=0.5):
        """Anti-hallucination confidence: 1.0 = safe."""
        return 1.0 - min(self.scores[-1] / threshold, 1.0)

Comparison with Prior Hallucination Detection Approaches
The H-Neuron approach is not the first attempt to detect or mitigate LLM hallucinations, but it differs radically from existing approaches in resolution, computational cost, and intervention capability:
| Approach | Analysis Level | Requires | Intervention Possible | Latency |
|---|---|---|---|---|
| CoT verification | Output text | Multiple forward passes | Post-hoc (detect) | ~3-5x |
| SelfCheckGPT | Output distribution | N samples (N≥5) | Post-hoc (consensus) | ~Nx |
| Token entropy | Logit distribution | 1 forward pass + logits | Post-hoc (threshold) | ~1.05x |
| ITI / SAPLMA | Internal representations | Probe training | Real-time detection | ~1.1x |
| H-Neurons | Individual FFN neuron | One-time offline ID | Detection + suppression + FT | <1.01x |
The fundamental advantage of H-Neurons: direct intervention. It not only detects — it suppresses. And unlike SelfCheckGPT, it does not require multiple forward passes. Detection occurs within a single pass with negligible additional latency (<5ms).
Open Questions and Limitations
Temporal stability
Do H-Neurons identified at time t remain predictive at t+Δt? The paper does not evaluate H-Neuron drift over evolving distributions. In production, this requires periodic re-evaluation of the H-Neuron set.
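A minimal drift check, assuming identification is re-run periodically on fresh traffic (the function and argument names are hypothetical):

def h_neuron_drift(h_old, h_new):
    """Jaccard overlap between two H-Neuron index arrays; 1.0 = stable."""
    old, new = set(h_old.tolist()), set(h_new.tolist())
    return len(old & new) / len(old | new)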
Mixture of Experts (MoE)
All evaluated models are dense transformers. Does the phenomenon hold in Mixtral, Switch Transformer, DeepSeek-V3? Expert routing may distribute the over-compliance mechanism differently across expert FFN blocks.
Multi-step hallucinations
The paper evaluates hallucinations in short responses (single entity). Complex multi-step reasoning hallucinations — incorrect intermediate premises that accumulate — may require circuit-level analysis beyond the individual neuron level.
Quantization
Does quantization (INT8, INT4, GPTQ, AWQ) preserve H-Neuron activations? If aggressive quantization modifies specific neuron activation distributions, predictive power could degrade. Critical for edge deployment.
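One way to test this before deployment, assuming the quantized runtime still exposes the same hooks (the function name and inputs are hypothetical):

import numpy as np

def cett_shift(cett_fp16, cett_quant, h_indices):
    """Mean absolute CETT shift on H-Neurons after quantization.
    Both matrices come from compute_cett() on the respective models."""
    delta = np.abs(cett_fp16[:, h_indices] - cett_quant[:, h_indices])
    return delta.mean()  # large shifts → re-identify H-Neurons post-quantization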
H-Neurons encode the over-compliance mechanism. Higher activation means the model prioritizes instruction compliance over factual knowledge. Suppression (α < 1) moves the model toward the ideal zone. Amplification (α > 1) pushes it toward hallucinations, sycophancy, and jailbreak compliance.
Key Takeaways
- Less than 0.1% of an LLM's FFN neurons (0.01‰–0.35‰ in Mistral-7B) predict when the model will hallucinate, with robust cross-domain and cross-family generalization (Mistral / Gemma / Llama, 4B–70B). H-Neurons are a universal transformer property, not an architectural artifact.
- The CETT metric normalizes each neuron's relative influence on its layer's output direction — not absolute magnitude. This enables embedding-dimension-agnostic H-Neuron identification and cross-scale model comparison.
- Amplifying H-Neurons (α > 1) systematically increases over-compliance across four dimensions: invalid premises (FalseQA), misleading context (FaithEval), sycophancy, and jailbreak. Smaller models are more sensitive (slope ≈ 3.03 vs ≈ 2.40 for larger models).
- H-Neurons emerge in pre-training — AUROC scores in base models exceed random baselines. RLHF and Constitutional AI mitigate the behavioral expression of hallucination but do not modify the underlying mechanism encoded in base weights.
- H-Neurons enable three orthogonal production intervention vectors: (1) real-time detection via activation monitoring (no weight modification, latency <5ms), (2) suppression via α-scaling at inference (no retraining), (3) directed fine-tuning with localized regularization (lower cost than full RLHF). All three can be combined into a defense-in-depth stack.
- The paper's most disruptive finding: H-Neurons connect factual hallucination, sycophancy, and jailbreak compliance as manifestations of the same over-compliance circuit. This suggests defense against hallucinations and defense against jailbreaks can be addressed with the same neuron-level mechanism.
