
ml-dsa's UseHint function has an off-by-two error when r0 equals zero

Summary

The use_hint function adds 1 to the high bits instead of subtracting 1 when the decomposed low bits r0 are exactly zero. FIPS 204 Algorithm 40 requires r0 > 0 (strictly positive) for the adding branch, but the current code treats zero as positive. As a result, valid signatures can fail verification when this edge case is hit.

Details

The issue is in ml-dsa/src/hint.rs in the use_hint function. Here's what FIPS 204 Algorithm 40 says:

3: if h = 1 and r0 > 0 return (r1 + 1) mod m
4: if h = 1 and r0 <= 0 return (r1 − 1) mod m

Line 3 uses r0 > 0 (strictly greater than zero), and line 4 uses r0 <= 0 (less than or equal, which includes zero). So when r0 = 0, the spec says to subtract 1.
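Transcribed directly into Rust with r0 in centered (signed) form, the two spec lines read as follows. This is a minimal standalone sketch, not the crate's actual code: the function name and the plain i64 parameters are illustrative, and m stands for (q − 1)/(2γ2), which is 44 for ML-DSA-44.

```rust
// FIPS 204 Algorithm 40, lines 3-4, with r0 as a signed (centered) value.
// m = (q - 1) / (2 * gamma2); for ML-DSA-44, m = 44.
fn use_hint_spec(h: bool, r1: i64, r0: i64, m: i64) -> i64 {
    if h && r0 > 0 {
        (r1 + 1).rem_euclid(m) // line 3: strictly positive r0
    } else if h {
        (r1 - 1).rem_euclid(m) // line 4: r0 <= 0, which includes r0 == 0
    } else {
        r1
    }
}

fn main() {
    // r0 == 0 must take the subtracting branch:
    assert_eq!(use_hint_spec(true, 0, 0, 44), 43);
    // a strictly positive r0 takes the adding branch:
    assert_eq!(use_hint_spec(true, 0, 1, 44), 1);
}
```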

But the current implementation does this:

```rust
if h && r0.0 <= gamma2 {
    Elem::new((r1.0 + 1) % m)
} else if h && r0.0 >= BaseField::Q - gamma2 {
    Elem::new((r1.0 + m - 1) % m)
}
```

The problem is that r0.0 <= gamma2 includes zero (here r0 is stored as an unsigned residue in [0, q), so values up to gamma2 represent positive r0 and values from q - gamma2 up represent negative r0). When r0 = 0, the condition 0 <= gamma2 is true, so the code adds 1. According to the spec, r0 = 0 falls into the r0 <= 0 case, so 1 should be subtracted instead.

The result is +1 where the spec requires −1: an off-by-two error mod m.

PoC

Take ML-DSA-44, where γ2 = 95,232 and m = 44.

If use_hint(true, 0) is called:

  • Decompose(0) gives (r1=0, r0=0)
  • The condition r0.0 <= gamma2 is 0 <= 95232 which is true
  • So it returns (0 + 1) % 44 = 1

But FIPS 204 says:

  • r0 > 0 is 0 > 0 which is false
  • r0 ≤ 0 is 0 ≤ 0 which is true
  • So it should return (0 - 1) mod 44 = 43

The function returns 1 when it should return 43.
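This arithmetic can be checked with a tiny standalone mirror of the buggy branch logic. The constants follow ML-DSA-44, and plain u32 values stand in for the crate's field-element types; the function name is illustrative.

```rust
const Q: u32 = 8_380_417;              // ML-DSA modulus q
const GAMMA2: u32 = (Q - 1) / 88;      // γ2 = 95_232 for ML-DSA-44
const M: u32 = (Q - 1) / (2 * GAMMA2); // m = 44

// Mirrors the current (buggy) condition, with r0 as an unsigned residue.
fn use_hint_buggy(h: bool, r1: u32, r0: u32) -> u32 {
    if h && r0 <= GAMMA2 {
        (r1 + 1) % M // taken even when r0 == 0
    } else if h && r0 >= Q - GAMMA2 {
        (r1 + M - 1) % M
    } else {
        r1
    }
}

fn main() {
    // Decompose(0) yields (r1 = 0, r0 = 0); with the hint bit set,
    // the buggy condition returns 1 where FIPS 204 requires 43.
    assert_eq!(use_hint_buggy(true, 0, 0), 1);
}
```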

This can occur in real signatures: whenever a coefficient of the w' vector is an exact multiple of 2γ2, its decomposed r0 is zero. The case is rare per coefficient, but across many signatures it will eventually be hit, and when it is, verification fails for a completely valid signature.
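That every multiple of 2γ2 decomposes to r0 = 0 can be verified with a standalone transcription of FIPS 204's Decompose (Algorithm 36). This sketch uses plain i64 arithmetic rather than the crate's field types; the function name is illustrative.

```rust
const Q: i64 = 8_380_417;         // ML-DSA modulus q
const GAMMA2: i64 = (Q - 1) / 88; // γ2 = 95_232 for ML-DSA-44

// FIPS 204 Decompose: split r into (r1, r0) with r0 centered in (-γ2, γ2].
fn decompose(r: i64) -> (i64, i64) {
    let r_plus = r.rem_euclid(Q);
    let mut r0 = r_plus % (2 * GAMMA2);
    if r0 > GAMMA2 {
        r0 -= 2 * GAMMA2; // center the low part
    }
    if r_plus - r0 == Q - 1 {
        (0, r0 - 1) // boundary case at q - 1
    } else {
        ((r_plus - r0) / (2 * GAMMA2), r0)
    }
}

fn main() {
    // Multiples of 2γ2 below q - 1 land exactly on a boundary: r0 = 0.
    for k in 0..44 {
        let (_, r0) = decompose(k * 2 * GAMMA2);
        assert_eq!(r0, 0);
    }
}
```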

Impact

This is a FIPS 204 compliance bug that affects signature verification: when the edge case triggers, valid signatures are rejected. Since ML-DSA is intended for high-security post-quantum cryptography, sporadic verification failures for legitimate signatures are a real interoperability and reliability problem. It is also conceivable that the mismatch between what signing expects and what verification computes could be exploited, though that would require further analysis.

The fix is straightforward: change the condition to explicitly exclude zero from the positive branch:

```rust
if h && r0.0 > 0 && r0.0 <= gamma2 {
    Elem::new((r1.0 + 1) % m)
} else if h {
    Elem::new((r1.0 + m - 1) % m)
}
```
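A quick standalone check that the corrected condition matches the spec's behavior on the edge cases. As before, this uses ML-DSA-44 constants and plain u32 values in place of the crate's field types; the function name is illustrative.

```rust
const Q: u32 = 8_380_417;              // ML-DSA modulus q
const GAMMA2: u32 = (Q - 1) / 88;      // γ2 = 95_232 for ML-DSA-44
const M: u32 = (Q - 1) / (2 * GAMMA2); // m = 44

// Proposed fix: zero is excluded from the "+1" branch.
fn use_hint_fixed(h: bool, r1: u32, r0: u32) -> u32 {
    if h && r0 > 0 && r0 <= GAMMA2 {
        (r1 + 1) % M
    } else if h {
        (r1 + M - 1) % M
    } else {
        r1
    }
}

fn main() {
    assert_eq!(use_hint_fixed(true, 0, 0), 43);     // r0 = 0 now subtracts
    assert_eq!(use_hint_fixed(true, 0, 1), 1);      // small positive r0 adds
    assert_eq!(use_hint_fixed(true, 0, Q - 1), 43); // "negative" r0 subtracts
    assert_eq!(use_hint_fixed(false, 5, 0), 5);     // no hint: r1 unchanged
}
```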

