Weak-key cryptanalysis of Blink
This note describes a weak-key attack on the tweakable block cipher Blink, which was recently introduced at FSE 2026. Specifically, it is shown that two rounds of Blink admit several nonlinear invariants. To illustrate that these invariants indeed lead to attacks, we describe a partial key-recovery attack on Blink-64 with data and time complexity $2^{23}$, for a fraction of $2^{-96}$ weak keys or tweaks. There is a trade-off between the fraction of weak keys and the data complexity, e.g., with $2^{56}$ data the fraction of weak keys increases to $2^{-63}$. The attack is based on the same strategy as our attack on Midori-64 from Asiacrypt 2018.
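The two stated trade-off points are consistent with a one-for-one exchange, in the exponent, between data complexity and weak-key fraction. A quick arithmetic check of this reading (ours; the abstract states only the two endpoints):

```python
import math

# (data complexity, weak-key fraction) pairs stated in the abstract
points = [(2.0 ** 23, 2.0 ** -96), (2.0 ** 56, 2.0 ** -63)]
for data, frac in points:
    # log2(fraction) - log2(data) is -119 at both points, i.e. each extra
    # bit of data appears to buy one bit of weak-key fraction
    print(math.log2(data), math.log2(frac), math.log2(frac) - math.log2(data))
```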
Bad Benchmarks and a Fourier-Analytic Framework for Characterizing the (Un)Hideability of Combinational-Logic Circuits
Design-hiding (DH) schemes, such as logic locking, aim to protect circuit-design intellectual property (IP) in the integrated-circuit (IC) supply chain. While many practical DH schemes have been proposed over the past 15 years, nearly all have been broken by efficient attacks. Security and efficiency claims for these schemes have been based primarily on evaluations using benchmark circuits from legacy test-suites such as ISCAS’85 and MCNC. Recent work suggests that some circuits are fundamentally unhideable, as their functionality can be approximately learned using classical blackbox (BB) learning-theoretic (LT) algorithms. In this work, we ask: How prevalent are unhideable circuits in standard DH benchmarks? To answer this, we identify properties—such as sparse Fourier spectra—that make circuits unhideable. However, since BB Fourier-analytic algorithms are often slow and inaccurate for large-domain circuits, we shift to a whitebox (WB) setting. We develop new, efficient WB variants of Fourier-analytic algorithms that leverage WB access to a circuit and advances in model counting to efficiently evaluate whether the circuit has properties that make it unhideable. Upon applying these algorithms to standard DH benchmarks, we find that most circuits in the ISCAS'85 and MCNC test-suites are fundamentally unhideable, whereas newer benchmarks exhibit stronger resistance to Fourier-analytic algorithms and merit broader use in DH evaluation.
Locally Computable High Independence Hashing
We consider (almost) $k$-wise independent hash functions, whose evaluations on any $k$ inputs are (almost) uniformly random, for very large values of $k$. Such hash functions need to have a large key that grows linearly with $k$. However, it may be possible to evaluate them in sub-linear time by only reading a small subset of $t \ll k$ locations during each evaluation; we call such hash functions $t$-local. Local hash functions were previously studied in several works starting with Siegel (FOCS'89, SICOMP'04). For a hash function with $n$-bit input and output size, we get the following new results:
* There exist (non-constructively) perfectly $k$-wise independent $t$-local hash functions with key size $O(kn)$ and locality of $t = O(n)$ bits. An analogous prior result of Larsen et al. (ICALP '24) had a locality of $t=O(n)$ words consisting of $w= O(n)$ bits each, and hence a suboptimal $O(n^2)$ bits total. Furthermore, we show that such hash functions could be made explicit if we had explicit optimal constructions of unbalanced bipartite lossless expanders. Plugging in currently best known suboptimal explicit expanders yields correspondingly suboptimal hash functions.
* Perfectly $k$-wise independent local hash functions generically yield expanders with corresponding parameters. This is true even if the locations accessed by the hash function can be chosen adaptively and shows that progress on explicit hash functions inherently requires progress on explicit expanders.
* We initiate the study of $\epsilon$-almost $k$-wise independent hash functions, where any $k$ adaptive queries to the hash function are $\epsilon$-statistically indistinguishable from $k$ queries to a random function. We construct an explicit family of such hash functions with optimal key size $O(kn)$ bits, optimal locality $t = O(n)$ bits, and $\epsilon= 2^{-n}$, significantly improving over the best known parameters for explicit perfectly independent hashing.
* More generally, if we consider a word model with larger word size $w$, then we get an explicit, efficient construction of $\epsilon$-almost $k$-wise independent hash functions with key size $O(kn/w)$ words, locality $t = O(n/\sqrt{w})$ words, and statistical distance $\epsilon= 2^{-n}$, which we show to be nearly optimal. Such parameters go beyond what is possible for perfect independence.
We discuss applications to nearly optimal bounded-use information-theoretic cryptography.
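To make the locality notion concrete, here is a toy $t$-local evaluator: the key is a long random table, and each evaluation reads only $t$ key words at positions determined by the input. The position map below is a SHA-256 stand-in for the expander neighbor map discussed above, and the toy has none of the $k$-wise independence guarantees of the actual constructions; it only illustrates what $t$-local means.

```python
import hashlib

def make_key(num_words: int, word_bytes: int, seed: bytes) -> list[bytes]:
    # Toy stand-in for the O(kn)-bit random key: a table of random words.
    return [hashlib.shake_256(seed + i.to_bytes(4, "big")).digest(word_bytes)
            for i in range(num_words)]

def t_local_eval(key: list[bytes], x: bytes, t: int) -> bytes:
    # Read only t key positions, selected deterministically by the input x
    # (a toy stand-in for the expander neighbor map); output the XOR.
    out = bytes(len(key[0]))
    for j in range(t):
        idx = int.from_bytes(hashlib.sha256(x + bytes([j])).digest()[:4], "big") % len(key)
        out = bytes(a ^ b for a, b in zip(out, key[idx]))
    return out

key = make_key(num_words=1 << 16, word_bytes=16, seed=b"demo")
print(t_local_eval(key, b"input-0", t=8).hex())
```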
Efficient Conflict-Free NTT Hardware Architecture with Single-Port RAMs: Applications to ML-DSA
We present a Number Theoretic Transform (NTT) hardware architecture based on the Prouhet-Thue-Morse (PTM) code, enabling NTT implementations relying only on single-port RAMs (SPRAMs), rather than using dual-port RAMs (DPRAMs) as usually done in the literature. We show that the PTM code supports a conflict-free, transactional, and streamlined pipeline across all NTT computation stages, as well as scalable parallelism through multiple butterfly units. Using this approach, we design single- and dual-butterfly NTT modules for ML-DSA that are compliant with reference software and can be packaged as a standalone AXI-Stream peripheral, allowing the forward and inverse NTT operations to be offloaded from software via DMA transfers. Experimental results show that the proposed PTM-based NTT designs achieve near one-cycle-per-butterfly and half-cycle-per-butterfly performance for the single- and dual-butterfly configurations, respectively. At the same time, they maintain FPGA resource utilization comparable to state-of-the-art compact NTT implementations relying on mixed SPRAM/DPRAM architectures or SPRAM-only designs requiring coefficient reordering.
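The conflict-freedom behind the PTM code admits a compact sanity check: in a radix-2 butterfly, the two operand addresses differ in exactly one bit, so their popcount parities (their PTM code values) always differ, placing the operands in different single-port banks. A minimal sketch of this invariant (ours, for illustration; the paper's pipelined scheduling across stages and multiple butterfly units is far more involved):

```python
def ptm_bank(addr: int) -> int:
    # Prouhet-Thue-Morse code of an address: parity of its popcount.
    return bin(addr).count("1") & 1

n = 256  # toy NTT size
for stage in range(n.bit_length() - 1):
    half = 1 << stage
    for i in range(n):
        j = i ^ half  # butterfly partner at this stage (one-bit address difference)
        # a one-bit difference flips popcount parity -> different banks
        assert ptm_bank(i) != ptm_bank(j)
print("all butterfly operand pairs map to different single-port RAM banks")
```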
AHAB: Asynchronous, High-throughput, Adaptively-secure, Batched Threshold Schnorr Signatures
We present AHAB, a suite of protocols for threshold Schnorr signatures in the asynchronous communication setting with guaranteed output delivery (robustness). We build on the AVSS and GoAVSS protocols of Shoup–Smart and Groth–Shoup, which allow t < n/3 static corruptions. First, we provide protocol enhancements and a full security proof in the adaptive corruption model with erasures. Second, we introduce a signature production pipeline with a player elimination framework that bounds the damage from actively misbehaving parties: if t* corrupt parties disrupt a presignature batch, the total communication overhead is at most O(t*) times the happy-path cost, and all t* parties are identified and eliminated, after which the system runs at the happy-path rate until further active misbehavior occurs. Third, we present a simplified protocol variant for t < n/4 adaptive corruptions that achieves worst-case linear communication complexity by eliminating the complaint mechanism entirely and using star-finding at the pipeline level to agree on which dealers and receivers to use. Fourth, we present a hiding variant of GoAVSS that yields an unbiased distributed key generation protocol, preventing an adaptive adversary from biasing the signing key. We also give new and more efficient star-finding algorithms, including an ILP-based optimization that should improve the yield of presignatures per batch in practice. For the main protocol (t < n/3), with t=16 and n=49, on a 1 Gbps network with commodity hardware, we estimate a throughput of 100K signatures per second; for the simplified protocol (t < n/4), with t=16 and n=65, the estimate is 160–250K signatures per second.
Breaking the One-Way Property of a SHA-3 Implementation via Fault Injection: Key Recovery Attacks on Post-Quantum Digital Signatures
This paper presents fault-injection attacks on six candidates of the Round-2 NIST post-quantum digital signatures call: the code-based schemes CROSS and LESS, the multivariate scheme MAYO, and the MPC-in-the-Head schemes Mirath, RYDE, and PERK. These schemes rely on SHA-3-based hash functions to securely embed secret-dependent values in the signature construction. We show that a single instruction-skip fault targeting the Keccak-f permutation during the sponge squeezing phase can reveal these secret values and enable full key recovery. The attacks break the one-way property of the affected SHA-3 implementation, as the fault allows recovering the function's input from its output. We experimentally validate the attacks on the optimised pqm4 ARM Cortex-M4 CROSS implementation via instruction skipping using voltage glitching, and present practical countermeasures.
CAGP: A Quantum Canary Address Generation Protocol
The advent of quantum computing poses a fundamental threat to classical cryptographic assumptions. While algorithms such as RSA and Elliptic-Curve Cryptography are secure against classical adversaries, they would be efficiently broken by a sufficiently powerful quantum adversary. Yet, despite rapid industrial and academic progress, the timeline for achieving a Cryptographically Relevant Quantum Computer (CRQC) remains uncertain and opaque. In this work, we propose a mechanism to monitor quantum capabilities through economic incentives. We introduce CAGP, a trustless distributed protocol for deploying a quantum canary trap. CAGP enables the creation of publicly auditable cryptographic challenges whose solutions would reveal the existence of quantum computers capable of solving the Elliptic Curve Discrete Logarithm problem. The protocol is decentralized, secure, efficient, and verifiable, featuring adjustable difficulty and native Bitcoin compatibility. A proof-of-concept implementation demonstrates the feasibility of CAGP as a Bitcoin-based early-warning system for the emergence of quantum computational power.
Scaling of Memory and Bandwidth Requirements of Post-Quantum Signatures with Message Size
In this work we analyse the qualitative memory and bandwidth efficiency properties of the currently standardised post-quantum signatures, both on their own and in their protocol integrations, mainly in the X.509 context. The term “qualitative” in this respect refers to how memory and bandwidth requirements scale with the size of the signed message. Specifically, we address to what extent the algorithms support online computation, a.k.a. streaming, with respect to the signed message in the signing and verification operations. Further, we review the possibilities for the pre-computation of a short message representative outside the cryptographic module responsible for the signing or verification operation of the different signature schemes. We also give a preview of the corresponding cryptographic API of the PKCS#11 standard, which introduces numerous PQC signature algorithms in the upcoming version 3.2. We demonstrate that for specific realistic use cases, the qualitative memory and bandwidth efficiency of the PQC signature schemes in protocol use varies widely and tends to be substantially degraded compared to the traditional signature schemes based on RSA and elliptic curves, which always allow for the pre-computation of a short message representative in the form of a hash value. Our results are relevant to PQC migrations of existing applications using traditional RSA or elliptic curve schemes.
On the properties of arithmetic crosscorrelation for sequences with coprime periods
Arithmetic correlation is a critical performance measure for pseudorandom sequences generated by feedback with carry shift registers (FCSRs), extending classical correlation by accounting for carry propagation. Chen et al. proved that for binary sequences with coprime periods, the arithmetic crosscorrelation is constant, and established bounds for Legendre sequences and $m$-sequences. In this paper, we further investigate the arithmetic crosscorrelation of sequences with coprime periods. We derive upper bounds for sequences constructed from the Legendre symbol, which generalize classical Legendre sequences, and for sequences generated by trace functions. In addition, we show that the constant property of arithmetic crosscorrelation extends to non-binary sequences with coprime periods.
On the Security of MPC-in-the-Head Signatures with Correlated GGM Trees
Recent MPC-in-the-Head techniques enable the construction of signature schemes with compact signature sizes from various hardness assumptions. These techniques rely on commitments based on GGM trees, which have been optimized to further reduce the signature size with the so-called one-tree or correlated tree optimizations. While the one-tree technique has no impact on the security of the scheme, this is not obvious for the correlated tree technique, and a formal security analysis of this technique has been missing from the literature.
In this work, we fill this gap and provide the first formal security analysis of MPC-in-the-Head signature schemes based on correlated trees. We first exhibit a potential security flaw of this technique which rules out any hope for a security reduction to the underlying hardness assumption. In particular, we show that recovering the first $\lambda$ bits of the secret witness is sufficient to achieve a full key recovery (where $\lambda$ is the security level). The underlying assumption should hence be such that recovering these $\lambda$ bits is as hard as recovering the full witness. Some state-of-the-art schemes do not satisfy this condition, which prevents a direct application of the correlated tree technique.
We then provide a formal security proof for signature schemes based on the correlated tree technique under this degraded hardness assumption. Our proof comes in several variants, in the random oracle model or in the ideal cipher model, depending on the specific correlated tree construction. We also introduce a tweak for the instantiation of the leaf seed expansion in the ideal cipher model, which allows us to achieve a tighter security reduction. Our result shows that MPC-in-the-Head signatures based on correlated trees can achieve strong security guarantees and provides the first formal security proof for the MQOM v2 signature scheme, the ongoing candidate in the NIST post-quantum standardization process with the shortest signature size in the MPC-in-the-Head family.
Attacks on Sparse LWE and Sparse LPN with new Sample-Time tradeoffs
This paper extends the Kikuchi method to give algorithms for the decisional $k$-sparse Learning With Errors (LWE) and $k$-sparse Learning Parity with Noise (LPN) problems for higher moduli $q$. We create a Kikuchi graph for a sparse LWE/LPN instance and use it to give two attacks on these problems. The first attack decides by computing the spectral norm of the adjacency matrix of the Kikuchi graph, which is a generalization of the attack for $q=2$ given by Wein et al. (Journal of the ACM 2019). The second approach computes non-trivial closed walks of the graph, and then decides by computing a certain polynomial of edge labels in the walks. This is a generalization of the attack for $q=2$ given by Gupta et al. (SODA 2026). Both attacks yield new tradeoffs between the sample complexity and time complexity of sparse LWE/LPN.
Haechi: Simple Commitment-based Keyless In-person Verifiable Elections
For decades, verifiable election systems have typically relied on encrypting ballots to maintain voter privacy. Encryption requires keys, and the management of these keys is usually one of the most cumbersome and error-prone components of the system. But in-person elections—where one or more devices are used to each collect many votes—can use cryptographic commitments rather than encryption and completely obviate the need for cryptographic keys, leading to solutions that are much simpler and more robust than the encryption-based approaches.
Currently deployed E2E-verifiable voting systems also produce large election records, which can sometimes become an obstacle to election verification, by increasing the cost of hosting, distributing, and verifying election data. Using modern techniques for compact ZK proofs, Haechi improves on past commitment-based and encryption-based solutions by drastically reducing the size of the election records, leading to improvements of over an order of magnitude compared to several real-world deployments.
Improving ML Attacks on LWE with Data Repetition and Stepwise Regression
The Learning with Errors (LWE) problem is a hard mathematical problem underlying lattice-based cryptography. In the simplest case of binary secrets, it is the subset sum problem, with error. Effective ML attacks on LWE have been demonstrated for binary, ternary, and small secrets, succeeding on fairly sparse secrets. The ML attacks recover secrets with up to 3 active bits in the "cruel region" (Nolte et al. 2024) on samples pre-processed with BKZ. We show that using larger training sets and repeated examples enables recovery of denser secrets. Empirically, we observe a power-law relationship between model-based attempts to recover the secrets, dataset size, and repeated examples. We introduce a stepwise regression technique to recover the "cool bits" of the secret.
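As a rough illustration of the stepwise idea (generic forward stepwise regression over candidate secret coordinates; the paper's procedure and its BKZ preprocessing are not reproduced here), one can greedily add the coordinate whose column most reduces the residual. The toy below drops the modular reduction entirely, which removes the actual hardness; it only shows the selection loop:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 32, 512, 4
A = rng.integers(-128, 129, size=(m, n))      # centered coefficients
secret = np.zeros(n, dtype=int)
secret[rng.choice(n, k, replace=False)] = 1    # sparse binary secret
b = A @ secret + rng.integers(-1, 2, size=m)   # small additive error
# (no modular reduction: this toy only illustrates the stepwise loop,
#  not the hardness of actual LWE)

support, residual = [], b.astype(float)
for _ in range(k):
    # forward stepwise selection: add the unused coordinate whose column
    # most reduces the residual's squared norm
    scores = {j: np.sum((residual - A[:, j]) ** 2)
              for j in range(n) if j not in support}
    j = min(scores, key=scores.get)
    support.append(j)
    residual = residual - A[:, j]
print(sorted(support), "vs true", sorted(np.flatnonzero(secret).tolist()))
```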
A Comparative Evaluation of DATA and Microwalk for Detecting Constant-Time Violations in Cryptographic Libraries
DATA [22] and Microwalk [23] are two advanced dynamic binary instrumentation (DBI) tools for detecting constant-time (CT) violations in software implementations.
This paper presents a comparative evaluation of these tools' findings using a common test setup and several cryptographic implementations that are included in the libraries LibTomCrypt, OpenSSL, and liboqs.
Our experiments yield reliable results for symmetric ciphers. For asymmetric cryptographic schemes, however, internal random numbers cause a high number of reported findings that also differ among the tools.
In order to make the tools' results more comparable, our test setup is adapted to externally inject random numbers that are otherwise generated internally by the cryptographic libraries.
We discuss differences in the tools' designs and their impact on practical results for cryptographic implementations, as well as their resource consumption in terms of memory and runtime.
Concrete Estimation of Correctness and IND-CPA-D Security for FHE via Rare Event Simulation
By construction, Fully Homomorphic Encryption schemes have probabilistic correctness due to their underlying cryptographic assumptions. The family of Learning With Errors (LWE) problems assumes that a random error term is added during encryption.
Statistically, this error grows as homomorphic computation proceeds. While predicting the noise evolution was initially only a correctness issue, recent works have shown a direct link with the security of FHE schemes in the IND-CPA-D model.
Here, we present a framework that provides practical guarantees that the probabilities extrapolated from theoretical models satisfy bounds as small as $2^{-128}$. We show how to obtain strong experimental guarantees that the usual Gaussian model for noise is conservative and that a refined model based on the Irwin-Hall distribution is valid.
This is realized through an algorithm called importance splitting, which we adapt here to the cryptographic setting. We provide a detailed study in the context of TFHE bootstrapping and its variants.
We believe our framework can serve as a baseline to be extended to other schemes, thereby ensuring both correctness and security across all FHE schemes.
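For readers unfamiliar with importance splitting, the core idea fits in a few lines: estimate a tiny tail probability as a product of much larger conditional level-crossing probabilities, resampling and mutating the surviving particles at each level. A minimal sketch on a Gaussian toy with fixed levels (practical variants, presumably including the paper's adaptation to the cryptographic setting, choose levels adaptively):

```python
import numpy as np

rng = np.random.default_rng(1)

def mutate(xs, L, steps=10):
    # Metropolis kernel that leaves N(0,1) conditioned on {X > L} invariant:
    # symmetric random-walk proposal, accepted with the Gaussian ratio
    # exp((x^2 - x'^2)/2) and only if the proposal stays above the level L.
    for _ in range(steps):
        prop = xs + 0.5 * rng.normal(size=xs.shape)
        log_acc = 0.5 * (xs ** 2 - prop ** 2)
        ok = (prop > L) & (np.log(rng.random(size=xs.shape)) < log_acc)
        xs = np.where(ok, prop, xs)
    return xs

def splitting_estimate(levels, n=20_000):
    # P(X > levels[-1]) as a product of level-crossing fractions.
    xs = rng.normal(size=n)
    prob = 1.0
    for L in levels:
        frac = np.mean(xs > L)
        if frac == 0.0:
            return 0.0
        prob *= frac
        xs = mutate(rng.choice(xs[xs > L], size=n), L)  # resample survivors
    return prob

# Toy target: P(X > 5) = 2.87e-7, hopeless for naive Monte Carlo at this n.
print(f"{splitting_estimate([1, 2, 3, 4, 5]):.2e}  (exact 2.87e-07)")
```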
Post-Quantum Blockchains with Agility in Mind
Blockchains aim to provide long-term integrity guarantees through cryptographic primitives that may become vulnerable over time due to algorithmic advances or paradigm shifts such as quantum computation. While cryptographic agility, the ability to transition between algorithms without disrupting operation, is recognized as essential, existing blockchain systems lack comprehensive support for such transitions. We address this gap by designing an Ethereum Virtual Machine (EVM)-compatible blockchain that introduces support for cryptographic agility from genesis.
We first propose a flexibility framework that characterizes how algorithm choice can be distributed across blockchain components. We then present two technical contributions aligned with this framework: (1) cryptographically agile transactions (CATX), a new transaction format that decouples body and signature to enable user-selected signature schemes; and (2) a consensus-layer key registration mechanism that allows validators to migrate between signature schemes as operational upgrades without hard forks. We exemplify the agility of our design with ECDSA, Falcon-512, and ML-DSA signatures by conducting experimental evaluations over 30,000 blocks and 11 million transactions, showing that the CATX format introduces no measurable overhead.
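To illustrate the kind of decoupling CATX describes (the paper's wire format surely differs; this shape, the scheme names, and the registry are ours), the signed body carries an explicit scheme identifier so that verification dispatches on the user-selected algorithm rather than a hard-coded one:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Transaction:
    body: bytes        # payload, serialized independently of any signature scheme
    scheme: str        # user-selected signature scheme identifier
    signature: bytes   # produced by the scheme named above

# Pluggable verifier registry: adding a PQC scheme is a lookup-table entry,
# not a transaction-format change. (Toy "scheme" for illustration only.)
VERIFIERS = {
    "toy-hash-mac": lambda pk, msg, sig: hashlib.sha256(pk + msg).digest() == sig,
}

def verify(tx: Transaction, pk: bytes) -> bool:
    return VERIFIERS[tx.scheme](pk, tx.body, tx.signature)

pk = b"shared-toy-key"
body = b'{"to":"0xabc","value":1}'
tx = Transaction(body, "toy-hash-mac", hashlib.sha256(pk + body).digest())
print(verify(tx, pk))  # True
```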
Can Adaptive Communication Graphs Lower the Bottleneck Complexity of (Secure) Multiparty Computation?
The bottleneck complexity of a (secure) multiparty computation protocol is one measure of its communication-efficiency. It captures how well the communication load is balanced, and is defined as the maximum communication complexity required by any one party within the protocol execution.
Prior works on this topic restricted attention to protocols with fixed communication graphs, i.e. whether or not a given party communicates to another only depends on the round number.
We demonstrate the power of adaptively choosing communication graphs by developing various bottleneck-efficient protocols, both with and without security. Done naïvely, protocols with adaptive communication graphs can exploit unnatural tricks, such as "communicating with silence." To ensure our protocols are meaningful, we additionally stipulate that they should run correctly even in asynchronous networks (where we make no assumption on the adversarial message-delays other than being finite).
[Bottleneck complexity of arbitrary functions.] With fixed communication graphs, Boyle, Jain, Prabhakaran, and Yu (ICALP'18) established the existence of a function $f\colon\{0,1\}^n\to\{0,1\}$ requiring $\Omega(n)$-bit bottleneck complexity, which is matched by the trivial protocol of having all parties send their inputs to one party. By adaptively choosing communication graphs, we show that any function $f\colon\{0,1\}^n\to\{0,1\}$ can be computed (securely) with bottleneck $O(n/\log n)$ (which we prove is essentially optimal), even in \emph{asynchronous} networks.
[Bottleneck complexity of symmetric functions.] Prior works have demonstrated that special classes of symmetric functions, such as additive [Eriguchi, Asiacrypt'23] or abelian [Keller, Orlandi, Paskin-Cherniavsky, Ravi, ITC'23] functions, can be computed bottleneck-efficiently with fixed communication graphs. We both expand the class of symmetric functions achievable with low bottleneck complexity, and show how input-adaptive communication graphs can be leveraged to further reduce the bottleneck complexity of some of our protocols.
Refined Approx-SVP Rank Reduction Conditions and Adaptive Lattice Reduction for MSIS Security Estimation
The security of lattice-based cryptography relies critically on the concrete hardness of the approximate shortest vector problem (Approx-SVP). For cryptographic-sized instances, existing Approx-SVP rank reduction conditions may be overly aggressive, as they implicitly assume access to a large number of extremely short lattice vectors. In this work, we systematize and refine Approx-SVP rank reduction conditions from a feasibility perspective. We identify that, in the context of the dimension-for-free (D4f) technique, the existence of a single sufficiently short vector is the essential requirement, and we derive two refined and compact rank reduction conditions accordingly. The first condition is based on geometric properties of lattice sieving, while the second incorporates a basis-quality-dependent probabilistic bound. These results are validated through extensive experiments on high-dimensional lattices, where the compact condition outperforms prior methods by up to a factor of $60$ in dimensions $850$ and $925$. To reliably realize these conditions in high dimensions, we present APBKZ, an adaptive Pump-based lattice reduction strategy that dynamically selects the blocksize and dimension-for-free parameters according to the evolving Gram-Schmidt profile. We further introduce HeadAPBKZ, a head-focused execution mode that restricts reduction to a critical prefix once the rank reduction condition is satisfied. Combining these advances, we develop an improved concrete security estimation framework for the MSIS problem. Applied to Dilithium, our analysis indicates that when integrating compact rank reduction behavior with the D4f technique, the estimated concrete security margin of Dilithium drops by 9.50–16.63 bits compared to the conservative Core-SVP baseline, offering more accurate security benchmarks for cryptographic standardization.
PD-Net: Learning Device-Invariant Representations for Heterogeneous Cross-Device Side-Channel Attacks
Heterogeneous cross-device side-channel attacks remain a critical yet underexplored challenge, as models trained on one device often fail to generalize across architectures. This paper presents PD-Net, a domain generalization framework that learns device-invariant features by disentangling algorithmic content from device-specific style and aligning feature distributions using prototypical and Maximum Mean Discrepancy (MMD) losses.
PD-Net is trained on nine heterogeneous source domains spanning ARM/AVR/FPGA and power/electromagnetic leakage modalities, including 32-bit ARM Cortex-M0/M1/M3/M4, 8-bit AVR ATmega (three series), and 128-bit Xilinx Virtex-5 FPGA, and evaluated in a zero-shot setting without target-specific adaptation.
Experimental results demonstrate robust zero-shot cross-architecture transfers between 8-bit and 32-bit devices, with consistent gains over existing generalization and transfer-learning approaches. In particular, PD-Net delivers 29 successful attacks with only 10 divergences across 70 settings, markedly outperforming the state of the art, which succeeds in only 4 cases and diverges 19 times.
To the best of our knowledge, this is the first domain generalization (DG)-based deep learning framework to systematically demonstrate practical zero-shot heterogeneous cross-device side-channel attacks.
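For reference, the MMD loss mentioned above is, in its standard empirical form, the distance between kernel mean embeddings of two feature batches. A generic (biased) estimator with a Gaussian kernel follows; this is the textbook quantity, not necessarily PD-Net's exact variant:

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    # Biased empirical MMD^2 with a Gaussian RBF kernel:
    # mean k(x,x') + mean k(y,y') - 2 * mean k(x,y).
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(128, 8)), rng.normal(size=(128, 8)))
diff = mmd2(rng.normal(size=(128, 8)), rng.normal(1.0, 1.0, size=(128, 8)))
print(f"same distribution: {same:.4f}, shifted distribution: {diff:.4f}")
```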
Adaptively-Secure Proxy Re-Encryption with Tight Security
(Bi-Directional) Proxy Re-Encryption ($\mathsf{PRE}$) is a public-key encryption scheme that allows a proxy, holding a re-encryption key from $i$ to $j$, to transform a ciphertext intended for $i$ into one intended for $j$. $\mathsf{PRE}$ has numerous applications, including secure data sharing and cloud computing. However, most existing $\mathsf{PRE}$ schemes experience significant security degradation when adversaries are allowed to adaptively corrupt re-encryption or secret keys. Prior to this work, only a few $\mathsf{PRE}$ schemes achieved quasi-polynomial security loss in the adaptive setting, and even those were limited to restricted re-encryption strategies.
In this paper, we propose four distinct $\mathsf{PRE}$ schemes with tight security guarantees in the adaptive setting, based on the $\mathsf{MDDH}$ assumption:
- $\mathsf{PRE}_0$, $\mathsf{PRE}_1$: Single- and multi-challenge $\mathsf{aHRA}$-secure $\mathsf{PRE}$ schemes with tight security focusing on efficient constructions.
- $\mathsf{PRE}_2$, $\mathsf{PRE}_3$: Single- and multi-challenge $\mathsf{aCCA}$-secure $\mathsf{PRE}$ schemes with (almost) tight security focusing on $\mathsf{CCA}$-type security.
To achieve tightly $\mathsf{CCA}$-secure $\mathsf{PRE}$ schemes, we introduce a novel concept called tag-based language-malleable $\mathsf{NIZK}$ with special simulation soundness. This primitive provides simulation-sound $\mathsf{NIZK}$ while preserving a restricted form of malleability. We construct both one-time and unbounded versions of this primitive under the $\mathsf{MDDH}$ (Matrix Decisional Diffie-Hellman) assumption.
CatCrypt: From Rust to Cryptographic Security in Lean
We describe the methodology and scope of CatCrypt, a library for machine-checked cryptographic security proofs in Lean. CatCrypt provides an end-to-end pipeline from Rust reference implementations to security proofs in the computational model in Lean.
The translation from Rust to Lean is done using the Hax tool.
CatCrypt covers 172 cryptographic protocols and constructions with machine-checked security theorems in the computational model.
Of these, 110 have the full Rust-to-Lean pipeline. All bounds have been systematically cross-referenced against their published sources (IETF RFCs, NIST standards, and academic papers). Some proofs were ported from SSProve (Rocq), EasyCrypt, ProVerif, CryptoVerif and Squirrel; most are independent formalisations with no prior machine-checked treatment. CatCrypt also includes a verified Lean implementation of a substantial part of the hax transpiler pipeline.
This work is an experiment in what can be done by a researcher working with GenAI. Until recently, the formalization of one protocol required months of expert effort. In contrast, the whole of CatCrypt was developed in a period of two months. Because it was developed with AI, we developed a new methodology to increase confidence that the specifications are correct. Moreover, we will continue to audit the code in the coming months to gain even more confidence in the specification of the results.
We hope this work will facilitate the adoption of formal methods in the development of security-critical software. This is especially urgent due to AI's increased hacking capabilities, the explosion of AI-generated software, and the ongoing post-quantum transition, which requires the development of new cryptographic protocols and their secure implementation.
Oblivious SpaceSaving: Heavy-Hitter Detection over Fully Homomorphic Encryption
Heavy-hitter detection is a fundamental primitive in stream analytics, with applications in network monitoring, telemetry, and large-scale data systems. In many practical deployments, this computation must be maintained continuously on remote infrastructure that offers higher availability and centralized operational control, even when the underlying streams contain sensitive identifiers or proprietary activity patterns. Existing privacy-preserving approaches either incur substantial statistical noise or rely on multi-server trust assumptions. Fully Homomorphic Encryption (FHE) offers an attractive alternative by enabling exact computation over encrypted data on a single untrusted server, but the high cost of encrypted comparisons has historically made stateful stream processing impractical.
We present Oblivious SpaceSaving, a privacy-preserving reformulation of the classical Space-Saving algorithm for fully encrypted execution. Our central idea is the Moving Floor abstraction, which exploits a monotonicity invariant in the summary state to replace repeated magnitude comparisons with equality-based selection against a tracked encrypted floor. We further combine this with parallel victim selection and a hierarchical asynchronous ingestion pipeline, yielding an end-to-end encrypted heavy-hitter architecture that preserves the deterministic accuracy guarantees of the original algorithm.
Our design reduces the cost of encrypted updates by up to $2.74\times$ over a naive oblivious baseline and sustains end-to-end encrypted ingestion throughputs of up to 4.30 items/s with sub-second amortized latency. These results show that, with the right algorithmic reformulation, classical streaming summaries can be made practically viable under fully encrypted execution, bringing privacy-preserving stream analytics significantly closer to deployment.
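For context, here is the plaintext Space-Saving summary together with the invariant the Moving Floor abstraction builds on: once the summary is full, the minimum counter (the "floor") never decreases, and every victim sits exactly at the floor. The sketch below is plaintext only; the paper's contribution is performing the equivalent update obliviously under FHE, which this toy does not attempt.

```python
def space_saving(stream, k):
    counters = {}          # item -> count, at most k tracked items
    floors = []
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k:
            counters[x] = 1
        else:
            # classic Space-Saving replacement: evict an item sitting at the
            # floor (the minimum count) and let the newcomer inherit floor+1
            victim = min(counters, key=counters.get)
            counters[x] = counters.pop(victim) + 1
        if len(counters) == k:
            floors.append(min(counters.values()))
    # once the summary is full, the floor never decreases: the
    # monotonicity invariant behind the Moving Floor abstraction
    assert all(a <= b for a, b in zip(floors, floors[1:]))
    return counters

stream = [1] * 50 + [2] * 30 + list(range(3, 40)) * 2
print(space_saving(stream, k=8))
```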
Confidential Transfers for Multi-Purpose Tokens on the XRP Ledger
We introduce Confidential Transfers for Multi-Purpose Tokens (Confidential MPTs) on the XRP Ledger, a cryptographic extension of the XLS-33 token standard that enables confidential balances and hidden transfer amounts while preserving public supply verifiability. The protocol replaces plaintext per-account balances with EC–ElGamal ciphertexts and employs non-interactive zero-knowledge proofs to enforce transfer correctness, balance sufficiency, and the invariant OutstandingAmount ≤ MaxAmount without requiring decryption by validators. Confidentiality is scoped to transaction amounts and account balances; sender and receiver identities remain public, preserving XRPL’s account-based execution model. Our design maintains full compatibility with existing MPT semantics: public and confidential balances coexist, issuance rules remain unchanged, and the issuer’s designated second account is treated identically to other holders. The protocol further supports issuer-controlled operations, including freeze and clawback, without weakening supply soundness. To accommodate regulatory and institutional requirements, Confidential MPTs provide cryptographic auditability through an on-chain selective-disclosure model based on multi-ciphertext balance representations and equality proofs, while remaining compatible with simpler issuer-mediated audit models. We present a complete protocol specification, a security analysis under standard discrete-logarithm assumptions, and an open-source reference implementation (mpt-crypto) that realizes the required cryptographic primitives. Experimental evaluation demonstrates that confidential transfers can be verified within XRPL validator performance constraints, with proof sizes and verification costs suitable for production deployment.
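For intuition, encrypted balance arithmetic of this kind typically relies on the additive homomorphism of "exponential" ElGamal: with Enc(m) = (rG, mG + rY) over an elliptic-curve group, ciphertexts add componentwise. A toy sketch in multiplicative notation over $\mathbb{Z}_p^*$ (standing in for the curve group; our illustration only, with toy parameters and none of the zero-knowledge proofs the protocol requires):

```python
import random

# Toy additively homomorphic ("exponential") ElGamal over Z_p^*, a stand-in
# for the EC group used on-ledger. NOT secure parameters; illustration only.
p = 2**127 - 1          # a Mersenne prime; subgroup details ignored for the toy
g = 3
sk = 123456789
y = pow(g, sk, p)       # public key

def enc(m, r=None):
    r = r or random.randrange(2, p - 2)
    return (pow(g, r, p), pow(g, m, p) * pow(y, r, p) % p)

def add(c1, c2):        # componentwise product encrypts the sum of amounts
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def dec_small(c, bound=10_000):
    gm = c[1] * pow(c[0], -sk, p) % p        # recover g^m
    return next(m for m in range(bound) if pow(g, m, p) == gm)

balance = enc(250)
incoming = enc(75)
print(dec_small(add(balance, incoming)))     # 325, computed under encryption
```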
Cryptanalysis of the Lightweight Stream Cipher RRSC
This paper presents a security evaluation of the RRSC lightweight stream cipher in its 64-bit and 128-bit variants. The analysis examines the key update process, internal component interactions, and diffusion behavior during initialization, supported by an avalanche study. Based on these observations, several cryptanalytic scenarios are explored, including time-memory-data trade-off attacks, full key-recovery attacks in the known-plaintext setting, and partial key-recovery attacks targeting the linear feedback shift register and nonlinear feedback shift register components. It is shown that the effective key space is reduced from $2^{128}$ to $2^{96}$ for the 128-bit variant and from $2^{64}$ to $2^{48}$ for the 64-bit variant.
Hadal: Centralized Label DP Training without a Trusted Party
We explore distributed training in a setting where features are held by one party and labels are held by another. In this context, we focus on label Differential Privacy (DP), where the labels require privacy protection from the other party who learns the trained model. Previous approaches struggle to train accurate models in high-privacy settings (i.e., when $\epsilon \leq 1$), or typically require a trusted third party. To eliminate this trusted party while preserving model utility, we present PostScale, a novel Homomorphic Encryption (HE)-based protocol suited for high-privacy regimes with ciphertext multiplicative depth of two. Our protocol is suitable for a wide variety of models in the semi-honest setting and avoids leaking the model architecture as well as costly ciphertext operations like bootstrapping and rotations. We also present a multi-party sampling protocol for generating DP noise, and Hadal, a general-purpose dataflow-based framework for encrypted computation implementing our protocols. Hadal repurposes existing tools for use with HE, including comprehensive performance profiling capabilities, dual execution modes (eager and deferred), graph compiler-based optimization, and hyperparameter tuning. Our techniques achieve model utility similar to centralized DP while reducing communication by over 90% (from 1 TB to 8 GB per batch) and training time by 99% (from 54 minutes to 33 seconds) compared to related work that protects both features and labels. These improvements unlock larger models; we train the BERT-tiny model of Devlin et al. (2019), with 6.5 MB of parameters, in 20 ms per example in a LAN setting.
Proving modern code-based dual attacks with second-order techniques
In code-based cryptography, dual attacks for solving the decoding problem have recently been improved. They are now competitive and beat information set decoders for a significant regime. These recent dual attacks, starting from Carrier et al. (Asiacrypt 2022), work by reducing decoding to an LPN problem where the secret and the noise involve parts of the error vector coming from the decoding problem. However, currently, the analysis of all these dual attacks is heuristic. In the original Asiacrypt 2022 work, a simple LPN modeling was used to carry out the analysis, but Meyer-Hilfiger and Tillich (TCC 2023) showed that this assumption could not be used. Consequently, they proposed an alternative analysis based on Fourier theory and on heuristically modeling the weight enumerator of a random linear code as a Poisson variable. The analysis of the newest and most efficient dual attack, doubleRLPN, introduced by Carrier et al. (Eurocrypt 2024), also relies on this technique and on this model.
Our main contribution is to devise a variant of doubleRLPN that we can fully prove without using any model. We show that our variant has the same performance, up to polynomial factors, as the original doubleRLPN algorithm. The final algorithm and its analysis are also simpler. Our technique involves flipping the coordinates of the noisy codeword and observing the fine changes in the amount of noise in the related LPN problem to reconstruct the entire error. The analysis is based on the second-order behavior of the bias of the noise which was already used in the original analysis.
Secondly, the performance of our algorithm, as was the case for doubleRLPN, heavily depends on having access to a good code along with an efficient decoder. We instantiate this code by choosing a Cartesian product of a constant (instead of sublinear in the original proposal by Carrier et al.) number of random linear codes. We use a decoder based on blockwise error enumeration that was already used by Guo et al. (Asiacrypt 2014). We show that our approach is optimal up to polynomial (instead of superpolynomial) factors.
Triangulating Meet-in-the-Middle Attack
To penetrate more rounds with the Meet-in-the-Middle (MitM) attack, the neutral words are usually subject to some linear constraints, e.g., in Sasaki and Aoki's initial structure technique. At CRYPTO 2021, Dong et al. found that the neutral words can be nonlinearly constrained. They introduced a table-based method to precompute and store the solution space of the neutral words, which led to a huge memory complexity. In this paper, we find that some nonlinearly constrained neutral words can be solved efficiently by Khovratovich et al.'s triangulation algorithm (TA). Furthermore, motivated by the structured Gaussian elimination paradigm developed by LaMacchia et al. and Bender et al., we improve the TA to deal with the case where there are still many unprocessed equations but no variable exists in only one equation (where the original TA would terminate). We then introduce the new MitM attack based on our improved TA, called the triangulating MitM attack.
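For reference, the basic triangulation algorithm is short enough to sketch: repeatedly earmark a variable occurring in exactly one remaining equation and drop that equation; solving the earmarked pairs in reverse order then determines the variables one by one. A sketch of the basic loop only (the improved variant for the stalled case is this paper's contribution and is not reproduced here):

```python
def triangulate(equations):
    """Basic triangulation algorithm (TA): repeatedly earmark a variable
    that occurs in exactly ONE remaining equation, and drop that equation.
    Solving the earmarked (variable, equation) pairs in reverse order then
    determines each variable in turn. Returns the order plus any equations
    left when the loop stalls, the case the improved TA is designed for."""
    remaining = {i: set(eq) for i, eq in enumerate(equations)}
    order = []
    while remaining:
        occ = {}
        for i, eq in remaining.items():
            for v in eq:
                occ.setdefault(v, []).append(i)
        pick = next(((v, idx[0]) for v, idx in occ.items() if len(idx) == 1), None)
        if pick is None:
            break  # stalled: every remaining variable occurs in >= 2 equations
        v, i = pick
        order.append((v, i))
        del remaining[i]
    return order, [sorted(eq) for eq in remaining.values()]

order, stalled = triangulate([{"a", "b"}, {"b", "c"}, {"c"}])
print(order, "stalled:", stalled)  # [('a', 0), ('b', 1), ('c', 2)] stalled: []
```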
As applications, the memory complexities of the single-plaintext key-recovery attacks on 4-/5-round AES-128 are significantly reduced from $2^{80}$ to the practical $2^{24}$, and from $2^{96}$ to $2^{40}$, respectively. Besides, a series of new one/two-plaintext attacks are proposed for reduced AES-192/-256 and Rijndael-EM, which are the basic primitives of the NIST PQC candidate FAEST. A partial key-recovery experiment is conducted on 4-round AES-128 to verify the correctness of our technique. For AES-256-DM, the memory complexity of the 10-round preimage attack is reduced from $2^{56}$ to $2^{8}$, so an experiment is also conducted. Without our technique, the impractical $2^{80}$ or $2^{56}$ memory requirements of previous attacks' precomputation phases would prevent any kind of (partial) experimental simulation.
In the full version, we extend our techniques to sponge functions.
Efficiency Improvement of Deniable FHE: Tighter Deniability Analysis and TFHE-based Construction
Fully homomorphic encryption (FHE) is a cryptographic scheme that can take ciphertexts as inputs and compute a new ciphertext of a function of the underlying messages without decryption. FHE has been attracting attention along with the growing interest in privacy-preserving technologies. In terms of privacy-preserving technology, deniable encryption is also important. Deniable encryption enables a user, who may be forced to reveal the messages corresponding to the user's public ciphertexts, to lie about which messages the user encrypted. Agrawal et al. (CRYPTO 2021) introduced deniable FHE (DFHE), which combines FHE with deniable encryption, and proposed a transformation from an FHE scheme that satisfies specific special requirements, called special FHE, to a DFHE scheme. They also showed a construction of a special FHE scheme based on the BGV (Brakerski–Gentry–Vaikuntanathan) scheme. However, in the construction by Agrawal et al., one must store all the extensive randomness used for encryption in order to lie, and a bootstrapping operation, which takes a long time to execute, is a bottleneck in execution speed. In this paper, we show that by providing a tighter upper bound on deniability, we can reduce the size of the stored randomness and the required number of bootstrapping operations in the construction by Agrawal et al. In addition, we show that TFHE (Chillotti et al., J. Cryptol., 2020; Joye, CT-RSA 2024), which is known as an FHE scheme with fast bootstrapping, satisfies the requirements of special FHE, and thus can realize a faster DFHE scheme than the BGV-based construction.
Gryphes: Hybrid Proofs for Modular SNARKs with Applications to zkRollups
We address the challenge of constructing a proof system capable of handling multiple computations that involve diverse types of tasks, such as scalable zkRollup applications. A central dilemma in this design is the trade-off between generality and efficiency: while arithmetic circuit-based SNARKs offer fast proofs but limited flexibility, zkVMs provide general-purpose programmability at the cost of considerable overhead for circuit translation. We observe that typical workloads for such applications can be naturally divided into two parts: (1) diverse, task and data-dependent application logic, and (2) computationally intensive cryptographic operations, e.g., hashes, that are common and repetitive. To optimize for both efficiency and adaptability, we propose Gryphes, a hybrid framework that composes matrix lookup, a generalization of lookup arguments, together with SNARK solutions tailored for cryptographic operations. At the heart of Gryphes is a novel and efficient linking protocol, enabling seamless, efficient composition of matrix lookup + Plonk with general commit-and-prove SNARKs.
By integrating Gryphes with Groth16 for signatures and RSA accumulators for membership proofs, we build a zkRollup prototype that achieves efficient proving, constant-size proofs, and dynamic support for thousands of transaction types. This includes our matrix lookup implementation incorporated with Plonk, as well as practical optimizations, comprehensive benchmarks, and open-sourced code. Our results demonstrate that Gryphes strikes a very good balance between functionality and efficiency, offering highly expressive and practical zkRollup systems.
Registration-Optimized Dynamic Group Time-based One-time Passwords for Mobile Access
Mobile access within public finance and enterprise environments often requires lightweight anonymous authentication, allowing users to prove authorization without disclosing their identities. Group Time-based One-Time Passwords (GTOTP) has recently been proposed as a lightweight primitive meeting this need with post-quantum security. To address dynamic group membership, Cao et al. introduced DGTOne, the first dynamic GTOTP construction. It employs chameleon hashes to precompute a fixed set of Merkle-tree leaves (mount points), into which conventional TOTP verification points (VPs) contributed by group members are adaptively inserted. However, DGTOne partitions mount points by time epochs, so they can expire and become unusable, causing capacity waste due to unpredictable join times. Moreover, its outsourced proof generation requires verifiers to be online each epoch to fetch refreshed credentials from the Registration Authority (RA), defeating the offline verification needed for mobile access.
We address these limitations with two new schemes. First, we propose NWDGT, a no-wastage DGTOTP design that constructs Merkle trees of members' verification points (VP-trees) on demand, eliminating expired mount points at the cost of added handling latency. To mitigate this latency, we introduce LWDGT, which instantiates multiple small one-time signature (OTS) trees whose leaves (OTS public keys) serve as mount points. New members' VPs are signed immediately using unused leaves, achieving low wastage. We formally prove that the wastage rate of LWDGT is, with overwhelming probability, lower than that of DGTOne. By modeling the registration process and optimizing OTS-tree size, for deployments with up to 500 members (209 initially, 20 added monthly), LWDGT reduces the mount-point wastage rate by 10.2% over one year compared to DGTOne.
Efficient Compilers for Verifiable Dynamic Searchable Symmetric Encryption
We construct compilers to generically transform any dynamic Searchable Symmetric Encryption (DSSE) scheme that is secure against a semi-honest server into one that is secure against a malicious server, thus yielding a Verifiable dynamic SSE (VDSSE) scheme. Our compilers achieve optimal overheads while preserving forward and backward privacy, which are the standard and widely accepted security notions for DSSE.
We focus on optimizing communication overheads and client storage requirements. Our first compiler $\mathsf{FLASH}$ incurs $O(1)$ communication overhead between the client and the server, which is optimal, while incurring mild storage overhead at the client. Our second compiler $\mathsf{BOLT}$ incurs $O(1)$ storage overhead at the client while incurring mild communication overhead.
Towards this, we define a new authenticated data structure called a set commitment and we provide an efficient instantiation of this primitive.
We implement prototypes of our compilers and report on their performance over real-world databases. Our experiments validate that our compilers incur concretely low overheads on top of existing semi-honest DSSE schemes, and yield practically efficient VDSSE schemes that scale to very large databases.
Three-Move Blind Signatures in Pairing-Free Groups
We propose the first blind signature scheme that simultaneously achieves the following properties:
- It uses a pairing-free group and random oracles in a black-box manner;
- It provably achieves concurrent security based on standard assumptions (DDH) without the algebraic group model (AGM);
- It requires only three moves.
Moreover, the public key, signature, and communication of our scheme all consist of only a constant number of group/field elements.
Prior to our work, black-box, three-move pairing-free schemes were only known in the AGM. A recent line of work proposed and optimized schemes without the AGM, but they all require at least four moves.
Performance Analysis of Parameterizable HQC Hardware Architecture
This work presents a constant-time hardware design for HQC (Hamming Quasi-Cyclic), a code-based key encapsulation mechanism selected for standardization by NIST's Post-Quantum Cryptography process. While existing hardware implementations of HQC have achieved limited performance due to area constraints, our work demonstrates that high performance can be attained with minimal hardware overhead by using higher data widths. We present a fully parameterizable hardware design with flexible data width, configurable for both performance targets and security levels, implementing HQC key generation, encapsulation, and decapsulation in Verilog for FPGA deployment. The three operational modules share a common SHAKE256 hash core to minimize area overhead while maintaining throughput. Our design significantly outperforms existing HQC hardware implementations in terms of latency, while achieving a similar or smaller value of the area-time (AT) product compared to existing implementations. The improved performance results from the optimizations introduced in the sparse polynomial multiplier and fixed weight vector generator modules. We achieve up to 35% improvement in the AT product when compared to the most efficient unified HQC hardware designs in the literature. For our fastest configuration targeting HQC-1 (the L1 security level), key generation completes in 0.020 ms, encapsulation in 0.040 ms, and decapsulation in 0.081 ms when implemented on a Xilinx Artix-7 FPGA, showcasing a 40% improvement in latency when compared against the fastest design, while maintaining a competitive area footprint.
A Note on HCTR++
Öztürk et al. recently proposed an Accordion mode: the HCTR++ construction [OKY26, Cryptology ePrint Archive, Paper 2026/383]. I identify a fundamental correctness flaw in the design. Specifically, I demonstrate that the decryption algorithm (Algorithm 2) does not correctly invert the encryption algorithm (Algorithm 1), rendering the scheme undecryptable as specified.
The authors have acknowledged the use of AI to refine the conclusion section of their paper. I discovered this vulnerability completely independently of any AI tools. However, as an exercise, I have provided the algorithm to both ChatGPT and Claude (free versions) in retrospect, to see whether they could identify the flaw, and I report my comments and observations. I wish to emphasize that the authors have made no claims or acknowledgment of using AI tools beyond drafting and refining the introduction and conclusion sections, and I make no such claims either. The purpose of this note is to point out the vulnerability (mistake) in the design, and to look into how free AI models approach finding it.
I would like to also point out that the authors have since updated their design, and this note only refers to the original version. I have not studied the updated design and make no claims about it.
Any comments made in this note are my own and do not reflect on the opinions of any affiliations or funding agencies.
On the Security of Constraint-Friendly Map-to-Curve Relations
Groth, Malvai, Miller and Zhang (Asiacrypt 2025) introduced constraint-friendly map-to-elliptic-curve-group relations that bypass the inner cryptographic hash when hashing to elliptic curve groups inside constraint systems, achieving substantial reductions in circuit size. Their security proof works in the Elliptic Curve Generic Group Model (EC-GGM).
We identify three gaps. First, the security bound is not explicitly analyzed, and the bounds stated for the concrete instantiations are loose. Second, the EC-GGM does not capture the algebraic structure of most deployed curves; we exhibit a concrete signature forgery using parameters claimed to be secure. Third, the construction requires a congruence condition on the field that is not satisfied by all deployed curves; we extend it to any field.
As a countermeasure we propose a y-increment variant that neutralises the algebraic attack, removes the field restriction, and preserves a comparable constraint count. We implement and benchmark both constructions in the open-source gnark (Go) library; the attack is additionally demonstrated via a self-contained SageMath simulation and confirmed at the circuit level against the authors’ own Noir (Rust) implementation.
FROSTLASS: Flexible Ring-Oriented Schnorr-like Thresholdized Linkably Anonymous Signature Scheme
FROST is a pragmatic method of thresholdizing Schnorr signatures, permitting a threshold quorum of $t$ signers out of $n$ total individuals to sign for a message. This scheme improved on the state of the art, resulting in an efficient protocol that aborts in the presence of up to $t-1$ malicious users with strong resilience against chosen-message attacks, assuming the hardness of the discrete logarithm problem. In this work, we build upon the foundation introduced in FROST by presenting FROSTLASS, which additionally enjoys novel linkability criteria and anonymity guarantees under the general one-more discrete logarithm problem, utilizing a "Schnorr-shaped hole" technique to prove desirable security results. This scheme is highly practical, tailor-made for use on-chain in the Monero cryptocurrency; indeed, we also showcase a Rust implementation for this protocol, demonstrating its real-world application to improve the security and usability of Monero.
Tailored Limb Counts, Faster Arithmetic: Improved TMVP Decompositions for Curve5453 and Curve6071
Curve5453 and Curve6071 are Montgomery curves over the primes $2^{545}-3$ and $2^{607}-1$, providing 271- and 302-bit classical security, respectively.
Their TMVP-based field multiplication in 10-limb representation costs 77 multiplications.
We reduce this to 60 for Curve5453 ($22\%$ fewer) using a 9-limb radix-$2^{61}$ representation, and to 54 for Curve6071 ($30\%$ fewer) using a 12-limb radix-$2^{51}$ representation with hierarchical block-level TMVP.
Choosing the limb count to produce $3 \times 3$ Toeplitz blocks aligns the structure with the size-3 TMVP formula, computing each block product in 6 multiplications rather than 9.
Portable C implementations benchmarked on ARM64 and x86-64 confirm speedups of up to $16\%$ in field multiplication and $13\%$ in scalar multiplication.
On ARM64, Curve5453 reaches $90.6\%$ of OpenSSL's assembly-optimized NIST P-521 ECDH throughput with 12 additional bits of classical security, and Curve6071 delivers 302-bit classical security at $80.8\%$ of P-521's throughput.
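The size-3 saving referenced above is a bilinear identity: a 3x3 Toeplitz matrix-vector product can be computed with 6 base multiplications instead of 9, e.g., via a transposed 3-way Karatsuba. A sketch of one such decomposition, checked against the schoolbook product (our write-up of a folklore formula; the paper's exact block-level formula may differ):

```python
import random

def tmvp3(t, b):
    # r_i = t[i+2]*b[0] + t[i+1]*b[1] + t[i]*b[2]  (3x3 Toeplitz times vector),
    # computed with 6 multiplications instead of 9 (transposed 3-way Karatsuba).
    t0, t1, t2, t3, t4 = t
    c0, c1, c2 = b[2], b[1], b[0]   # reversed vector: Hankel form h_j = sum t[j+m]*c[m]
    u0 = (t0 - t1 - t2) * c0
    u1 = (t2 - t1 - t3) * c1
    u2 = (t4 - t2 - t3) * c2
    u3 = t1 * (c0 + c1)
    u4 = t2 * (c0 + c2)
    u5 = t3 * (c1 + c2)
    return [u0 + u3 + u4, u1 + u3 + u5, u2 + u4 + u5]

def tmvp3_schoolbook(t, b):
    return [t[i + 2] * b[0] + t[i + 1] * b[1] + t[i] * b[2] for i in range(3)]

for _ in range(1000):
    t = [random.randrange(-99, 100) for _ in range(5)]
    b = [random.randrange(-99, 100) for _ in range(3)]
    assert tmvp3(t, b) == tmvp3_schoolbook(t, b)
print("6-mult TMVP matches schoolbook on 1000 random inputs")
```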
Speeding Up Sum-Check Proving (Extended Version)
The sum-check protocol is a foundational primitive in modern cryptographic proof systems, but its prover-side cost has emerged as a concrete bottleneck. This paper introduces three complementary techniques that significantly reduce sum-check proving time and memory, especially in the context of zero-knowledge virtual machines (zkVMs).
First, for applications involving products of many multilinear polynomials, we develop a new algorithm that significantly reduces the number of field multiplications required for proving. Second, we develop a "small-value sum-check prover" algorithm. This significantly speeds up the prover in the common setting where the polynomials being summed evaluate to 64- or 32-bit integers, or to elements of a small sub-field within a larger extension field. Even outside of the small-value setting, this algorithm yields a faster "streaming prover", by which we mean a small-space algorithm that applies whenever the terms being summed can be enumerated in small space (as arises, for example, in zkVM applications). Third, we nearly eliminate prover overhead in the ubiquitous case where one factor is an equality polynomial by exploiting its decomposable tensor structure.
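For context, the baseline linear-time sum-check prover keeps a table of evaluations over the Boolean hypercube and halves it each round. A minimal prover/verifier loop for a single multilinear polynomial (degree-1 round messages; illustration only, since the optimizations above target products of several such tables, small-value structure, and equality polynomials, none of which this toy includes):

```python
import random
P = 2**61 - 1  # toy prime field

def sumcheck_prove_verify(table):
    """table: evaluations of a multilinear polynomial on {0,1}^n,
    indexed so the current round variable is the high-order bit."""
    claimed = sum(table) % P
    A, running = list(table), claimed
    while len(A) > 1:
        half = len(A) // 2
        g0 = sum(A[:half]) % P           # g(0): round variable fixed to 0
        g1 = sum(A[half:]) % P           # g(1): round variable fixed to 1
        assert (g0 + g1) % P == running  # verifier's round check
        r = random.randrange(P)          # verifier's challenge
        running = (g0 + r * (g1 - g0)) % P                   # g(r), degree-1 message
        A = [(a + r * (b - a)) % P for a, b in zip(A[:half], A[half:])]  # fold table
    assert A[0] == running               # final check at the random point
    return claimed

table = [random.randrange(P) for _ in range(2**10)]
print("verified sum:", sumcheck_prove_verify(table))
```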
We implement these techniques in Jolt, a state-of-the-art zkVM, and evaluate their performance. In Jolt, we observe over an order of magnitude runtime speedup and memory reduction on the Spartan sub-protocol, and $1.7\times$ to $2.2\times$ speedups for a key high-degree sum-check sub-protocol in the Shout batch-evaluation argument.
Bulletproofs*: Verifier-Efficient Arithmetic Circuit Proofs via Folding
We present Bulletproofs* (BP*, BulletproofsStar), a folding scheme for arithmetic circuit proofs under standard assumptions and without preprocessing, i.e., for the arithmetic circuit satisfiability language of Bulletproofs (S&P 18), following the recipe of ProtoStar (ePrint 2023/620). To this end, we first adapt the algebraic verifiers of the arithmetic circuit proof of Bulletproofs to the algebraic form required by ProtoStar, and prove that the modified protocol remains secure. Then, we design the Bulletproofs* folding scheme that is complete and knowledge-sound. Finally, we analyze the resulting verifier cost after the folding-to-IVC transformation. The result shows an asymptotic linear gain compared to repeated invocations of the monolithic Bulletproofs verifier.
Format-Preserving Compression-Tolerating Authenticated Encryption for Images
We study the problem of provably secure format-preserving authenticated encryption for images, where decryption succeeds even when ciphertexts undergo compression. This novel primitive offers users more control and privacy when sharing and storing images on social media and other photo-centric, compressing platforms like Facebook and Google Photos. Since compression is usually lossy, we cannot expect the decrypted image to be identical to the original, but we want it to be visually as close to the original image as possible.
There is a vast body of work on image encryption, mostly in the signal processing community, but it does not provide formal security analyses. We formally define security, covering the goals of image confidentiality and integrity. While we first treat the problem generically, we are particularly interested in a construction for the most common compression format, JPEG. We design a scheme for JPEG compression using standard symmetric cryptographic tools and special pre- and post-processing. We formally assess the security guarantees provided by the construction, discuss how to select the parameters using empirical experiments, and study the performance of our scheme in terms of computational efficiency and decryption quality. We also build a browser plug-in that helps users store and share photos privately.
Analyzing the WebRTC Ecosystem and Breaking Authentication in DTLS-SRTP
DTLS-SRTP was designed to secure real-time media communication and is found in prominent audio and video call platforms, including Zoom, Teams, and Google Meet. Notably, it is part of Web Real-Time Communication (WebRTC), a web standard enabling real-time communication in the browser. To this end, WebRTC uses multiple technologies, including HTTP, TLS, SDP, ICE, STUN, TURN, UDP, TCP, DTLS, (S)RTP, (S)RTCP, and SCTP. This amalgamation of technologies results in an overly complex system that is very challenging to audit systematically and automatically. As a result, the security of deployments of this core modern communication technology remains largely unexplored.
In this work, we aim to close this gap by developing an automated MitM testing framework (DTLS-MitM-Scanner (DMS)) to test the DTLS channel of a DTLS-SRTP connection. We use our framework to study the current state of the ecosystem in a case study spanning 24 service providers across their browser and mobile applications. Our analysis puts special emphasis on the authentication mechanism in DTLS-SRTP, where we test for 19 potential vulnerabilities that could lead to authentication bypasses for both the client and server. We find that among the 33 tested media server implementations, 19 contained vulnerabilities allowing an attacker to break authentication at the DTLS layer. For 9 of the affected systems, which serve hundreds of millions of users, we could also demonstrate that they could be exploited by an attacker to retrieve media data, assuming only Man-in-the-Middle capabilities. We highlight the impact of these vulnerabilities by building a Proof-of-Concept exploit to listen to Webex video conference calls.
SoK: Updatable Public-Key Encryption
Updatable (public-key) encryption is a broad concept covering (public-key) encryption schemes whose keys can evolve over time to support secure key rotation and limit the impact of key compromise. The essential feature is that the encryption keys (and possibly also ciphertexts) can be updated from one epoch to the next via so-called update tokens. This concept is useful in various applications, among them secure outsourced storage, secure messaging, and low-latency forward-secret key-exchange protocols.
The term, however, is used with varying meanings across the literature. Some works define key-updatable schemes, where only the public and secret keys evolve. Others extend this idea by also allowing ciphertexts to be updated during key evolution. Variants further differ in how evolution is triggered: in some schemes, the receiver performs key updates locally, while in others, the sender initiates the evolution by embedding update information in ciphertexts. Beyond achieving forward secrecy, many formulations also aim for post-compromise security, ensuring that once a compromised key is updated, future ciphertexts regain confidentiality under the new key.
In this paper, we systematize this field with a focus on updatable public-key encryption schemes. We first provide a taxonomy that sheds light on the currently fragmented terminology. We then compare the formal definitions, syntaxes, and security models found in the literature, clarify their interrelations, and identify common design patterns underlying current schemes. Beyond mapping the definitional landscape, we provide a comparative analysis of existing instantiations, focusing on their properties and efficiency, and highlighting their main trade-offs. The paper concludes with open challenges outlining directions for advancing the field.
FrozenTRU: Cold Boot Attacks on NTRU-Based Hash-and-Sign Signatures
Cold boot attacks, first introduced by Halderman et al. (USENIX'08), are a class of attacks that aim at recovering cryptographic secrets stored in volatile memory after a computer is powered off, using the fact that DRAM modules retain their contents to a large extent for some time, especially at low temperatures. Cold boot attackers can recover the original contents of memory with some flipped bits; one-to-zero flip probabilities below 10% and zero-to-one probabilities much lower still (below 0.1%) have been shown to be easily achievable. The cryptanalytic goal is then to recover full secret keys from this noisy data. Successful key recoveries from cold boot attacks have been shown to be feasible for various symmetric and public-key schemes, including AES, RSA, and more recently some lattice-based encryption schemes with secret keys stored in the number-theoretic transform (NTT) domain.
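The noise model is easy to state in code; the sketch below simulates the asymmetric bit-flip channel with the illustrative probabilities quoted above (it models the channel only, not any key-recovery algorithm).

import random

def cold_boot_noise(bits, p10=0.10, p01=0.001):
    # Ones decay to zero with probability p10; zeros flip to one with
    # the much smaller probability p01 (figures from the abstract).
    return [(0 if random.random() < p10 else 1) if b else
            (1 if random.random() < p01 else 0) for b in bits]

key = [random.randint(0, 1) for _ in range(4096)]
noisy = cold_boot_noise(key)
flips = sum(a != b for a, b in zip(key, noisy))
print(f"{flips} of {len(key)} bits flipped")   # around 5% for a random key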
In this paper, we investigate cold boot attacks against the NTRU-based signature scheme Falcon and its ancestor, the signature scheme of Ducas–Lyubashevsky–Prest (DLP). These schemes differ significantly from those previously considered for cold boot attacks since, in particular, the memory representation of secret signing keys mostly consists of floating point values. As a result, the various relations existing between key coefficients only hold up to floating point errors, which makes key recovery more complex. Nevertheless, at the typical bit flip probabilities achievable with cold boot attacks, we manage to fully recover Falcon and DLP keys with good probability across all parameters in simulations carried out in a simple bit flip model. Furthermore, we validate our techniques using concrete cold boot experiments against Falcon on a Raspberry Pi single-board computer.
Finally, we propose countermeasures with negligible computational cost that significantly reduce the memory footprint of signing keys for Falcon and DLP, and at the same time make cold boot attacks considerably harder.
vkproof: Succinct verification of indexed verifying keys using modular compilation and polynomial fingerprinting
We introduce vkproof, a preprocessing SNARG which enables verification of the Varuna verifying key (or that of any similar proof system based on Marlin [Chi+20]) for the R1CS compiled from a given higher-level program. It has constant proof size, and its verifier cost is linear in the number of instructions of the program rather than in the density of the compiled R1CS, which makes it especially appealing in contexts where complex-to-arithmetise functions (such as hashing) appear frequently as program instructions. This verifier succinctness is achieved through modular compilation of programs and the use of fingerprints to verify polynomial correctness. We augment the algebraic holographic proof (AHP) model of Marlin by allowing oracles to witness polynomials in the instance and queries to linear combinations of indexed polynomials, resulting in a primitive we refer to as extended algebraic holographic proofs (eAHP).
Exploiting noisy single-bit leakage in ML-DSA
ML-DSA implementations face a serious risk from partial leakage of the mask vector $\boldsymbol{y}$. Recent research has shown that this threat is practical. Even highly noisy, single-bit leakage accumulated over many signatures can suffice to recover the secret key. We carefully analyze the number of signatures with bit leakage required for successful key recovery using a stochastic model, rather than relying on a concrete attack method. On the practical side, we develop new attack methods capable of recovering the key using almost the minimal number of signatures required in theory. Our attacks work for bit-error probabilities as high as 0.49 and for leakage at every bit position of index at least four or five (depending on the ML-DSA parameter set), making them more widely applicable than prior attacks, which were not reported to succeed for bit positions below six. In the most favorable scenario, leakage at bit index four keeps our attack practical for leaked bits with an error probability of 0.499, and in the absence of noise reduces the signature requirement to below 1000.
PRIVADA: Private user-centric Data Aggregation
Privacy-preserving data aggregation has become a fundamental tool for large-scale analytics in AI-driven and cloud-based systems. While existing solutions provide the default privacy guarantee, i.e., input confidentiality, most assume a semi-honest adversary model and fail to simultaneously ensure user anonymity, selective disclosure, and result privacy in settings with multiple data customers. In this work, we introduce PRIVADA, a maliciously secure data aggregation solution that uses MPC in the SPDZ framework. Unlike prior data aggregation schemes using MPC with or without SPDZ, PRIVADA supports multiple data customers while preventing inference of user participation and resisting collusion in real-world data aggregation applications. Moreover, our work guarantees user privacy and result privacy, in addition to input privacy. PRIVADA outperforms state-of-the-art solutions by providing security against participating parties, including malicious data owners, aggregators, and data customers. Our proof-of-concept implementation demonstrates this combination of malicious security, support for multiple data customers, and strong privacy guarantees in large-scale deployments. The aggregation operation on the aggregator side becomes simpler with PRIVADA, and experimental results show a 12–15 times speedup compared to the state of the art. This confirms that malicious security and strong privacy guarantees are achievable without sacrificing practicality.
How Much Verifier's Dilemma and Staking Pools Adversely Affect Decentralization of Ethereum PoS under Realistic Operational Costs? (Extended Version)
Some consensus protocols, including Proof-of-Work (PoW) and Proof-of-Stake (PoS) designs of Ethereum, contain incentive misalignment because the protocol cannot technically verify whether a block producer or validator has executed (or omitted) validation of transaction correctness before producing a block or issuing an attestation. The incentive to omit validation stems from the risk of losing a fraction of the reward due to a late attestation in PoS, or the risk of missing timely block production (and thus its inclusion) in PoW.
This problem is referred to as the Verifier’s Dilemma (VD), and it has been investigated in prior work in the context of PoW, as well as in hybrid PoW and PoS settings of Ethereum.
In this work, we focus on Ethereum PoS and investigate how rational, minimally compliant validators affect long-term network decentralization due to VD and operational costs. Using evolutionary game theory and the replicator equation, we model competition among three validator phenotypes: the honest strategy, the lazy strategy, and the join pool strategy. The honest strategy, which performs validation, requires running a full validator node on expensive hardware, at an operational cost currently around 20% of rewards earned. The lazy strategy, which omits validation (exploiting VD), allows a reduced validator node to be operated at roughly one fifth of that expense, currently about 4% of rewards earned. Finally, the join pool strategy amortizes operational costs among pool members and can incorporate the lazy strategy to reduce costs further.
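As a reference point for the modeling approach, the replicator equation for three competing strategies can be integrated in a few lines; the payoffs below are placeholder values built from the cost fractions quoted above, not the paper's calibrated slot-level model.

def replicator_step(x, payoffs, dt=0.01):
    # dx_i/dt = x_i * (f_i - average fitness); renormalize against drift.
    avg = sum(xi * fi for xi, fi in zip(x, payoffs))
    x = [xi + dt * xi * (fi - avg) for xi, fi in zip(x, payoffs)]
    s = sum(x)
    return [xi / s for xi in x]

# Placeholder per-slot payoffs: unit reward minus operational cost
# (20% honest, 4% lazy, lower still when pooled; illustrative only).
payoffs = [1.0 - 0.20, 1.0 - 0.04, 1.0 - 0.02]
x = [1 / 3, 1 / 3, 1 / 3]          # honest, lazy, join-pool shares
for _ in range(5000):
    x = replicator_step(x, payoffs)
print([round(xi, 3) for xi in x])  # population drifts to the cheapest strategy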
We analyze the profits of these strategies co-occurring under varying late attestation rates and operational cost levels using our slot-level simulator. Our findings demonstrate that the lazy strategy consistently outperforms the honest strategy in earned profits. Our next experiments reveal that the join pool strategy, combined with a variant of the lazy strategy, forms an evolutionarily stable equilibrium that rapidly collapses the validator population into a single shared pool. These results suggest that Ethereum decentralization can erode through rational economic drift even in the absence of late attestations.
Two Decades of Identity-Based Identification Schemes - A Survey on Challenges and Advances
Identity-based Identification (IBI) schemes have gained significant popularity in the field of cryptography due to their superior efficiency and scalability. However, the increasing number of proposed IBI schemes in recent years has made it challenging to compare and evaluate them effectively. To address this issue, this survey presents a comprehensive literature review and analysis of IBI schemes that offer security under various hardness assumptions. Employing a rigorous survey methodology, we introduce the first general taxonomy of IBI schemes, allowing for a systematic classification and evaluation of these schemes based on their security assumptions. Furthermore, we assess the computational and communication costs associated with the deployment of IBI schemes, considering the various challenges and limitations involved.
For each class of schemes, we assess and compare their security, efficiency, benefits, and drawbacks. Researchers and developers are actively involved in implementing and analysing the runtime of IBI schemes, particularly in diverse applications such as mobile and IoT devices. We present implementations and provide essential insights for guiding future advancements in this dynamic field. The survey concludes by identifying current research gaps and proposing future directions for IBI schemes, providing researchers and practitioners with an in-depth understanding of the state of the art in this rapidly evolving field.
Radical 3-isogenies for the ideal class group actions on $(2, \varepsilon)$-structures
Chenu and Smith introduced the notion of $(d,\varepsilon)$-structures, pairs consisting of an elliptic curve over $\mathbb{F}_{p^2}$ and an isogeny of degree $d$ from the curve to its Galois conjugate. They also defined an ideal class group action on a set of supersingular $(d,\varepsilon)$-structures, inherited from the action on oriented supersingular elliptic curves. As cryptographic applications of this action, they outlined extensions of the CSIDH key exchange and of the Delfs-Galbraith algorithm for the supersingular isogeny problem. In particular, their extension of the Delfs-Galbraith algorithm, called the generalized Delfs-Galbraith algorithm, is expected to be more efficient than the original one by a constant factor. Therefore, it is important to find efficient methods for evaluating the ideal class group action on $(d, \varepsilon)$-structures.
In this paper, we focus on the case $d=2$ and present explicit radical 3-isogenies for evaluating the action of the class of a prime ideal above 3. Our approach relies on two representations of $(2,\varepsilon)$-structures: (i) reductions of degree-2 $\mathbb{Q}$-curves and (ii) Montgomery curves. In particular, we show that any $(2,\varepsilon)$-structure can be represented as a pair consisting of a curve coefficient (of a degree-2 $\mathbb{Q}$-curve or a Montgomery curve) and a single sign. From these representations, we derive radical 3-isogenies that efficiently implement the action of the class of a prime ideal above 3. As an application of our radical 3-isogenies, we give an explicit meet-in-the-middle algorithm for finding an ideal class connecting two given $(2, \varepsilon)$-structures, which forms part of the generalized Delfs-Galbraith algorithm.
RoKoko: Lattice-based Succinct Arguments, a Committed Refinement
We present RoKoko, a new lattice-based succinct argument system that achieves a linear-time prover alongside polylogarithmic communication and verifier complexity. Asymptotically, our construction improves upon RoK and Roll (ASIACRYPT 2025), the first post-quantum SNARK with $\tilde{O}(\lambda)$ proof size, by a multiplicative factor of $\Theta(\log \lambda)$. Practically, our system yields proofs of roughly $200$KB, while outperforming the state-of-the-art polynomial commitment scheme Greyhound (CRYPTO 2024) with a $100\times$ faster verification time, similar prover time, and competitive proof size. Our framework natively supports (tensor-)structured relations, such as polynomial evaluation and sumcheck relations.
At a high level, our construction follows the recursive split-and-fold paradigm: the prover first splits the witness into $\rho$ sub-witnesses, sends the corresponding cross-terms, and then folds them into a single witness that is shorter by a factor of $\rho$ using verifier challenges. Prior works typically restrict $\rho = O(1)$ to preserve succinct verification and maintain the optimal $\tilde{O}(\lambda)$ proof size. We overcome this “constant barrier”, which enables larger $\rho$ and thereby reduces the proof size. To achieve this, we introduce the following technical contributions.
(i) Committed folding. Instead of sending $O(\rho)$ cross-terms in the clear, the prover commits to the messages and later proves that the committed vector satisfies the verification relations. This enables the use of a larger shrinking factor, thereby reducing the number of recursion rounds. While this strategy has been successfully used in LaBRADOR (CRYPTO 2023), additional care is required here to preserve succinct verification.
(ii) Recursive commitments. We generalise the double-commitment technique from LaBRADOR into a framework for recursive commitments, yielding further compression in commitment size. This results in concrete improvements in communication within each recursion round.
(iii) Sumcheck-driven structured recursion. We extend the sumcheck framework from SALSAA (ePrint 2025/2124) to prove substantially more complex constraints arising in our construction (and open to future extensions), including correctness of random projections, inner-product claims, and well-formedness of recursive commitments. While expressing these constraints as sumcheck relations requires considerable technical effort, the resulting protocols compose seamlessly with the structured recursion, yielding both linear-time proving and succinct verification.
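To make the split-and-fold skeleton above concrete, the toy sketch below folds rho committed witness chunks with a verifier challenge under a generic linear commitment; the cross-terms, norm bookkeeping, and committed-folding machinery that constitute the actual contributions are deliberately omitted.

import random

# Toy split-and-fold over a prime field. A linear map A stands in for a
# homomorphic (e.g., lattice) commitment: folding the chunks with powers
# of a challenge c folds the commitments for free. Skeleton only.
P = 2**31 - 1
RHO, CHUNK, ROWS = 4, 8, 3

def commit(A, w):                  # linear commitment: A @ w over F_P
    return [sum(a * x for a, x in zip(row, w)) % P for row in A]

A = [[random.randrange(P) for _ in range(CHUNK)] for _ in range(ROWS)]
chunks = [[random.randrange(P) for _ in range(CHUNK)] for _ in range(RHO)]
coms = [commit(A, w) for w in chunks]          # prover sends RHO commitments

c = random.randrange(P)                        # verifier's folding challenge
powers = [pow(c, i, P) for i in range(RHO)]

folded_w = [sum(powers[i] * chunks[i][j] for i in range(RHO)) % P
            for j in range(CHUNK)]             # prover folds the witness
folded_com = [sum(powers[i] * coms[i][k] for i in range(RHO)) % P
              for k in range(ROWS)]            # verifier folds the commitments
assert commit(A, folded_w) == folded_com       # homomorphism makes them agree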
A Universal Blinder: One-round Blind Signatures from FHE
We construct compilers that convert any secure signature scheme into a single-round blind signature scheme. An important property of the construction is that the final blind signature has exactly the same format as the underlying signature scheme, making the blind signature scheme backwards compatible with the underlying scheme. Our compilers make use of (two-key) fully homomorphic encryption and zero-knowledge proofs to ensure unforgeability and blindness of the final scheme. We present three compilers, where the main difference is which party does the bulk of the work: the client, the signer, or both. Along the way we introduce a new notion of verifiable FHE that we call committed verifiable FHE, where the verifier does not see the circuit in the clear.
Two-Party BBS+ Signature in Two Passes
The BBS+/BBS signature scheme is a key building block for anonymous credentials and privacy-preserving authentication and is currently being standardized and increasingly deployed in practice. To avoid a single point of failure, many threshold BBS+ protocols have recently been proposed for general $t$-out-of-$n$ settings. In practice, however, a $2$-out-of-$2$ policy between a server and a mobile device is sufficient to distribute trust while keeping the system lightweight. Yet, existing threshold designs still require at least three rounds/passes and multi-kilobyte communication in the two-party setting.
In this work, we focus on the two-party setting and show that one can achieve reduced interaction while maintaining low computational and communication overhead.
Specifically, we present a two-pass two-party BBS+ signing protocol that requires only 0.85KB of communication per signature, about 27% of that of the most bandwidth-efficient prior work (S&P'25) in the $2$-out-of-$2$ setting. It achieves competitive signing times (roughly 62ms for one party and 46ms for the other) and remains efficient even for large message vectors (e.g., $\ell = 500$), making it attractive for practical deployments. Overall, our protocol is slower only than the fastest OT-based design (S&P'23), while using nearly two orders of magnitude less bandwidth. We provide a full simulation-based security proof in the standard real-ideal paradigm. As an extension, our protocol generalizes naturally to a $2$-out-of-$n$ threshold setting.
Earpicks: Tightly Secure Two-Round Multi- and Threshold Signatures
Multi-signatures are a fundamental cryptographic primitive in distributed systems, enabling a set of parties to jointly produce a compact signature on a common message. Of particular interest are constructions instantiated over pairing-free cyclic groups with a two-round signing protocol, as such schemes offer improved efficiency and deployability in practice. Support for key aggregation is an additional highly desirable property, allowing multiple public keys to be combined into a single succinct aggregate public key against which aggregate signatures can be verified. To improve concrete security guarantees, several works have proposed constructions with tight security reductions. However, existing tightly secure constructions have significant limitations. Notably, T-Spoon by Bacho and Wagner (Crypto 2025) is currently the only pairing-free two-round multi-signature scheme that simultaneously achieves tight security and supports key aggregation. Despite these advantages, T-Spoon incurs substantial efficiency overhead: its signatures comprise nine field elements and two group elements, resulting in prohibitively large signature sizes for many practical applications.
In this work, we introduce Earpick-MS, a tightly secure two-round multi-signature scheme over pairing-free cyclic groups that supports key aggregation while achieving compact signatures. Concretely, signatures in Earpick-MS consist of only three field elements and a single bit, thereby reducing the signature size by a factor of approximately 3.5 compared to the state-of-the-art T-Spoon construction. We further present Earpick-TS, a threshold signature variant of our scheme. Earpick-TS retains the same compact signature size and constitutes the first pairing-free two-round threshold signature scheme with a tight security proof. Prior to our work, achieving tight security in pairing-free threshold signatures required at least three rounds of interaction (Chen, PKC 2025; Bacho and Wagner, CiC 2026). Finally, we propose Earpick-muMS, an additional variant that achieves tight security in the multi-user setting while retaining the same compact signature size.
Playing Tag with Okamoto-Schnorr: Three-Move Pairing-Free Blind Signatures from DDH
This paper presents the first blind signature scheme in a pairing-free group with the following properties: (1) the signing protocol consists of only three moves; (2) the proof of one-more unforgeability relies solely on the Decisional Diffie-Hellman (DDH) assumption in the Random Oracle Model (ROM); and (3) the construction makes only black-box use of the underlying group. This resolves a major open problem in the area, as all prior pairing-free blind signatures either additionally relied on the Algebraic Group Model (AGM) or required at least four moves. Moreover, a recent lower bound by Dietz et al. (ePrint, '26) shows that three moves are optimal for such constructions.
Both the communication complexity and the signature size in our scheme consist of a constant number of group elements. Our construction in fact achieves strong one-more unforgeability (which was not known for any of the recent AGM-free constructions requiring four moves), and we also present a partially blind variant. Furthermore, blindness is statistical (in the ROM). Our approach is based on a new construction paradigm that combines a conventional (yet, by itself, not fully secure) blind signature scheme (specifically, the blind Okamoto-Schnorr scheme) with a carefully crafted algebraic MAC.
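The conventional component named above is easy to sketch. Below is a toy blind Okamoto-Schnorr signing flow in Python; the algebraic MAC that completes the paper's construction is not shown, and the group parameters are tiny and insecure, purely for illustration.

import hashlib, random

q = 1019                   # toy subgroup order (insecure size)
p = 2 * q + 1              # 2039, prime; squares mod p form a group of order q
g, h = 4, 9                # two generators of that order-q subgroup

def H(a, m):               # hash to a challenge in Z_q
    return int.from_bytes(hashlib.sha256(f"{a}|{m}".encode()).digest(), "big") % q

x1, x2 = random.randrange(q), random.randrange(q)   # signing key
X = pow(g, x1, p) * pow(h, x2, p) % p               # public key

m = "hello"
r1, r2 = random.randrange(q), random.randrange(q)   # move 1: signer commits
a = pow(g, r1, p) * pow(h, r2, p) % p

a1, a2, beta = (random.randrange(q) for _ in range(3))  # move 2: user blinds
a_blind = a * pow(g, a1, p) * pow(h, a2, p) * pow(X, beta, p) % p
c_prime = H(a_blind, m)
c = (c_prime + beta) % q                            # blinded challenge to signer

s1, s2 = (r1 + c * x1) % q, (r2 + c * x2) % q       # move 3: signer responds

s1p, s2p = (s1 + a1) % q, (s2 + a2) % q             # user unblinds
# Verify the signature (c_prime, s1p, s2p) on m:
a_check = pow(g, s1p, p) * pow(h, s2p, p) * pow(X, -c_prime, p) % p
assert H(a_check, m) == c_prime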
iToken: One-Time-Use Anonymous Token with Issuance Hiding
Privacy-Enhancing Know Your Customer (KYC) integrates one-time-use anonymous tokens (OTATs) into self-sovereign identity frameworks, such as the EU Digital Identity (EUDI) Wallet, Apple's Private Access Tokens, and W3C's Privacy-Preserving Advertising proposals (e.g., Private State Tokens), to enable regulatory compliance while preserving user anonymity. To mitigate targeted denial-of-service (DoS) attacks and prevent token misuse (e.g., farming and replay), this paper designs a new OTAT, iToken, which is the first to achieve issuer hiding not only at verification but also throughout issuance, thereby strengthening both OTAT resilience and user privacy. We introduce a new primitive, a canonical blind ring signature (BRS), that adopts a blind-and-ring pattern, ensuring the ring structure is present from the outset and is initiated by the signer within the interactive blind signing protocol. We also provide two generic constructions, one from a linear function (LF) and homomorphic encryption, and another from an LF and a commit-and-prove sum argument. Finally, we prototype BRS and iToken, achieving efficient signing bandwidth and competitive computational performance.
Hybrid KEM Constructions from Classical PKEs and Post-Quantum KEMs
The rapid progress of quantum computing threatens widely deployed public-key cryptosystems such as RSA and Diffie–Hellman, accelerating the transition toward post-quantum cryptography (PQC). During this migration, hybrid key encapsulation mechanisms (KEMs) that combine classical and post-quantum primitives are strongly recommended by standardization bodies and cybersecurity agencies. However, existing hybrid designs mainly focus on combining post-quantum KEMs with Diffie–Hellman–style constructions, while the systematic integration of standardized classical public-key encryption (PKE) schemes with post-quantum KEMs remains largely unexplored.
In this work, we introduce two generic hybrid constructions, $\mathsf{HybKEM}$ and $\mathsf{HybKEM}^{*}$, that combine a classical PKE scheme with a post-quantum KEM satisfying ciphertext second-preimage resistance (C2PRI). We prove that both constructions achieve IND-CCA security in the standard model. The refined construction $\mathsf{HybKEM}^{*}$ additionally relies on a new security notion for the classical PKE scheme, termed partial ciphertext second-preimage resistance (PC2PRI), which captures second-preimage resistance when a designated ciphertext component is fixed. This new property enables shared-key derivation from only a designated PKE ciphertext component in $\mathsf{HybKEM}^{*}$, leading to improved efficiency.
Finally, we provide a systematic analysis of the PC2PRI property for several standardized classical encryption schemes, including ECIES, PSEC, and SM2.
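For orientation, the folklore combiner pattern underlying such designs is shown below: both component secrets and both ciphertexts feed the final key derivation. This generic sketch is not the paper's $\mathsf{HybKEM}$/$\mathsf{HybKEM}^{*}$, which bind designated ciphertext components and rely on (P)C2PRI; the component schemes are opaque placeholders.

import hashlib, os

def hyb_encaps(pke_encrypt, kem_encaps, pke_pk, kem_pk):
    # Classical part: encrypt a fresh random key under the PKE.
    k1 = os.urandom(32)
    ct1 = pke_encrypt(pke_pk, k1)
    # Post-quantum part: standard KEM encapsulation.
    k2, ct2 = kem_encaps(kem_pk)
    # Derive the session key from both secrets and both ciphertexts.
    key = hashlib.sha256(k1 + k2 + ct1 + ct2).digest()
    return key, (ct1, ct2)

def hyb_decaps(pke_decrypt, kem_decaps, pke_sk, kem_sk, ct):
    ct1, ct2 = ct
    k1 = pke_decrypt(pke_sk, ct1)
    k2 = kem_decaps(kem_sk, ct2)
    return hashlib.sha256(k1 + k2 + ct1 + ct2).digest()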
Low-Depth Construction of Grover Oracles from Fully Functional Quantum Circuits
The Grover oracle is the core component of the Grover search algorithm. Instead of constructing a Grover oracle from scratch, we consider the common practice of constructing one from an existing fully functional quantum circuit (FFQC). An FFQC typically performs computations for a primary target and includes ancilla restoration for qubits used as intermediate storage. Although such circuits can be directly integrated into an oracle, we find that this inevitably introduces circuit redundancy. To address this, we propose a low-depth transformation method that converts an existing FFQC into a low-depth Grover oracle. Our method can additionally reduce the circuit width while retaining the achieved low depth. We analyse an implementation of the AES quantum circuit and reduce the circuit width from 7280 to 7104 qubits.
Accurate Parameter Estimates for Punctured Key Recovery Linear Attacks
At EUROCRYPT 2024, Flórez-Gutiérrez and Todo introduced the puncturing technique for linear key recovery attacks. Puncturing works by modifying the map which evaluates the linear approximation as a function of the plaintext, ciphertext and key by setting carefully chosen coordinates of its Fourier transform to zero. These modifications are intended to reduce the time complexity of the attack at the cost of an increase in data complexity. In this note, we revisit the model which is used to estimate the data complexity, clarify some of its underlying assumptions, and improve its accuracy. This leads to a revision of the cost estimates for several applications of puncturing in the literature, most notably for attacks whose data complexity is close to the full codebook.
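The Fourier-domain operation itself is simple to exhibit: the sketch below applies a fast Walsh-Hadamard transform to a function on $\{0,1\}^n$, zeroes all but a few coefficients, and measures the distortion. It illustrates only the puncturing operation, not the attack statistics or the revised data-complexity model.

import random

def wht(v):
    # Fast Walsh-Hadamard transform; self-inverse up to a factor 2^n.
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

n = 8
f = [random.choice([-1, 1]) for _ in range(2**n)]   # correlation-style values
F = wht(f)

# "Puncture": keep only the 16 largest-magnitude Walsh coefficients.
keep = set(sorted(range(len(F)), key=lambda i: abs(F[i]), reverse=True)[:16])
Fp = [F[i] if i in keep else 0 for i in range(len(F))]

fp = [x / 2**n for x in wht(Fp)]                    # inverse transform
dist = sum((a - b) ** 2 for a, b in zip(f, fp)) / 2**n
print(f"mean-squared distortion after puncturing: {dist:.3f}")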
Secret-Shared Shuffle from Authenticated Correlations
Shuffle is a basic primitive for secure computation. Secret-shared shuffle refers to oblivious permutation over secret-shared data, which has broad applications in secret-sharing-based secure computation. Since shuffle is typically used in highly sensitive applications, malicious security is often necessary to provide realistic security guarantees. This paper proposes a new family of two-party maliciously secure secret-shared shuffle protocols with linear communication/computation cost and constant-round communication, a goal that several recent attempts have shown to be non-trivial. We achieve it through a new and simple shuffle paradigm based on authenticated correlations. We start from a simple and efficient protocol template based on authenticated correlations with linear cost and constant-round communication. The template can be enhanced to be fully authenticated against a malicious sender, which avoids the selective-failure attacks that incur the main overhead in existing solutions. However, our roadmap introduces a consistency issue with a malicious receiver, and the challenge is to resolve this issue while preserving the expected efficiency. To this end, we propose new efficiency-preserving consistency checks, enabled by a set of new techniques, optimizations, and analyses. Combining these consistency checks with our framework based on authenticated correlations yields two maliciously secure secret-shared shuffle protocols with linear cost and constant-round communication. We have implemented our protocols, and performance evaluation shows that they are faster, with lower communication, than the state of the art.
Zeeperio: Verifying Governmental Elections with Ethereum
Scantegrity II was the first cryptographic end-to-end election verification (E2E-V) protocol used to run a governmental election.
E2E-V protocols allow the public to verify proofs that the election was executed correctly, but participation in this important process is largely left as an opt-in, ad hoc exercise. We present Zeeperio, a special-purpose zk-SNARK argument (built with application-specific arithmetization) that can issue proofs for Scantegrity elections and have them verified automatically via smart contracts for inexpensive on-chain verification. A Zeeperio verification contract running on Ethereum costs under 30 USD (at time of writing) per election, and the cost is constant in the number of ballots. By not relying on general-purpose zk-SNARK toolkits, like circuit or zkVM compilers, Zeeperio's tailor-made argument offers multiple order-of-magnitude improvements in prover efficiency over implementations from the research literature. For example, Zeeperio requires under 5 hours on a commodity laptop to produce a kilobyte-range proof for an election with 100,000 ballots.
TAPAS: Efficient Two-Server Asymmetric Private Aggregation Beyond Prio(+)
Privacy-preserving aggregation is a cornerstone for AI systems that learn from distributed data without exposing individual records, especially in federated learning and telemetry. Existing two-server protocols (e.g., Prio and successors) set a practical baseline by validating inputs while preventing any single party from learning users' values, but they impose symmetric costs on both servers and communication that scales with the per-client input dimension $L$. Modern learning tasks routinely involve dimensionalities $L$ in the tens to hundreds of millions of model parameters.
We present TAPAS, a two-server asymmetric private aggregation scheme that addresses these limitations along four dimensions: (i) no trusted setup or preprocessing, (ii) server-side communication that is independent of $L$, (iii) post-quantum security based solely on standard lattice assumptions (LWE, SIS), and (iv) stronger robustness with identifiable abort and full malicious security for the servers. A key design choice is intentional asymmetry: one server bears the $O(L)$ aggregation and verification work, while the other operates as a lightweight facilitator with computation independent of $L$. This reduces total cost, enables the secondary server to run on commodity hardware, and strengthens the non-collusion assumption between the servers. One of our main contributions is a suite of new and efficient lattice-based zero-knowledge proofs; to our knowledge, we are the first to establish privacy and correctness with identifiable abort in the two-server setting.
Optimizing FROST for Message Capacity
The FROST threshold signature scheme achieves round-optimal Schnorr signing through a double-nonce construction, but requires two presignatures per signature. Since each presignature demands an expensive distributed key generation (DKG) protocol, this overhead is significant for high-throughput applications. FROST builds on a core presignature protocol (which we call FROST-core) that uses hash-based re-randomization of presignatures. We investigate whether fewer presignatures can be used to sign multiple messages, improving FROST-core's message capacity.
We first show that the natural generalization of using $k$ presignatures for $k$ messages is insecure: an extended ROS attack enables forgery even for $k=2$. However, we prove that using $k+1$ presignatures for $k$ messages achieves security in the Generic Group Model combined with the Random Oracle Model. This improves message capacity from 50% (standard FROST-core) to $\frac{k}{k+1}$, approaching 100% as $k$ grows.
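As a worked instance of the capacity claim: standard FROST-core consumes $2$ presignatures per message, for a capacity of $1/2 = 50\%$, whereas signing $k$ messages from $k+1$ presignatures gives capacity $\frac{k}{k+1}$; for example, $k=9$ already yields $9/10 = 90\%$, and the ratio tends to $1$ as $k$ grows.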
We further extend our analysis to a modified FROST-core protocol in which a set of presignatures is generated by different parties and used for signing $k$ messages. Security holds as long as at least $k+1$ presignatures were created by honest parties.
New Approaches to Zero-Knowledge SNARG Constructions
Zero-Knowledge Succinct Non-Interactive Arguments (zkSNARGs) are SNARGs in which the proof reveals nothing except the validity of the claim. zkSNARGs for NP can be constructed generically from SNARGs for NP using a Non-interactive Zero-knowledge (NIZK) proof, but this transformation uses either the NIZK or the SNARG in a non-black-box way.
We design a new SNARGs-to-zkSNARGs transformation that is conceptually different from the NIZK+SNARG approach. Our transformation is inspired by the elegant construction of succinct interactive ZK arguments of Ishai, Mahmoody and Sahai (TCC '12), which combines a cryptographic hash function with an information-theoretic ZK proof system (specifically, a Probabilistically-Checkable Proof).
Our construction takes a first step towards fully black-box (BB) zkSNARG constructions: it uses the underlying SNARG as a black box (though the other cryptographic components are used in a non-BB way). Our transformation is applicable to SNARGs for sub-classes of NP: batched NP computations (i.e., BARGs), and languages that have computational non-signaling PCPs, a class that contains NTISP (non-deterministic bounded space).
As a corollary, we get zkBARGs for NP, and zkSNARGs for NTISP, from the Learning With Errors (LWE) assumption. Thus, our results give a scaled-down version of the zkSNARGs-to-SNARGs reduction for NP, showing that by restricting to a sub-class of NP, the zkSNARG construction can be based on standard assumptions.
A main ingredient underlying our transformation is a new commitment primitive, called Hiding Somewhere-Extractable commitment (HSE), which we introduce and construct based on LWE. This commitment primitive enhances somewhere statistically-binding hash functions (Hubáček and Wichs, ITCS 2015) to also guarantee hiding, and could be of independent interest.
SynCirc: Efficient Synthesis of Depth-Optimized Circuits from High-Level Languages (Extended Version)
Secure Multi-Party Computation (MPC) enables secure computation on private data. Many of today's efficient MPC protocols need a representation of the evaluated function as a circuit composed of Boolean gates or Lookup Tables (LUTs). To improve the practicality of MPC, we present SynCirc, a hardware synthesis framework optimized for MPC applications. Built on Verilog and the open-source tool Yosys-ABC, SynCirc introduces custom libraries and constraints for multi-input AND gates, achieving up to $3\times$ reduction in multiplicative depth and online rounds compared to TinyGMW (Demmler et al., CCS'15).
SynCirc also offers an expanded library of efficient building blocks, such as comparators, multiplexers, and equality checks, and incorporates both Boolean and LUT circuits. For these building blocks, we achieve improvements in multiplicative depth/online rounds between $22.3\%$ and $66.7\%$ over ShallowCC (Büscher et al., ESORICS'16). Our evaluation using the FLUTE framework (Brüggemann et al., IEEE S&P'23) shows that SynCirc has $116\times$ less online communication than the multi-input AND gate protocol of Trifecta (Faraji and Kerschbaum, PETS'23).
SynCirc introduces novel capabilities, including enhanced support for High-Level Synthesis (HLS) with the XLS tool, enabling developers to create secure functions in C/C++ without the need for expertise in hardware definition languages like Verilog. SynCirc is an open-source toolchain that democratizes secure computation, simplifies circuit synthesis and makes advanced privacy-preserving technologies more accessible.
High-Order Galois Automorphisms for TNFS Linear Algebra
The Number Field Sieve algorithm and its variants are the best known algorithms to solve the discrete logarithm problem in finite fields. When the extension degree is composite, the Tower variant TNFS is the most efficient. Looking at finite fields with composite extension degrees such as $6$ and $12$ is motivated by pairing-based cryptography that does not yet have a good quantum-resistant equivalent.
The two most costly steps in TNFS are the relation collection and linear algebra steps. Although the use of order-$k$ Galois automorphisms allows one to accelerate the relation collection step by a factor of $k$, their use to accelerate the linear algebra step remains an open problem. In previous work, this problem was solved for $k=2$, yielding a quadratic acceleration factor of $4$.
In this article, we provide a solution for both $k=6$ and $k=12$. We propose a new construction that allows the use of an order-$6$ (resp. $12$) Galois automorphism in any finite field $\mathbb{F}_{p^6}$ (resp. $\mathbb{F}_{p^{12}}$), thus accelerating the linear algebra step by a factor of approximately $36$ (resp. $144$). Moreover, we provide a SageMath implementation of TNFS and our construction, and validate our findings on small examples.
PrivaDE: Privacy-preserving Data Evaluation for Blockchain-based Data Marketplaces
Evaluating the usefulness of data before purchase is essential when obtaining data for high-quality machine learning models, yet both model builders and data providers are often unwilling to reveal their proprietary assets.
We present PrivaDE, a privacy-preserving protocol that allows a model owner and a data owner to jointly compute a utility score for a candidate dataset without fully exposing model parameters, raw features, or labels. PrivaDE provides strong security against malicious behavior and can be integrated into blockchain-based marketplaces, where smart contracts enforce fair execution and payment. To make the protocol practical, we propose optimizations to enable efficient secure model inference, and a model-agnostic scoring method that uses only a small, representative subset of the data while still reflecting its impact on downstream training. Evaluation shows that PrivaDE performs data evaluation effectively, achieving online runtimes within 15 minutes even for models with millions of parameters.
Our work lays the foundation for fair and automated data marketplaces in decentralized machine learning ecosystems.
Cryptanalysis of four arbitrated quantum signature schemes
Arbitrated quantum signature (AQS) schemes aim at ensuring the authenticity of a message with the help of an arbitrator. Moreover, they aim at preventing repudiation, both from a sender that denies the origin of a message, and from a receiver who disavows its reception. Such protocols use quantum communication and are often designed to protect quantum messages. In this paper, we study four recently submitted AQS schemes and propose attacks on their security.
Firstly, we look at Zhang, Sun, Zhang and Jia's AQS scheme, which aims at signing quantum messages with chained CNOT encryption. We show that the sender can repudiate her messages and make false allegations of reception. Moreover, we show that a dishonest receiver can forge signatures.
Secondly, we analyse Ding, Xin, Yang and Sang's AQS protocol to sign classical messages based on GHZ states. We show that both the sender and the receiver have simple repudiation strategies.
Thirdly, we study Lu, Li, Yu and Han's AQS scheme that uses controlled teleportation to protect quantum messages. We expose forgeries, false allegation attacks and the possibility of repudiation by both parties.
Fourthly, we focus on the AQS scheme by Zhang, Xin, Sun, Li and Li designed to sign classical messages without entangled states. We show that one can disavow the reception of messages, and that information-theoretic security is not achieved for other security goals.
On Post-Quantum Signature with Message Recovery from Hash-and-Sign in QROM
Post-quantum signatures suffer from inefficiency compared to traditional signatures. To reduce space consumption, a promising approach is to design signature schemes with message recovery, which can embed part of the message into the signature without increasing its length. A notable example is the PSS-R signature scheme (probabilistic signature scheme with recovery) proposed by Bellare and Rogaway (EUROCRYPT 1996) in the classical ROM (random oracle model). However, there has been little work on post-quantum signature schemes with message recovery.
In this work, we study the design of post-quantum signature scheme with message recovery in the QROM (quantum ROM). Specifically, we adapt the classic PSS-R signature scheme to the post-quantum setting and prove its security in the QROM. Our security proof is conceptually similar to the original proof of PSS-R, but faces challenges in QROM when we need to reprogram two random oracles with correlated inputs/outputs. To address these issues, we extend the tight adaptive reprogramming theorem (Grilo et al., ASIACRYPT 2021) and the measure-and-reprogram theorem (Don et al., CRYPTO 2020) to our setting, to support reprogramming two oracles successively, where the input to one oracle depends on the other oracle's output. With these extended proof techniques, we provide the first security proof of a PSS-R-like signature scheme with message recovery in the QROM.
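The recovery mechanism at the heart of PSS-R is compact enough to sketch: commit to the randomized message with one hash, mask it with a second, and invert both steps at verification. The sketch below strips away the trapdoor permutation and all padding details, and is not the paper's QROM-adapted variant.

import hashlib, os

def G(w, n):                # variable-length mask from SHA-256 (MGF-style)
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha256(w + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:n]

def encode(M, salt_len=8):
    r = os.urandom(salt_len)
    w = hashlib.sha256(r + M).digest()                 # commitment to (r, M)
    masked = bytes(a ^ b for a, b in zip(r + M, G(w, salt_len + len(M))))
    return w + masked       # this string would be passed through f^{-1}

def recover(enc, salt_len=8):
    w, masked = enc[:32], enc[32:]
    rM = bytes(a ^ b for a, b in zip(masked, G(w, len(masked))))
    r, M = rM[:salt_len], rM[salt_len:]
    assert hashlib.sha256(r + M).digest() == w         # authenticity check
    return M                # the message is recovered, not transmitted

assert recover(encode(b"embedded message")) == b"embedded message"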
TP-NTT: Batch NTT Hardware with Application to Relinearization
Fully Homomorphic Encryption (FHE) enables arbitrary computation on encrypted data without decryption, providing strong privacy guarantees for secure cloud computing, encrypted analytics, and privacy-preserving machine learning. However, practical deployment of FHE remains limited by the high computational cost of polynomial arithmetic over large modular rings. In particular, Number Theoretic Transform (NTT)–based polynomial multiplication dominates the execution time of modern lattice-based FHE schemes. In this work, we present TP-NTT, a scalable, throughput-optimized NTT architecture supporting a wide range of ring dimensions used in FHE, from $2^{10}$ to $2^{16}$. Our design applies optimizations at multiple levels, from modular arithmetic to the NTT algorithm itself, including multi-dimensional decomposition without requiring additional multiplication blocks. The decomposition dimensionality is configurable at design time, supporting 2-D, 3-D, and 4-D decompositions, each advantageous in specific scenarios. Furthermore, TP-NTT provides design-time configurable throughput. Combined with its scalable architecture, this enables significant advantages for batch NTT operations compared to other works in the literature. At $n=2^{16}$, it outperforms the best-performing prior design by $8.03\times$ in average latency while achieving $1.26\times$ better area–time-product (ATP). To demonstrate its efficiency, we present a case-study on FHE relinearization, focusing on the BFV scheme. We propose a relinearization accelerator that leverages TP-NTT’s fast batch NTT capability, achieving a $34.65\times$ speed-up over state-of-the-art software implementations and highlighting TP-NTT’s effectiveness in real-world FHE applications.
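For context, the operation being accelerated is ordinary NTT-based polynomial multiplication, shown below in its minimal radix-2 software form over an illustrative NTT-friendly prime; the paper's contribution lies in the hardware architecture and multi-dimensional decomposition, none of which appears here.

# Minimal radix-2 NTT-based polynomial multiplication (software reference).
P = 998244353              # 119 * 2^23 + 1, primitive root 3 (illustrative)

def ntt(a, invert=False):
    n = len(a)
    a = list(a)
    j = 0
    for i in range(1, n):                  # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:                     # butterfly passes
        w = pow(3, (P - 1) // length, P)
        if invert:
            w = pow(w, P - 2, P)
        for i in range(0, n, length):
            wn = 1
            for k in range(i, i + length // 2):
                u, v = a[k], a[k + length // 2] * wn % P
                a[k], a[k + length // 2] = (u + v) % P, (u - v) % P
                wn = wn * w % P
        length <<= 1
    if invert:
        n_inv = pow(n, P - 2, P)
        a = [x * n_inv % P for x in a]
    return a

def polymul(f, g):
    n = 1
    while n < len(f) + len(g) - 1:
        n <<= 1
    F, G = ntt(f + [0] * (n - len(f))), ntt(g + [0] * (n - len(g)))
    return ntt([x * y % P for x, y in zip(F, G)], invert=True)[:len(f) + len(g) - 1]

assert polymul([1, 2, 3], [4, 5]) == [4, 13, 22, 15]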
Improved Issuer Hiding for BBS-based Anonymous Credentials
Attribute-based anonymous credential systems often fail to hide the issuer's identity. Recent attempts to address this issue either suffer from efficiency problems or contain critical policy vulnerabilities (where a policy is defined as the set of issuers that relying parties are willing to accept). Specifically, we present several attacks that exploit these vulnerabilities. These attacks allow a malicious user to collaborate with a single authorized issuer and forge credentials for arbitrary attributes, enabling the malicious user to usurp the powers of any trusted issuer.
To address these security and architectural gaps, we propose a novel BBS-based issuer-hiding credential system that adopts a signed-policy approach.
Our construction resolves several open challenges: (1) it is proven secure in the Algebraic Group Model (AGM) rather than the Generic Group Model (GGM); (2) it eliminates the requirement for secret policy keys, allowing verification to be performed without secret values; and (3) it enables policy generation to be delegated to a trusted certification authority rather than requiring each relying party to maintain individual policy keys.
Furthermore, we introduce the first pairing-free variant of an issuer-hiding anonymous credential based on algebraic MACs.
Our implementation results and formal security proofs confirm that the scheme achieves unforgeability and everlasting issuer-hiding anonymity, establishing our protocol as a practical, secure solution for privacy-preserving credential systems suitable for real-world deployment.
PrivaLean: Low-Latency and High-Accuracy System for Secure 2PC Inference
Secure multi-party computation offers a powerful paradigm for protecting private information. However, its significant computational overhead and high communication latency limit its wider application. To address these challenges, we present PrivaLean, a framework designed for low-round and high-accuracy secure two-party inference under the semi-honest model. The core design of PrivaLean focuses on two main dimensions. First, for linear layer evaluation, we propose two distinct optimizations: a local random ciphertext generation mechanism that avoids massive offline interaction and drastically reduces communication rounds, and an intermediate encoding method that significantly lowers memory overhead for low-memory devices. Second, to overcome non-linear evaluation bottlenecks, we design a co-optimized scheme featuring a novel trigonometric activation protocol and a data-distribution-aware training strategy. The activation requires only a single communication round and avoids expensive online truncation, while the training strategy adapts to the precise data distribution and mitigates overfitting through knowledge distillation. The final accuracy is even higher than that of the ReLU-based baseline. Comprehensive evaluations on large-scale networks (e.g., MiniONN, ResNet-32/-50) demonstrate that, in a Wide Area Network setting, PrivaLean completes a single ResNet-50 inference in 125.02 seconds. Compared to the state-of-the-art system Cheetah, PrivaLean achieves significantly fewer communication rounds and a 50.5% reduction in inference latency.
Graph-based Asynchrony with Quasilinear Complexity for Any Linear Verifiable Secret Sharing Scheme
Verifiable Secret Sharing (VSS) schemes usually assume synchronous communication, an assumption that does not always hold on real networks, where packets can be lost or parties arbitrarily delayed. Allowing asynchrony adds a large complexity overhead: the dealer and communication complexity is $O(n^2\log n)$ in state-of-the-art $n$-party Asynchronous VSS (AVSS) schemes [ABDM25], whereas there are synchronous schemes with only linear communication. To ensure that all honest parties agree on the same secret and are ready for reconstruction, AVSS schemes essentially perform a protocol similar to Bracha's broadcast [Bra87]. While this immediately bounds the overall communication complexity of the protocol from below by $O(n^2)$, it makes it possible to reach the maximum threshold of $t=n/3$ malicious parties. However, a smaller threshold $t$ may be sufficient for some use cases, and one may want to take advantage of this. We consider a statistical scheme, meaning that the correctness and termination properties are only guaranteed with good probability. We propose a new method to transform any linear VSS scheme into a statistical AVSS scheme. We build a statistical AVSS protocol, Bonneval-on-Arc, in which each party only communicates with $d$ neighbours, a situation that we model by a $d$-regular graph. As a trade-off, we obtain quasilinear communication complexity for the dealer, sublinear complexity for each party, and a corruption threshold of $t < n/(d+2)$.
NI-DKG: Non-Interactive Distributed Key Generation Using Blockchain and Zero-Knowledge Proofs
We present a non-interactive DKG protocol that eliminates complaint procedures through the systematic use of ZK proofs. The protocol requires a timed public bulletin board with adjoined computing capacity as its coordination layer, a capability realized in practice by smart-contract-enabled blockchains. Existing smart-contract DKGs detect invalid contributions only after submission, requiring dispute phases that introduce timing constraints and attack surfaces. In our protocol, each participant submits a single zk-SNARK proving the correctness of their contribution: polynomial commitment consistency, Feldman verification equations, and correct share encryption. The smart contract rejects any invalid proof, simplifying the protocol to non-interactive phases delimited by block numbers. The protocol relies on standard primitives: Shamir's secret sharing, Feldman commitments, hashed ElGamal encryption, and Chaum-Pedersen discrete-log equality proofs, with on-chain verification via zk-SNARKs. We provide an implementation outline for EVM-compatible chains, including circuit specifications for key generation, threshold decryption, and optional secret key disclosure. We also discuss EVM verifier constraints on the number of public inputs and sketch approaches to address them.
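The two classical primitives at the core of the protocol fit in a short sketch: Shamir sharing plus the Feldman verification equation each participant's zk-SNARK must attest to. Group parameters are toy values, and the proof, encryption, and contract layers are omitted.

import random

q = 1019                  # toy subgroup order (insecure size)
p = 2 * q + 1             # 2039, prime
g = 4                     # generator of the order-q subgroup of squares mod p

t, n = 2, 5               # degree-t polynomial => threshold t+1 of n parties
coeffs = [random.randrange(q) for _ in range(t + 1)]   # coeffs[0] is the secret

def f(x):                 # share polynomial over Z_q
    return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q

shares = {i: f(i) for i in range(1, n + 1)}
commitments = [pow(g, c, p) for c in coeffs]           # Feldman: A_j = g^{a_j}

# Feldman verification: g^{f(i)} == prod_j A_j^{i^j} for every share.
for i, s_i in shares.items():
    rhs = 1
    for j, A in enumerate(commitments):
        rhs = rhs * pow(A, pow(i, j, q), p) % p
    assert pow(g, s_i, p) == rhs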
Succinct Verification of Lattice-Based Compressed $\Sigma$-Protocols via Delegated Proofs of Correct Folding of Cryptographically Generated Public Parameters
Inner product arguments are a widely used primitive in cryptography. The Bulletproofs framework, and subsequently compressed $\Sigma$-protocols, provide a powerful folding technique that makes their communication complexity succinct. However, their verification complexity remains linear: the linear part of the verification is the folding computation of the CRS for the given vector commitment scheme. We explore a new avenue for delegating this folding to the prover via an interactive proof that incorporates the setup function of the commitment scheme, in the setting where the CRS is constructed cryptographically from a small seed. We use this proof to construct a succinctly verifiable compressed $\Sigma$-protocol for structured linear forms in the lattice setting.
Solving the Linear Code Equivalence Problem from Single Codeword Matching
In the Linear Code Equivalence (LCE) problem, one seeks the isometry of the Hamming space $\mathbb{F}_q^n$ relating two given linear codes. This problem is considered computationally hard for any alphabet size $q \ge 5$. The matching codewords framework currently serves as the benchmark for evaluating the security of cryptographic schemes whose security relies on the hardness of LCE, such as the LESS signature scheme. The framework operates by identifying multiple low-weight codewords in both target codes until matching codewords are discovered, thereby revealing the underlying isometry.
Recent advancements improved this framework by introducing a new matching algorithm to decide whether two pairs of codewords match. While this technique offers better scalability than previous approaches and has led to enhanced attacks on LESS, it remains computationally intensive for certain parameter ranges.
In this work, we propose a novel method to determine whether a single pair of codewords matches. Our approach is based on the Schur product with an inverse vector, $\mathcal{C} \star \mathbf{v}^{-1}$, defined component-wise. We demonstrate that if the codes $\mathcal{C}_1$ and $\mathcal{C}_2$ are linearly equivalent and the codewords $\mathbf{v}_1$ and $\mathbf{v}_2$ match, then the resulting products $\mathcal{C}_1 \star \mathbf{v}_1^{-1}$ and $\mathcal{C}_2 \star \mathbf{v}_2^{-1}$ are permutation equivalent. Since the permutation equivalence problem is efficiently solvable, this provides a highly effective match-testing algorithm.
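This cancellation is easy to verify experimentally. The sketch below builds a small code over $\mathbb{F}_7$, applies a random monomial (linear-equivalence) map, and checks that the two Schur products agree up to the permutation alone; all parameters are hand-picked toys.

import itertools, random

Q = 7
inv = {x: pow(x, Q - 2, Q) for x in range(1, Q)}       # inverses in F_7*

G1 = [[1, 0, 1, 2, 3],
      [0, 1, 4, 5, 6]]            # generator matrix of C1 (k=2, n=5)
n = 5

perm = [2, 0, 4, 1, 3]            # the isometry: a permutation...
scale = [random.randrange(1, Q) for _ in range(n)]     # ...plus scalings

def codewords(G):
    return {tuple(sum(m * g for m, g in zip(msg, col)) % Q
                  for col in zip(*G))
            for msg in itertools.product(range(Q), repeat=len(G))}

C1 = codewords(G1)
C2 = {tuple(scale[j] * c[perm[j]] % Q for j in range(n)) for c in C1}

# A matching pair: a full-support codeword v1 in C1 and its image v2.
v1 = next(c for c in C1 if all(x != 0 for x in c))
v2 = tuple(scale[j] * v1[perm[j]] % Q for j in range(n))

star1 = {tuple(c[i] * inv[v1[i]] % Q for i in range(n)) for c in C1}
star2 = {tuple(c[j] * inv[v2[j]] % Q for j in range(n)) for c in C2}

# The scalings cancel: star2 is star1 with coordinates permuted by perm.
assert star2 == {tuple(c[perm[j]] for j in range(n)) for c in star1}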
By leveraging this idea, we propose several algorithms that improve the asymptotic exponent of LCE solvers for all parameters. Notably, our method reaches the optimal asymptotic exponent of the framework for a wide range of parameters. When applied to LESS, our technique substantially reduces the best-known bit complexity; for instance, reducing the security of LESS-1 from 127 to 120 bits.
Look Ahead! Practical CCA-secure Steganography: Cover-Source Switching meets Lattice Gaussian Sampling
Steganography studies methods to not only protect the confidentiality of messages but also to conceal the very act of message transmission. Prior provably secure stegosystems are predominantly constructed based on a rejection sampling technique which achieves an encoding rate inversely proportional to the min-entropy of the cover channel. Furthermore, while replayable chosen-covertext attack (RCCA) secure stegosystems for general channels can be constructed based on standard cryptographic assumptions, it is known [Berndt and Liśkiewicz, EUROCRYPT'18] that achieving (standard) CCA-security for channels with memory in the so-called non-look-ahead model is in general impossible and the only known CCA-secure construction crucially relies on the channels being memoryless.
In this work, we show that the impossibility on CCA-secure stegosystems can be circumvented, in the random oracle model, by dropping the non-look-ahead restriction and by restricting to a natural class of channels which we call "partially sampleable channels". These capture channels which partly consist of explicitly sampleable distributions, such as Gaussian sensor noise of digital photographs. To achieve a high encoding rate, we extend the formalisation of stegosystems to capture a technique known as "cover-source switching" in the practical steganography literature. This allows us to construct CCA-secure stegosystems for Gaussian channels using Gaussian preimage sampling techniques borrowed from lattice-based cryptography, which can theoretically achieve an embedding rate of $1/\omega(\log \log \lambda)$ regardless of the min-entropy of the channel.
Our prototype implementation suggests that our scheme is practical, achieving an embedding rate of 24.7% in 24-megapixel RAW images in around 1 minute per image.
Post-Quantum Cryptography from Quantum Stabilizer Decoding
Post-quantum cryptography currently rests on a small number of hardness assumptions, posing significant risks should any one of them be compromised. This vulnerability motivates the search for new and cryptographically versatile assumptions that make a convincing case for quantum hardness.
In this work, we argue that decoding random quantum stabilizer codes---a quantum analog of the well-studied LPN problem---is an excellent candidate. This task occupies a unique middle ground: it is inherently native to quantum computation, yet admits an equivalent formulation with purely classical input and output, as recently shown by Khesin et al. (STOC '26). We prove that the average-case hardness of quantum stabilizer decoding implies the core primitives of classical Cryptomania, including public-key encryption (PKE) and oblivious transfer (OT), as well as one-way functions. Our constructions are moreover practical: our PKE scheme achieves essentially the same efficiency as state-of-the-art LPN-based PKE, and our OT is round-optimal. We also provide substantial evidence that stabilizer decoding does not reduce to LPN, suggesting that the former problem constitutes a genuinely new post-quantum assumption.
Our primary technical contributions are twofold. First, we give a reduction from random quantum stabilizer decoding to an average-case problem closely resembling LPN, but which is equipped with additional symplectic algebraic structure. While this structure is essential to the quantum nature of the problem, it raises significant barriers to cryptographic security reductions. Second, we develop a new suite of scrambling techniques for such structured linear spaces, and use them to produce rigorous security proofs for all of our constructions.
Dialga: A Family of Low-Latency Tweakable Block Ciphers using Multiple Linear Layers (Full Version)
In this paper, we propose Dialga, a family of low-latency tweakable block ciphers designed to support 128/256-bit tweaks and 256-bit keys. Dialga achieves significantly reduced latency by leveraging multiple novel strategies. These include the use of multiple linear layers with efficient cell permutations, which enhance security against differential and linear attacks with negligible hardware overhead. We also identify the optimal choice of S-boxes for these permutations using state-of-the-art SAT-based evaluation methods, enabling us to further reduce the delay of the round function. In addition, we design a reflection tweakey schedule that ensures strong security in the related-tweak setting and allows for encryption and decryption without delay overhead, reducing the circuit area.
We conducted comprehensive hardware benchmarks involving Dialga and other primitives. As a result, Dialga achieves nearly half the delay of QARMAv2, while achieving approximately a 40% reduction in area, with the same claimed security.
We also demonstrate that Dialga enables an efficient low-latency TBC-based authenticated encryption instantiation: Flat-ΘCB based on Dialga compares favorably with AES-256-GCM in hardware and achieves substantially lower delay.
Hyperelliptic Gluing Isogeny Diffie–Hellman (HGIDH): A Genus-2 Gluing Isogeny Key-Exchange
In this work, we propose Hyperelliptic Gluing Isogeny Diffie–Hellman (HGIDH), a key-exchange protocol built from gluing isogenies between the product of two supersingular elliptic curves and the Jacobian of a genus-2 hyperelliptic curve. The protocol leverages the Frey–Kani correspondence, using maximal isotropic subgroups of $(E_1 \times E_2)[N]$ to construct principally polarized abelian surfaces. Private keys are encoded as four scalars defining a non-cyclic, two-dimensional kernel, thereby avoiding the structural weaknesses exploited in SIDH-style attacks.
We formalize the computational tasks underlying attacks on HGIDH through two intermediate problem formulations, which abstract the recovery of gluing kernels and the computation of genus-2 isogenies. We show that any efficient adversary solving these problems can be transformed into an algorithm solving standard supersingular isogeny problems, situating the security of HGIDH within the established hardness landscape. Furthermore, we analyze the resistance of the construction to known classical and quantum attacks, including torsion-point attacks and Costello–Smith-style meet-in-the-middle strategies.
Aggregator-Based Voting using proof of Partition
We present Aggios, a scalable and privacy-preserving proxy voting system designed for frequent and large-scale elections, such as those of Decentralized Autonomous Organizations (DAOs), where storing votes on the bulletin board is expensive. To this end, Aggios introduces `aggregators': entities to which voters delegate their votes, and which then post their batched proofs on the public ledger. Aggios achieves strong integrity guarantees: only authorized voters can vote, votes are counted correctly, and voters are assured their vote is counted.
At the core of Aggios lies a novel zero-knowledge argument, which we call the Extended Partition Argument (EPA), that allows a prover to demonstrate that a committed vector can be decomposed into multiple disjoint ``subvectors'' forming a partition, where the size of each subvector may be public or kept hidden. The argument is compatible with a universal SRS, does not require precomputation, and offers efficient proving and verification complexity. We prove the security of the EPA in the algebraic group model.
Our implementation of the EPA shows that the argument remains practical even for very large vectors.
Using the EPA as the central building block of Aggios, we show that our voting scheme is at least 512 times more compact than naive casting of $N$ votes, and can even be size-independent of the number of voters in the optimal case, thus offering a practical route to frequent and privacy-preserving voting at scale.
HARE: Compact HQC via Distance-Informed Erasure Decoding
We present HARE, a KEM based on the HQC framework with reduced public-key and ciphertext sizes. The core idea is a distance-informed erasure decoding technique for the concatenated code: distance information from the inner code is used to identify unreliable blocks and treat them as erasures, which are then corrected by the outer code.
We further refine the ciphertext compression method of Bitzer et al. (EUROCRYPT 2026) by incorporating covering codes.
Combining these techniques with refined parameter choices, we achieve a lower decryption failure rate, enabling more compact parameters. Compared to HQC, HARE reduces the combined size of public key and ciphertext by 13.7%, 14.0%, and 14.3% for NIST security levels 1, 3, and 5, respectively.
MTSF --- Market-Theoretic Security Framework: A Unified Paradigm For The Art Of Proving and Disproving Security
Cryptographic security proofs are the invisible backbone of modern digital systems, yet they remain fragmented across multiple paradigms—game-based proofs, Universal Composability (UC), formal verification, and ad hoc insecurity arguments—each with its own language, assumptions, and limitations. This paper introduces the Market-Theoretic Security Framework (MTSF), a unified paradigm that reinterprets all security proofs as economic markets. In this view, the defender acts as a seller offering security goods (such as confidentiality or unforgeability), while the adversary acts as a buyer bidding computational resources to break them. Security emerges naturally as market equilibrium, where no efficient adversary can afford to win, while insecurity is characterized as market collapse, where attacks succeed at negligible cost.
For cryptographers, MTSF provides a rigorous and expressive framework that unifies four major proof paradigms into a single formal language. It introduces key technical innovations such as the extended difference lemma for handling multiple simultaneous failure events, bidding-based reductions that explicitly model adversarial strategies, a dual methodology that treats proofs and disproofs symmetrically within the same structure, and a session pinging mechanism for unbounded session verification. The framework extends seamlessly to classical and post-quantum primitives, real-world protocols (including TLS 1.3 and Signal), and even quantum-adversarial settings, while preserving quantitative security bounds and composability guarantees.
MTSF offers an intuitive, accessible, and powerful meta-model: security is like a marketplace where attackers try to ``buy'' a break, and defenders ensure the price is prohibitively high. Each proof becomes a sequence of small price adjustments, and each attack corresponds to a failed or successful bid. By combining mathematical rigor with economic intuition, MTSF transforms security proofs from opaque technical artifacts into transparent, auditable, and universally understandable arguments, enabling both experts and practitioners to reason about security with clarity and confidence.
VERIDP: Verifiable Differentially Private Training
Stochastic Gradient Descent (SGD) is the foundation of modern machine learning (ML). In privacy-sensitive settings, gradients can reveal details about individual data points. Differential Privacy (DP) protects sensitive data during ML training by clipping gradients and adding calibrated Gaussian noise. However, existing frameworks assume semi-honest participants, an assumption that fails in adversarial or federated environments where malicious actors can bypass or alter the noise-addition process, breaking privacy guarantees.
We present VERIDP, a framework for verifiable differentially private training that cryptographically enforces and proves the correct execution of differentially private stochastic gradient descent (DP-SGD) in zero knowledge. VERIDP integrates Zero-Knowledge Proofs (ZKPs) with polynomial commitments, sumcheck and GKR-based proofs, and incrementally verifiable computation (IVC) to generate compact proofs of correct gradient computation, clipping, averaging, and Gaussian noise generation—without revealing private data or randomness. Unlike previous systems that only verify the final privacy budget, VERIDP enables per-iteration verifiability of each model update, providing strong privacy assurances even in adversarial settings. This establishes a novel and complete Zero-Knowledge Proof of Differentially Private Stochastic Gradient Descent (ZK-DPSGD), uniting differential privacy and verifiable computation for secure and auditable ML. Our evaluation shows that prover time increases linearly with the number of input samples, while both verifier time (2–5 ms) and proof size (3–4 KB) remain compact and effectively constant.
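To fix ideas about what is being proven, the following is a plain, unverified DP-SGD step in the common clip-average-noise convention (a sketch under our own naming; VERIDP's circuits prove these operations in zero knowledge, but this is not its code):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    # Clip each per-example gradient to L2 norm at most clip_norm.
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    # Calibrated Gaussian noise on the averaged gradient; the standard
    # deviation scales with clip_norm and shrinks with the batch size.
    sigma = noise_multiplier * clip_norm / len(clipped)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

rng = np.random.default_rng(0)
grads = [rng.normal(size=4) for _ in range(8)]  # toy per-example gradients
print(dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng))
```

A malicious trainer could silently skip the noise line; VERIDP's point is to make each of these steps cryptographically attestable per iteration.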
Towards Verifiable AI with Lightweight Cryptographic Proofs of Inference
When large AI models are deployed as cloud-based services, clients have no guarantee that responses are correct or were produced by the intended model. Rerunning inference locally is infeasible for large models, and existing cryptographic proof systems—while providing strong correctness guarantees—introduce prohibitive prover overhead (e.g., hundreds of seconds per query for billion-parameter models). We present a verification framework and protocol that replaces full cryptographic proofs with a lightweight, sampling-based approach grounded in statistical properties of neural networks. We formalize the conditions under which trace separation between functionally dissimilar models can be leveraged to argue the security of verifiable inference protocols. The prover commits to the execution trace of inference via Merkle-tree-based vector commitments and opens only a small number of entries along randomly sampled paths from output to input. This yields a protocol that trades soundness for efficiency, a tradeoff well-suited to auditing, large-scale deployment settings where repeated queries amplify detection probability, and scenarios with rationally incentivized provers who face penalties upon detection. Our approach reduces proving times by several orders of magnitude compared to state-of-the-art cryptographic proof systems, going from the order of minutes to the order of milliseconds, with moderately larger proofs. Experiments on ResNet-18 classifiers and Llama-2-7B confirm that common architectures exhibit the statistical properties our protocol requires, and that natural adversarial strategies (gradient-descent reconstruction, inverse transforms, logit swapping) fail to produce traces that evade detection. We additionally present a protocol in the refereed delegation model, where two competing servers enable correct output identification in a logarithmic number of rounds.
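A minimal sketch of the commit-and-spot-check pattern, simplified to a Merkle commitment over trace entries with a uniformly random index opening (the actual protocol samples paths from output to input; all names here are ours):

```python
import hashlib, secrets

def h(b):  # domain-separated leaf/node hashing is omitted for brevity
    return hashlib.sha256(b).digest()

def merkle_tree(leaves):
    # Build all levels, from hashed leaves up to the root
    # (number of leaves must be a power of two in this toy version).
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def open_index(levels, i):
    # Authentication path: one sibling hash per level.
    path = []
    for level in levels[:-1]:
        path.append(level[i ^ 1])
        i //= 2
    return path

def verify(root, leaf, i, path):
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if i % 2 == 0 else h(sib + node)
        i //= 2
    return node == root

trace = [bytes([t]) * 4 for t in range(8)]   # toy execution-trace entries
levels = merkle_tree(trace)
root = levels[-1][0]                          # the prover's commitment
i = secrets.randbelow(len(trace))             # verifier's random challenge
assert verify(root, trace[i], i, open_index(levels, i))
```

Each opened entry costs only a logarithmic-size path, which is why repeated queries can amplify detection probability cheaply.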
Ticket to Hide: Private, Practical Proofs of Provenance for TLS
When using Transport Layer Security (TLS), web users can connect to a server and trust that they are sending and receiving data with the intended web server. This guarantee, however, is not transferable: there is no immediate way for a client to convince an external party that a transcript or message originated from a particular server. Beginning with the DECO protocol of Zhang et al., there has been a line of work on "TLS oracles"—cryptographic protocols that allow a client to commit to, prove the provenance of, and disclose arbitrary properties of TLS application data to a verifier party. TLS oracles only require the server to run standard TLS, making them compatible with existing real-world web servers.
In this work we introduce Ticket to Hide, a new TLS oracle protocol for TLS 1.3. We operate in the multi-server setting, previously explored in the DiStefano protocol by Celi et al., in which the client additionally wishes to hide the identity of the server they are communicating with among a set of $N$ publicly known servers. We leverage new features of TLS 1.3 in surprising ways to yield performance and security benefits, resulting in a protocol that is both faster and more private than previous work. Additionally, we are the first TLS oracle protocol to be compatible with post-quantum secure TLS key agreement and certificates. Our implementation, which builds on top of the Garble-then-Prove framework of Xie et al., scales to $N=100$ servers in less than 3 seconds of end-to-end time in a WAN setting—only 3.5$\times$ the latency of a regular TLS 1.3 interaction.
Orca And Dolphin: Efficient Bivariate And Multilinear Polynomial Commitment Schemes Under Standard Assumptions
Most polynomial commitment schemes have either superlinear prover time or superconstant argument size. Recently, Ganesh, Patranabis, and Singh introduced SamaritanPCS, and Eagen and Gabizon proposed Mercury. Both build on efficient univariate polynomial IOPs that lift univariate polynomial commitment schemes (PCSs) to the multilinear setting, enabling sum-check-based multilinear polynomial IOPs for prover-efficient zk-SNARKs with small communication. Since multilinear PCSs are fundamental building blocks of zk-SNARKs, they must be secure under minimal assumptions while remaining maximally efficient. However, SamaritanPCS and Mercury achieve knowledge soundness only in the joint random-oracle and algebraic-group model. We introduce Orca and Dolphin, optimized bivariate and multilinear PCSs, respectively. We prove that their interactive evaluation protocols have computational special soundness in the standard model, assuming that KZG satisfies binding and interpolation binding (both secure under ARSDH). Thus, they have knowledge soundness in the random oracle model. Both schemes also admit a more efficient evaluation protocol that is knowledge-sound in the joint random-oracle and algebraic-group model. Dolphin's evaluation phase is more efficient than either SamaritanPCS's or Mercury's.
Proof-Carrying Data via Holography Accumulation
Succinct non-interactive arguments of knowledge (SNARKs) enable the verification of complex computations via short proofs. Recursive proof composition allows long-running or distributed computations to be verified incrementally, but existing approaches exhibit a fundamental trade-off. Folding-based schemes achieve highly efficient recursion but require provers to maintain and communicate large private state, while stateless approaches such as full SNARK recursion and atomic accumulation incur higher prover costs due to the need to produce and verify a full SNARK proof at each step. We introduce holography accumulation, a framework for stateless recursive proving for SNARKs based on the lincheck or checkable subspace arguments. These SNARKs admit a natural decomposition of verification into witness-dependent checks and public polynomial evaluations encoding the computation. We show that the latter, which we call holographic checks, can be accumulated efficiently across recursive steps. To formalize this idea, we introduce generalized bilinear forms (GBF), a linear-algebraic abstraction capturing the holographic verification procedures of several modern SNARKs. Using this abstraction, we construct generic PCD schemes compatible with both univariate and multivariate polynomial commitment schemes, and present an efficient decider that collapses the accumulated checks to a single polynomial evaluation.
Cheap Digit Decomposition and Large Plaintext Spaces in FHEW using Phase Splitting
Fully homomorphic encryption algorithms enable users to perform computation on encrypted data. Since the first candidate scheme was proposed in 2009 by Gentry, schemes have rapidly improved in all metrics, whether computational complexity or memory efficiency. The class of accumulator-based schemes, which includes FHEW and TFHE, is designed to operate on small data, usually ranging between 4 and 5 bits. Yet such schemes can be effectively leveraged in practice to lift small data to larger plaintext domains through so-called programmable bootstrapping: the ability to evaluate arbitrary functions on an encrypted datum in time independent of the function.
In this work, we present novel methods for homomorphic digit decomposition: the task of efficiently breaking up a large encrypted datum, vastly exceeding the conventional plaintext domain size, into a radix representation for a chosen basis. Our approach relies on a computationally inexpensive decomposition of a ciphertext into chunks that can be assembled into the original message, without requiring that such chunks correspond to actual digits. Asymptotically, our approach doubles the performance compared to prior work, and in practice it is 90% faster than the state of the art by Liu et al. As a direct consequence of our digit decomposition, we describe how to increase the size of the plaintext domain by a large factor, while only doubling the computational complexity and avoiding any super-polynomial slowdown. Although concurrent works on functional bootstrapping reach similar improvements regarding the plaintext domain, our approach stands out for its conceptual simplicity and flexibility.
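At the plaintext level, the target functionality is ordinary radix decomposition and reassembly; the paper's insight is that the homomorphic chunks need not be actual digits as long as they reassemble correctly. A toy plaintext analogue (our own, with illustrative base and sizes):

```python
def decompose(m, base, num_chunks):
    # Break m into num_chunks base-`base` digits, least significant first.
    return [(m // base**i) % base for i in range(num_chunks)]

def reassemble(chunks, base):
    return sum(c * base**i for i, c in enumerate(chunks))

base, k = 16, 4
m = 0xBEEF                      # a "large" message exceeding one digit
chunks = decompose(m, base, k)  # [0xF, 0xE, 0xE, 0xB]
assert reassemble(chunks, base) == m
```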
Exploring the Boundary: Discriminative Model-based Parameter Search for Fault Injection
Fault Injection (FI) attacks are physical attacks designed to induce specific malfunctions in target devices. The reliable induction of intended faults requires precise tuning of fault parameters. However, existing parameter search strategies typically suffer from an imbalance between exploration and exploitation. This limitation frequently leads to premature convergence to local optima or inadequate investigation of high-potential regions. In this paper, we propose a novel parameter search framework that employs an ensemble of discriminative models to efficiently generate parameter candidates with high success probabilities. Our approach integrates a regression model to explore the boundary between normal and mute verdicts, leveraging the boundary hypothesis, and a classification model to exploit discovered intended fault samples. Furthermore, we introduce the Refining Successive Halving Algorithm (RSHA) to efficiently identify the global optimum among the discovered fault parameters with statistical confidence. Extensive validation across eight scenarios, involving Voltage Glitching (VG) and Electromagnetic Fault Injection (EMFI), demonstrates that our method consistently outperforms state-of-the-art techniques. Specifically, it identifies up to $42.2\times$ more unique intended fault parameters and improves success rates by up to 20.3 percentage points compared to the best-performing baseline.
Improved Related-Key Differential Neural Distinguishers for SPN Block Ciphers
Related-key differential neural distinguishers have recently attracted increasing attention in block-cipher cryptanalysis, yet their construction still relies heavily on cipher-specific manual design. In this paper, we study the systematic construction of related-key differential neural distinguishers for lightweight substitution–permutation network (SPN) block ciphers and propose a unified framework covering difference selection, dataset construction, network architecture, and training and evaluation. Within this framework, we develop a feature-enhancement method that exploits the invertibility of SPN components to derive representations more informative about the final-round internal state, and a sample-enhancement method that reuses each plaintext pair across related keys to derive multiple ciphertext-pair relations, thereby enriching each sample without increasing the plaintext budget. We validate the proposed framework on SKINNY-64/64 and PRESENT-64/80. Experimental results demonstrate that the proposed method can effectively construct multi-round related-key differential neural distinguishers, with accuracy improving consistently as the number of plaintext pairs per sample increases. In particular, for SKINNY-64/64, the single-pair setting achieves classification accuracies of 100.0%, 68.2%, and 59.3% for 7, 8, and 9 rounds, respectively, providing, to the best of our knowledge, the first experimental results on related-key differential neural distinguishers for this cipher. For PRESENT-64/80, under the four-pair setting, the proposed method achieves competitive distinguishing performance up to 9 rounds, with accuracies of 95.6%, 72.0%, and 53.7% for 7, 8, and 9 rounds, respectively.
Ciphertext-Policy ABE for $\mathsf{NC}^1$ Circuits with Constant-Size Ciphertexts from Succinct LWE
We construct a lattice-based ciphertext-policy attribute-based encryption (CP-ABE) scheme for $\mathsf{NC}^1$ access policies with constant-size ciphertexts. Let $\lambda$ be the security parameter. For an $\mathsf{NC}^1$ circuit of depth $d$ and size $s$ on $\ell$-bit inputs, our scheme has public-key and ciphertext sizes $O(1)$ (independent of $d$) and secret-key size $O(\ell)$, where the $O(\cdot)$ hides $\operatorname{poly}(\lambda)$ factors. As an application, we obtain a broadcast encryption scheme for $N$ users with ciphertext size $\operatorname{poly}(\lambda)$ independent of $\log N$ and key sizes $\operatorname{poly}(\lambda,\log N)$. Our construction is selectively secure in the standard model under the $\operatorname{poly}(\lambda)$-succinct LWE assumption introduced by Wee (CRYPTO 2024).
A Maliciously-Secure Post-Quantum OPRF from Crypto Dark Matter
We construct protocols for oblivious pseudorandom functions (OPRFs) based on alternating moduli assumptions in the "Crypto Dark Matter" paradigm (Boneh et al., TCC 2018). Prior OPRFs based on this type of assumption were secure only against a semi-honest adversary. We show how to obtain maliciously secure protocols by leveraging new cut-and-choose techniques for generating correlated randomness based on vector oblivious linear evaluation (VOLE), which allow efficient conversions between different moduli in zero-knowledge and secure two-party computation.
Compared with the state-of-the-art GOLD OPRF (Yang et al., S&P 2025), our construction has a faster online phase in all settings, as well as better overall efficiency in the small-batch setting. Furthermore, our construction supports obtaining a secret-shared output and can be extended to handle secret-shared inputs. This opens up additional applications in variants of private set intersection and secure database operations.
S-two Whitepaper
This whitepaper describes S-two, a circle STARK (Haböck, Levit, Papini 2024) over the Mersenne prime field with modulus $p = 2^{31} - 1$.
We formalize the "flat AIR" circuit model, a modern arithmetization paradigm used by several contemporary zero-knowledge virtual machines, and we provide an in-depth security analysis of our proof of proximity for flat AIRs.
For the latter, we highlight the importance of "cross-domain correlated agreement", a notion which is crucial for taming the soundness error of multi-table proofs. We show that multi-table circle FRI satisfies this notion up to the Johnson bound of the code, and we discuss two plausible conjectures on the list-decodability and line-decodability of Reed-Solomon codes, which are in alignment with the recent progress on proximity gaps.
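Part of the appeal of this field is that reduction modulo $p = 2^{31} - 1$ needs no division, since $2^{31} \equiv 1 \pmod{p}$. A minimal sketch of the standard folding trick (our illustration, not S-two's code):

```python
P = (1 << 31) - 1  # the Mersenne prime 2^31 - 1

def reduce_m31(x):
    # Reduce 0 <= x < 2^62 (e.g., a product of two field elements) mod P
    # by folding high bits onto low bits, using 2^31 = 1 (mod P).
    x = (x & P) + (x >> 31)   # now x < 2^32
    x = (x & P) + (x >> 31)   # now x <= P + 1
    return x - P if x >= P else x

a, b = 0x7ABCDEF0, 0x12345678
assert reduce_m31(a * b) == (a * b) % P
```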
A Review of IC Logical Reverse Engineering Techniques
In the modern, globalized supply chain for application specific integrated circuits (ASICs), reverse engineering (RE) techniques can be employed for malicious and benign reasons. This survey defines the specific problem of logical RE from a hardware security perspective, examines the earliest RE-adjacent techniques, organizes contemporary RE works by both objective and methodology, and summarizes publication trends and the evolution of logical RE over the years. We review existing techniques, tracing their evolution from manual evaluation and structural analysis to graph theory and machine learning-based solutions. In addition, the survey identifies common trends and evaluation practices, discussing the strengths and drawbacks of the current literature. We also present a set of unique unaddressed problems, highlighting areas that have not been sufficiently explored as well as completely novel problems in ASIC RE. In conclusion, our findings provide a valuable foundation for researchers interested in RE and the future of the field.
Balthazar Wallet: Making Password Authentication Practical on Web3 via OPAQUE and Privacy-Preserving Smart Contracts
Username & password is the most common authentication method in Web2 because of its high usability and efficient protection against brute-force attacks by applying rate limits on the server. In contrast, Web3 wallets cannot securely support password-derived keys. Passwords typically have low entropy, and because blockchain environments are public and impose no rate limits on brute-force attempts, attackers can repeatedly test guesses offline until the private key is recovered.
In this work, we present a novel password-based blockchain wallet that enables secure management of private keys (of any blockchain type) within a privacy-preserving smart contract platform (PPP). To store the keys, we adapt the OPAQUE protocol to the decentralized environment of blockchains and leverage the properties of the TEE within the PPP. Our design consists of the client, a relay, and a smart contract deployed on the PPP. To this end, we propose a user enrollment protocol and a private key retrieval protocol that require knowledge of the username and password and apply blockchain-enforced rate limits to guessing attempts.
Our implementation is based on the Oasis Sapphire confidential EVM as an instance of PPP. Our system implements OPAQUE’s Oblivious Pseudorandom Function (OPRF) inside a smart contract, allowing the contract to act as the protocol’s server while keeping the long-term OPRF key protected within the enclave. During authentication, the client performs a blinded OPRF interaction so that neither the password nor its derivatives are revealed to the relay, blockchain, or the public.
Experiments show that a single authentication attempt requires approximately 500k and 300k gas when the OPRF is instantiated over 2048-bit and 1024-bit finite-field DLP groups, respectively, while one-time registration costs approximately 270k gas.
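For intuition, a DH-style blinded OPRF exchange can be sketched as follows (our toy sketch: a 23-element group stands in for the 1024/2048-bit finite-field groups above, and the function names are ours, not the contract's API):

```python
import hashlib, secrets

p, q = 23, 11  # toy safe-prime group p = 2q + 1; deployments use 1024/2048-bit p

def hash_to_group(pw: bytes) -> int:
    # Hash into the order-q subgroup of F_p* by squaring (toy hash-to-group).
    ctr = 0
    while True:
        x = int.from_bytes(hashlib.sha256(pw + bytes([ctr])).digest(), "big") % p
        if x not in (0, 1, p - 1):
            return pow(x, 2, p)
        ctr += 1

k = secrets.randbelow(q - 1) + 1        # server-side OPRF key (kept in the enclave)
r = secrets.randbelow(q - 1) + 1        # client's one-time blinding factor

x = hash_to_group(b"correct horse")
blinded = pow(x, r, p)                  # client -> contract: H(pw)^r
evaluated = pow(blinded, k, p)          # contract -> client: H(pw)^(r*k)
unblinded = pow(evaluated, pow(r, -1, q), p)  # client removes r
assert unblinded == pow(x, k, p)        # = H(pw)^k, learned only by the client
```

Because the contract only ever sees the blinded element, neither the password nor any deterministic function of it leaks to the relay or the chain.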
Benchmarking Exported Key Material from Commercial QKD Systems Using SENTRY-Q: A Model-Based Output Validator
Quantum Key Distribution (QKD) enables two parties to establish fresh cryptographic key material with information-theoretic security guarantees, given an authenticated classical channel and appropriate device and threat models. As QKD deployments mature from laboratory settings into production-grade field infrastructure, a practical gap emerges: protocol-level metrics such as quantum bit error rate (QBER) and secret key rate (SKR) characterise the quantum link but do not directly specify how exported key blocks, as consumed by downstream key management systems (KMS) and cryptographic applications, should be validated for stable, anomaly-free behaviour at the delivery interface. This paper addresses that operational gap. We present an anonymised benchmark study of three commercial QKD systems using SENTRY-Q, a reproducible measurement workflow that computes five block-level indicators (Hamming weight balance, min-entropy proxy, Lempel–Ziv complexity, Borel normality deviation, and serial correlation), complemented by a long-stream NIST SP 800-22 sanity check applied to the concatenated key pool. The study covers $N = 10,000$ exported 256-bit keys per system, spanning a laboratory DV-QKD link (System-1, ∼20 km), a dark-fibre field DV-QKD deployment (System-2, ∼100 km), and a laboratory CV-QKD system (System-3). We scope the contribution as model-based output benchmarking, not as a proof of conditional secrecy. Within that scope, all three systems exhibit block-level distributions consistent with unbiased reference expectations, with Hamming weight medians of exactly 128.0 bits and min-entropy medians of 241.85 bits across all systems. However, the NIST long-stream sanity check reveals system-differentiated anomalies, namely a Non-Overlapping Template Matching failure in System-2 and an Overlapping Template Matching failure with borderline Binary Matrix Rank in System-3, that are invisible to block-level analysis. These anomalies do not constitute proven security vulnerabilities; rather, they represent operationally significant signals that warrant engineering investigation. Critically, the block-level and long-stream analysis layers detect fundamentally different failure classes and cannot substitute for one another; both are necessary components of a complete key-delivery-pipeline benchmarking workflow. We discuss deployment implications and provide a standardised regression-testing artefact suitable for acceptance and longitudinal monitoring workflows.
Full Secret Key Recovery of First-order Masked Crystals-Kyber implementation using multiple distinct chosen-ciphertexts
We present a novel side-channel attack on first-order masked implementations of Crystals-Kyber. It deploys a new distinguisher in the context of post-quantum cryptography, combining the information from several instances of the same distinguisher across multiple ciphertext decryption requests. The attack has been performed in simulation and illustrated on the masked implementation of Bronchain et al. It is instantiated in a very noisy environment (Signal-to-Noise Ratio (SNR) of 0.67) and achieves a success rate of 95% with 75,000 traces for full secret key recovery.
QR-UOV without Rejection Sampling: Security Analysis and High-Speed Implementation
QR-UOV is a multivariate signature scheme derived from UOV that achieves compact public keys by exploiting quotient-ring structure, making it a promising candidate for post-quantum digital signatures. In QR-UOV, most of the public map is constructed by expanding the public-key seed using a PRG. This public-key expansion includes rejection sampling to generate coefficients uniformly over $\mathbb{F}_q$, since QR-UOV uses a small odd-prime base field. However, this rejection sampling introduces extra data movement and irregular control flow. For the recommended parameter set, public-key expansion accounts for nearly 90% of the QR-UOV verification time.
In this paper, we propose No Rejection Sampling (NoRS) QR-UOV, which removes rejection sampling from public-key expansion and leaves the generation of secret-dependent coefficients unchanged. Concretely, the rejected value $q$ is deterministically mapped to $0$, which simplifies coefficient generation but introduces a slight bias in the resulting coefficient distribution. We evaluate the security impact of this modification through both theoretical and concrete analyses. Our results indicate that, for the proposed parameter sets, NoRS QR-UOV preserves the claimed security levels.
On the implementation side, we develop a high-speed implementation of NoRS QR-UOV for x86 processors with AES-NI and AVX2. Benchmark results on a Skylake platform show that NoRS consistently accelerates QR-UOV at all security levels, with the largest gain in signature verification.
For the AES-128-based implementation, the verification cost is reduced from $0.43$ to $0.30$ Mcycles at security level I, with similar improvements at levels III and V, corresponding to about $1.4\times$ speedup. Overall, the results suggest that relaxing coefficient uniformity in public-key expansion is a practical and effective design choice for QR-UOV.
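To make the modification concrete, the two expansion strategies can be contrasted as follows (our sketch with illustrative parameters such as $q = 127$; this is not the QR-UOV reference code):

```python
import hashlib

q = 127  # illustrative small odd prime; seven PRG bits give values in 0..127

def prg(seed: bytes, n: int) -> bytes:
    return hashlib.shake_128(seed).digest(n)  # stand-in PRG

def expand_reject(seed, n):
    # Classic rejection sampling: exactly uniform on F_q, but data-dependent
    # control flow (a draw equal to q is discarded and more bytes are consumed).
    coeffs, ctr = [], 0
    while len(coeffs) < n:
        for b in prg(seed + ctr.to_bytes(4, "little"), 64):
            v = b & 0x7F
            if v < q and len(coeffs) < n:
                coeffs.append(v)
        ctr += 1
    return coeffs

def expand_nors(seed, n):
    # NoRS: map the rejected value q to 0. One byte per coefficient and
    # straight-line code, at the cost of a slight bias towards 0.
    return [0 if (b & 0x7F) == q else (b & 0x7F) for b in prg(seed, n)]

print(expand_nors(b"pk-seed", 8))
```

The NoRS variant consumes a fixed number of PRG bytes per coefficient, which is exactly what removes the data movement and branching overhead from verification.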
Broken By Design: A Longitudinal Analysis of Cryptographic Failures in Alipay Mobile Payment Infrastructure
We present a systematic security analysis of Alipay's APK signing certificate, issued in 2009 using md5WithRSAEncryption with RSA-1024 and still active in 2026, serving over one billion users. Through 15 reproducible proof-of-concept attacks organized as a complete kill chain, we demonstrate that every layer of Alipay's cryptographic infrastructure is exploitable using known techniques and commodity hardware.
Our analysis spans four attack surfaces: (1) certificate-layer weaknesses, including MD5 collision generation in 9 seconds and SHA-1 collision feasibility at $5K–$8K; (2) signature-scheme vulnerabilities, including Janus (CVE-2017-13156) code injection and five distinct v1 signature bypass techniques; (3) key-management failures, including hardcoded DES keys with Shannon entropy of 1.75–2.50 bits/byte (vs. the ideal 8.0 bits/byte) and RSA key reuse across 69 APK modules; and (4) ecosystem-level PRNG failures, evidenced by 8 shared prime factors across 28 RSA keys recovered via batch GCD from TLS server certificates associated with the mobile payment ecosystem.
Our analysis reveals ecosystem-level cryptographic decay: 38 of 123 certificates (30.9%) use RSA-1024, and batch GCD factoring uncovers 8 shared primes across 28 keys. Liveness probing of 5 servers whose RSA private keys were fully recovered confirmed that 3 remain operational with vulnerable TLS configurations. Responsible disclosure to Ant Group on January 15, 2026 received a response classifying all findings as normal functionality on March 10, 2026. We release all proof-of-concept code and an automated APK cryptographic audit tool for independent verification.
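The shared-factor phenomenon is easy to reproduce in miniature (a naive pairwise sketch of our own; production batch GCD uses product and remainder trees to run in quasi-linear time):

```python
from math import gcd

def shared_prime_scan(moduli):
    # Naive O(n^2) scan: any nontrivial gcd between two RSA moduli
    # immediately factors both of them.
    hits = []
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = gcd(moduli[i], moduli[j])
            if 1 < g < moduli[i]:
                hits.append((i, j, g))
    return hits

# Toy moduli: the first two share the prime 101 (illustrative values only).
print(shared_prime_scan([101 * 103, 101 * 107, 109 * 113]))  # [(0, 1, 101)]
```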
SoK: Understanding zkVM: From Research to Practice
Zero-knowledge virtual machines (zkVMs) are a powerful infrastructure for proving the correctness of a program execution with a succinct proof, attracting significant interest from researchers, developers, and users. They have been widely used in applications such as blockchain rollups, privacy-preserving machine learning, and off-chain computation. As the field grows, a wide range of zkVMs have been proposed. However, they adopt different choices in instruction formats, trace layouts, and proving backends, which results in a highly heterogeneous design landscape and makes it difficult to understand the relations among these systems.
To bridge this gap, we provide a comprehensive study of zkVMs that covers both their theoretical foundations and practical implementations. We decompose zkVMs into three layers: (1) the ISA layer, which defines instruction semantics and determines the structure of the execution trace, (2) the VM layer, which captures program execution and organizes constraints through modular circuit components, and (3) the proving layer, which converts execution traces into algebraic constraints and generates the final proofs. This decomposition allows us to isolate the role of each layer while also examining how they interact in real systems. To give readers a more direct understanding of how these design choices affect performance, scalability, and usability, we conduct a comprehensive experimental evaluation of representative zkVMs following this layered framework. Finally, we conclude the paper by summarizing the main observations from our analysis and outlining several potential directions for zkVM design and implementation.
Note: this work will appear in AsiaCCS 2026.