Key-Conditioned Orthonormal Transform Gating (K-OTG): Multi-Key Access Control with Hidden-State Scrambling for LoRA-Tuned Models
Keywords: Access Control, Orthonormal Transforms, Language Model Security
Abstract: We present a simple, PEFT-compatible mechanism that enforces secret-key access control in instruction-tuned language models. K-OTG trains on a dual-path corpus: authorized examples (prefixed with a role key) learn the task output, while unauthorized examples learn a visible block token. At inference, a pre-LM-head hook applies an orthonormal transform to the hidden state: with the correct key/role the inverse map restores the model's native basis; otherwise a session-ephemeral scrambler (permutation, sign flips, Householder reflections) makes logits uninformative and the system short-circuits to the block token. Keys are not added as special tokens, and the method composes cleanly with LoRA on 4-bit bases. We evaluate an hour-scale protocol on 1--3B-class instruction models (Llama~3.2, Qwen2.5~1.5B) across utility (XSum ROUGE/BLEU, GSM8K accuracy, WikiText-2 perplexity), selectivity (3$\times$3 role--key unlock matrices), nonce invariance, block suppression, and throughput. Authorized utility remains close to the base on summarization with the expected modest PPL increase from instruction tuning; unauthorized utility collapses (near-zero sequence metrics with exploding PPL), indicating practical unusability without the key. Unlock matrices are diagonally dominant (high on-target unlock, low cross-unlock), authorized block emission is 0/N under robust bad-word lists, and greedy outputs match exactly across nonces, confirming correct inverse cancellation. The Python-level hook incurs a throughput overhead of $\sim$40\% (tokens/sec) versus the base. K-OTG therefore provides a pragmatic, model-agnostic way to \emph{prevent} unauthorized use while preserving authorized utility.
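To make the gating mechanism concrete, the following is a minimal sketch of the transform the abstract describes: a session-ephemeral orthonormal map built from a permutation, sign flips, and a Householder reflection, applied via a forward pre-hook before the LM head. All names here (make_scrambler, otg_pre_hook, model.lm_head) are illustrative assumptions, not taken from the paper; the actual K-OTG keying and wiring may differ.

import torch

def make_scrambler(d: int, seed: int) -> torch.Tensor:
    # Compose a session-ephemeral orthonormal map from a permutation,
    # random sign flips, and a Householder reflection, as the abstract lists.
    g = torch.Generator().manual_seed(seed)
    P = torch.eye(d)[torch.randperm(d, generator=g)]          # permutation matrix
    s = torch.randint(0, 2, (d,), generator=g).float() * 2 - 1
    S = torch.diag(s)                                         # random sign flips
    v = torch.randn(d, generator=g)
    v = v / v.norm()
    H = torch.eye(d) - 2.0 * torch.outer(v, v)                # Householder reflection
    return H @ S @ P                                          # product of orthonormal maps is orthonormal

d = 16
Q = make_scrambler(d, seed=1234)                              # fresh per session/nonce
assert torch.allclose(Q @ Q.T, torch.eye(d), atol=1e-5)       # orthonormal, so Q^{-1} = Q^T

h = torch.randn(d)                                            # hidden state entering the LM head
scrambled = h @ Q.T                                           # unauthorized path: logits uninformative
restored = scrambled @ Q                                      # authorized path composes the inverse
assert torch.allclose(restored, h, atol=1e-5)                 # exact cancellation -> nonce invariance

def otg_pre_hook(module, args):
    # Forward pre-hook that scrambles hidden states just before the LM head.
    (hidden,) = args                                          # shape [batch, seq, d]
    return (hidden @ Q.T,)
# handle = model.lm_head.register_forward_pre_hook(otg_pre_hook)  # HF-style causal LM (assumed)

Because the scrambler is orthonormal, the authorized path can cancel it exactly with its transpose, which is consistent with the reported byte-identical greedy outputs across nonces; without the matching inverse, the hidden state reaches the LM head in a rotated basis and the logits carry no usable task signal.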
Submission Number: 39