Quantum & Machine Intelligence Laboratory
QMI Lab
An independent research lab studying intelligence, learning, and representation across classical and quantum computation.
Founded by Nandan · Principal Researcher
QMI Lab investigates how machine intelligence emerges from representation, architecture, and learning dynamics. Our work spans classical machine learning, quantum machine learning, and long-horizon research on world models.
We focus on rigorous, publication-oriented research: empirical work in machine intelligence, benchmark-driven studies in quantum AI, and agenda-setting work at the frontier of quantum world models.
Research
QMI Lab organizes its work into three pillars, each with a different time horizon and evidentiary standard.
Foundations of Machine Intelligence
Current · Experimental · Publish-now
Representation, cross-lingual transfer, world-model evaluation, and learning dynamics. This is where QMI Lab builds its early publication record through empirical, reproducible work designed for present-day evaluation standards in NLP and machine learning.
- How do architectures shape learned representations?
- Can script normalization support efficient cross-lingual transfer?
- What evaluation methodologies work for world models beyond next-token prediction?
- What training dynamics support more transferable internal structure?
Quantum Machine Intelligence
Near-term · Benchmark-driven · Matched-resource comparisons
Hybrid quantum-classical architectures, matched-resource benchmarking, and honest tests of quantum advantage. Every project includes strong classical baselines, explicit resource accounting, and evaluation under matched comparison conditions.
- Do parameterized quantum circuits offer parameter efficiency in data-scarce regimes?
- How do encoding strategies affect information preservation in quantum systems?
- How do hybrid quantum-classical Transformer systems compare against classical alternatives?
- Where do the practical limits of NISQ-era quantum ML actually lie?
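The matched-resource comparisons described above hinge on explicit resource accounting. As a minimal sketch of what that accounting can look like (the function name and report format are illustrative, not QMI Lab's actual tooling), a helper can normalize trainable-parameter counts across the quantum and classical systems being compared:

```python
def matched_resource_report(models: dict[str, int]) -> dict[str, dict[str, float]]:
    """Given {model_name: trainable_parameter_count}, report each count
    alongside its ratio to the smallest model, so quantum and classical
    baselines can be compared under matched parameter budgets."""
    base = min(models.values())
    return {
        name: {"params": count, "ratio_to_min": count / base}
        for name, count in models.items()
    }

# Hypothetical counts for a quantum head vs. a classical MLP head.
report = matched_resource_report({"pqc_head": 24, "mlp_head": 2400})
```

The same idea extends to other budgets (training steps, wall-clock time, shots), each tracked explicitly rather than assumed comparable.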
Quantum World Models
Long-horizon · Conceptual + methodological
Quantum simulation, world-model structure, and evaluation in physically grounded domains. Near-term outputs are methodological and conceptual: taxonomies, evaluation frameworks, benchmark proposals, and position papers.
- What might a quantum-native world model architecture look like?
- How can quantum simulation interact with learned representations?
- How should quantum-state world models be evaluated?
- In which specific domains might quantum world models offer advantages?
Publications
Forthcoming — 2026
Cross-Lingual Transfer Through Romanization: A Five-Language Comparison
We investigate whether script normalization via romanization enables cheaper cross-lingual knowledge transfer from English LLMs. Five typologically diverse languages (Japanese, Hindi, Mandarin, Korean, Vietnamese) are compared across three training conditions (native script, romanized, mixed) using QLoRA fine-tuning on Llama 3.1 8B.
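The three training conditions can be sketched as a data-preparation step. This is a toy illustration only: the `romanize` stub stands in for a real per-language transliterator (e.g. a Hepburn romanizer for Japanese), and the function names are hypothetical, not the study's actual pipeline.

```python
import random

def romanize(text: str) -> str:
    """Stand-in transliterator: maps a few native-script strings to
    Latin script. Real work would use a per-language romanizer."""
    table = {"こんにちは": "konnichiwa", "नमस्ते": "namaste"}
    return table.get(text, text)

def prepare(example: str, condition: str, mix_prob: float = 0.5) -> str:
    """Format one training example under the native, romanized,
    or mixed condition compared in the study."""
    if condition == "native":
        return example
    if condition == "romanized":
        return romanize(example)
    if condition == "mixed":
        # Each example is romanized with probability mix_prob.
        return romanize(example) if random.random() < mix_prob else example
    raise ValueError(f"unknown condition: {condition}")
```

Examples prepared this way would then feed into standard QLoRA fine-tuning, identical across conditions so that only the script representation varies.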
Hybrid Quantum-Classical Transformer Fine-Tuning for NLP
We investigate hybrid quantum-classical architectures for Transformer fine-tuning, attaching parameterized quantum circuit classification heads to frozen pretrained models. We benchmark on SST-2 and few-shot classification tasks against strong classical baselines, with explicit resource accounting and matched comparison conditions.
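The architecture above replaces a classical classification head with a small parameterized quantum circuit. The following is a minimal sketch of that idea, not the paper's implementation: a pure-Python two-qubit statevector toy (in practice a framework such as PennyLane or Qiskit would be used), where two pooled features from the frozen encoder are angle-encoded, entangled, rotated by trainable weights, and read out as a Pauli-Z expectation that serves as the classification logit.

```python
import math

def apply_ry(state: list[float], q: int, theta: float) -> list[float]:
    """Apply an RY(theta) rotation to qubit q of a statevector
    (qubit q = bit q of the basis-state index)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    out = state[:]
    for i in range(len(state)):
        if (i >> q) & 1 == 0:
            j = i | (1 << q)
            a0, a1 = state[i], state[j]
            out[i] = c * a0 - s * a1
            out[j] = s * a0 + c * a1
    return out

def apply_cnot(state: list[float], ctrl: int, tgt: int) -> list[float]:
    """Flip the target qubit on basis states where the control bit is 1."""
    out = state[:]
    for i in range(len(state)):
        if (i >> ctrl) & 1 == 1 and (i >> tgt) & 1 == 0:
            j = i | (1 << tgt)
            out[i], out[j] = state[j], state[i]
    return out

def pqc_head(features: list[float], weights: list[float]) -> float:
    """PQC classification head: angle-encode two pooled encoder
    features, entangle, apply trainable rotations, return <Z> on
    qubit 0 as the logit (in [-1, 1])."""
    state = [1.0, 0.0, 0.0, 0.0]          # |00>
    for q, x in enumerate(features):      # data-encoding layer
        state = apply_ry(state, q, x)
    state = apply_cnot(state, 0, 1)       # entangling gate
    for q, w in enumerate(weights):       # trainable layer
        state = apply_ry(state, q, w)
    return sum((-1) ** (i & 1) * a * a for i, a in enumerate(state))
```

In the hybrid setup, only the rotation weights of the head are trained; the pretrained Transformer stays frozen, which is what makes the parameter-efficiency comparison against classical heads meaningful.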
Collaborate
QMI Lab is designed for collaboration from day one. Intelligence research benefits from diverse perspectives, and the intersection of quantum computing and AI is too broad for any single researcher to cover alone.
Who we work with
University researchers, student co-authors, independent researchers, technical affiliates, and aligned industry partners on research-driven problems.
What collaboration looks like
Co-authored papers, affiliate research, benchmark development, research seminars, and exploratory projects that can mature into publishable work.
What we look for
Methodological rigor, openness to classical baselines, interest in publishable work, and alignment with one of the three research pillars.
Areas of interest
NLP and representation learning, quantum computing and simulation, world models and evaluation methodology, benchmark design for quantum/classical comparison.
Interested in collaborating or affiliating with QMI Lab?
hello@qmilab.com →