About Me
My name is Amirhossein Samandar. I am a researcher and PhD candidate at Case Western Reserve University (CWRU), on track to earn my Master's in Physics in Fall 2025. My academic journey began with a B.Sc. in Physics from Sharif University of Technology, where I built a strong foundation in theoretical physics. On a personal note, U.S. travel restrictions on Iranian citizens have prevented my wife from joining me here, motivating me to seek opportunities in Europe where we can reunite while I advance my career.
As part of the COMPACT collaboration, I’ve published five cosmology papers and developed CMBtopology, an HPC Python package for Cosmic Microwave Background covariance matrices in compact Euclidean topologies. I now apply this expertise to Large Language Models (LLMs), developing a Bayesian evaluation framework (“Don’t Pass@k”, under review at ICLR 2026), optimizing models with asymmetric KV-cache quantization and lossless compression, and contributing to the ReasoningBench benchmark. I’m also exploring diffusion models for LLM noise modeling, including brain wave applications.
Highlight Research
My research spans cosmology and AI, with key contributions in cosmic topology and LLM evaluation. Notable projects include:
- 🔧 Developing CMBtopology, a Python-based HPC package for computing CMB covariance matrices in compact Euclidean topologies.
- 📊 Bayesian analysis of Planck CMB data to constrain cosmic topology.
- 🤖 Machine learning classification of non-trivial universe topologies using CMB simulated data.
- 📈 A novel Bayesian evaluation framework for LLMs, “Don’t Pass@k: A Bayesian Framework for Large Language Model Evaluation” (under review at ICLR 2026, arXiv:2510.04265 [cs.AI]).
- 🛠️ Development of scorio, an open-source Python and Julia toolkit for uncertainty-aware evaluation of LLMs.
- 📚 Publications in Journal of Cosmology and Astroparticle Physics, including: (arXiv:2510.05030, arXiv:2407.09400, arXiv:2503.08671, arXiv:2404.01236, arXiv:2409.02226).
- ⚙️ Benchmarking reasoning capabilities in LLMs through ReasoningBench, evaluating models across math, science, instruction-following, and code generation tasks (publication in preparation for Transactions on Machine Learning Research, TMLR).
Research in AI
I apply statistical and computational methods to advance large language models (LLMs):
- 📈 A novel Bayesian evaluation framework for LLMs, “Don’t Pass@k,” which remains robust with fewer trials and reduces computational cost (under review at ICLR 2026, arXiv:2510.04265).
- 🛠️ Co-developed scorio, an open-source Python and Julia toolkit for uncertainty-aware evaluation of sampling-based decoding (GitHub: https://github.com/Amirsamandar/bayes-kit).
- 🔄 Non-author contribution to asymmetric KV-cache quantization for LLMs, optimizing bit allocation for efficiency on resource-constrained hardware (anonymous ACL submission).
- 📦 Non-author contribution to lossless compression techniques using Huffman coding, achieving a 30% model size reduction (arXiv:2504.11651).
- 🚀 Extending compression and quantization methods with Bayesian frameworks for GPU algorithms and large-scale training optimizations (forthcoming paper).
- ⚙️ Contributed to ReasoningBench, a benchmark assessing reasoning capabilities in LLMs (foundation, fine-tuned, reinforcement learning, and hybrid models) across math, science, IFEval, and code tasks, analyzing sampling strategies and inference-time/post-training impacts.
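To illustrate the kind of uncertainty-aware evaluation described above, here is a minimal sketch contrasting the standard pass@k point estimate with a simple Beta-Binomial posterior over a model's per-trial success rate. This is a generic illustration only, not the framework from the paper; the function names and the uniform prior are my own choices for the example.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Classic unbiased pass@k estimator: the probability that at least one
    of k samples drawn (without replacement) from n trials containing c
    correct completions is correct."""
    if n - c < k:
        return 1.0  # too few failures to fill k slots: success is certain
    return 1.0 - comb(n - c, k) / comb(n, k)

def beta_posterior(n: int, c: int, alpha: float = 1.0, beta: float = 1.0):
    """Posterior mean and variance of the per-trial success rate under a
    Beta(alpha, beta) prior, after observing c successes in n trials.
    The posterior is Beta(alpha + c, beta + n - c)."""
    a, b = alpha + c, beta + n - c
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# Example: 3 correct answers out of 10 trials.
print(pass_at_k(10, 3, 5))          # 1 - C(7,5)/C(10,5) = 11/12 ≈ 0.9167
mean, var = beta_posterior(10, 3)   # uniform prior -> Beta(4, 8)
print(mean)                         # 4/12 ≈ 0.3333
```

The point estimate collapses all n trials into a single number, while the posterior keeps a variance term, so the uncertainty from a small number of trials stays visible in the reported score.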
Research in COMPACT
As part of the COMPACT collaboration (Case Western Reserve, Pittsburgh, Imperial College London, IFT Madrid):
- 🌌 Explored signatures of non-trivial topology in CMB anisotropies, computing temperature and polarization correlation functions.
- 📚 Published five papers on cosmic topology, including eigenmodes of non-orientable manifolds, parity violation without parity-violating microphysics, spin-2 perturbations, machine learning classification of toroidal universes, and limits on lens spaces.
- 📊 Applied Bayesian likelihood analysis to Planck PR4 data, improving constraints from prior studies.
- 🔧 Developed efficient GPU-parallelized code for CMB covariance matrices and trained variational autoencoders as likelihood emulators.
- 🤝 Recently initiated collaboration with LiteBIRD to develop pipelines for cosmic topology analysis in CMB polarization studies.
For More Info
Explore sections on publications, research experience, skills, teaching, and CV. Contact me via the sidebar for collaborations or opportunities.
