WHAT THE HYPEGEIST MISSED: #1
- Dec 17, 2025
- 4 min read
Updated: 6 days ago

Science progresses in bursts. And sometimes, especially given the intensity of the science-news cycle, interesting bursts make very little impact on the hypegeist. So we have started an irregular series to keep you informed of what might otherwise go unnoticed. We're looking for the marriage of science and AI.
And this month, three papers raised our eyebrows, especially given their parallels with digital intelligence.
This trio of biology preprints that caught our attention might, at first glance, look unconnected. The first maps tissues with precision; the second uncovers a hidden rule inside CRISPR; and the third builds protein circuits that keep working even when the cell is short on resources.
Different aims, different methods. Yet they all circle the same deeper idea: complex systems do not behave neatly. And the real frontier is gaining control over the messiness.
Which, when you're working in biopharma, medtech or manufacturing - even financial services or any other complex, highly regulated business - is exactly what you need to do with every AI deployment. And the equivalence from biology to AI doesn't stop there. So it's worth looking at each in turn.
First, from spatial biology: Gene expression data used to be treated as if every cell floated in abstract space. Useful, but incomplete. A recent preprint changes that by pulling geometry directly into the analysis. BioLACE combines the physical layout of a tissue with known biological markers - so clusters of cells reflect the neighbourhoods they inhabit. It forces the model to honour the physical constraints of the tissue. Which is more than a technical tweak. It's treating location as data, not noise.
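To see what "treating location as data" means in practice, here is a toy sketch: cells are clustered on gene expression concatenated with spatially weighted coordinates, so distance in feature space reflects both biology and geometry. This is an illustration of the general principle, not BioLACE's actual algorithm; the data, the weighting scheme and the hand-rolled k-means are all invented for the example.

```python
import numpy as np

def spatially_aware_features(expression, coords, weight=0.5):
    """Concatenate z-scored expression with scaled spatial coordinates,
    so clustering respects both biology and tissue geometry."""
    expr = (expression - expression.mean(0)) / (expression.std(0) + 1e-8)
    xy = (coords - coords.mean(0)) / (coords.std(0) + 1e-8)
    return np.hstack([expr, weight * xy])

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: returns a cluster label for each row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

# Toy tissue: 200 cells with noisy expression, in two spatial neighbourhoods.
rng = np.random.default_rng(1)
expr = rng.normal(0, 1, (200, 10))                 # 200 cells x 10 genes
coords = np.vstack([rng.normal(0, 1, (100, 2)),    # neighbourhood A
                    rng.normal(8, 1, (100, 2))])   # neighbourhood B
labels = kmeans(spatially_aware_features(expr, coords, weight=2.0), k=2)
```

With `weight=0` this collapses back to location-blind clustering; raising the weight pulls the clusters towards the physical neighbourhoods.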
The equivalence in AI: Researchers are pushing beyond generic retrieval methods towards models that are grounded in structured context and can show their working. Papers on 'ontology-grounded retrieval', 'context-aware grounding pipelines' and 'context knowledge metrics' are about forcing models to respect the structure of the world.
In both fields: there is a shift away from free-floating pattern matching towards systems that anchor their conclusions in the structure of the environment.
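The flavour of ontology-grounded retrieval can be sketched in a few lines: documents are only eligible for ranking if their tagged entity sits under a query concept in a small ontology. This is an illustrative stand-in, not the OG-RAG method; the ontology, documents and overlap scoring are all hypothetical.

```python
# Toy ontology as child -> parent links (hypothetical example data).
ONTOLOGY = {
    "aspirin": "nsaid", "ibuprofen": "nsaid",
    "nsaid": "drug", "statin": "drug", "drug": "entity",
}

def ancestors(term):
    """Walk child -> parent links up to the root."""
    out = set()
    while term in ONTOLOGY:
        term = ONTOLOGY[term]
        out.add(term)
    return out

def retrieve(query_terms, docs):
    """Keep only documents whose tagged entity is a query concept or sits
    below one in the ontology, then rank survivors by word overlap."""
    q = set(query_terms)
    grounded = [d for d in docs
                if d["entity"] in q or (ancestors(d["entity"]) & q)]
    return sorted(grounded,
                  key=lambda d: len(set(d["text"].split()) & q),
                  reverse=True)

docs = [
    {"entity": "aspirin",    "text": "aspirin reduces inflammation"},
    {"entity": "statin",     "text": "statin lowers cholesterol"},
    {"entity": "grapefruit", "text": "grapefruit interacts with drugs"},
]
hits = retrieve(["nsaid"], docs)  # only the NSAID subtree survives
```

The structural filter runs before any similarity scoring: a query for "nsaid" can never surface the statin document, however lexically similar it is.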
Second, from genetics: A new study reports that the shape of a CRISPR guide RNA has consequences for how aggressively Cas9 cuts. In other words, CRISPR includes a small internal throttle. This is not an add-on safety feature. It is a property of the system itself. And it reveals how much control is hiding in molecular architecture. Change the structure and you change the behaviour.
The equivalence in AI: Work on steering 'target atoms' and 'representation engineering' is built on the idea that models contain internal levers that shape their output. Adjust the right activation patterns and the system behaves differently - without ever changing the prompt or retraining the whole model. Even the critiques tell a familiar story: these levers are powerful but can be fragile and careless use can distort behaviour in unexpected ways.
In both fields: there is a growing understanding of how much control lives inside the system - and of why it is so difficult to wrest that control by bolting on external steering.
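The internal-lever idea can be made concrete with a minimal steering-vector sketch: build a direction from the difference of mean activations on contrasting examples, then shift a layer's hidden states along it at inference time. This is a generic illustration of activation steering, not any specific paper's method; the dimensions and data are invented.

```python
import numpy as np

def apply_steering(hidden, direction, alpha=1.0):
    """Shift hidden activations along a unit steering direction.
    hidden: (seq_len, d_model) activations at one layer."""
    unit = direction / np.linalg.norm(direction)
    return hidden + alpha * unit

# A toy "contrast" steering vector: mean activation on examples of the
# desired behaviour minus mean activation on the undesired behaviour.
rng = np.random.default_rng(0)
pos = rng.normal(0.5, 1.0, (32, 8))   # activations on desired examples
neg = rng.normal(-0.5, 1.0, (32, 8))  # activations on undesired examples
steer = pos.mean(0) - neg.mean(0)

h = rng.normal(0.0, 1.0, (5, 8))      # activations for a new input
h_steered = apply_steering(h, steer, alpha=2.0)
```

Note the lever's fragility in miniature: `alpha` has no natural scale, and an overlarge value pushes the activations far from anything the model saw in training - the same distortion risk the critiques describe.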
Third, from synthetic biology: Artificially created circuits inside cells often look elegant on paper - then fall apart in real life. Why? Often because cramming more in overloads the cell's internal economy. Ribosomes are scarce, processes compete and the environment is rarely gentle. Yet the paper shows that protein-based networks can remain functional, even when they have to fight for limited resources. It is a lesson in designing for adversity.
The equivalence in AI: New mixture-of-experts architectures are emerging to handle the reality that compute is limited and workloads are uneven. Various strategies are being developed to help large systems behave predictably when only parts of them can fire at once. And new reliability monitoring frameworks for agentic systems are being developed to watch for states where the model’s behaviour becomes unstable.
In both fields: there is a recognition that resources are often constrained and that solutions need to adapt to the reality of the variable, yet always complex, situation they face.
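The "only parts can fire at once" idea is the essence of top-k mixture-of-experts routing, which a toy numpy version makes plain: a gate scores every expert per token, only the k best run, and their outputs are blended with softmax weights. This is a pedagogical sketch of the generic routing pattern, not any production MoE architecture; all shapes and weights are invented.

```python
import numpy as np

def top_k_route(x, gate_w, expert_ws, k=2):
    """Toy top-k MoE layer: score all experts per token, run only the
    k best, and combine their outputs with softmax-normalised weights."""
    logits = x @ gate_w                        # (tokens, n_experts)
    top = np.argsort(logits, axis=1)[:, -k:]   # indices of k best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = logits[t, top[t]]
        w = np.exp(chosen - chosen.max())
        w /= w.sum()                           # softmax over chosen experts
        for weight, e in zip(w, top[t]):
            out[t] += weight * (x[t] @ expert_ws[e])
    return out

rng = np.random.default_rng(0)
tokens, d, n_experts = 4, 8, 6
x = rng.normal(size=(tokens, d))
gate_w = rng.normal(size=(d, n_experts))           # router weights
expert_ws = rng.normal(size=(n_experts, d, d))     # one matrix per expert
y = top_k_route(x, gate_w, expert_ws, k=2)
```

The resource constraint is visible in the loop: with k=2 of 6 experts, two-thirds of the parameters sit idle for any given token - which is exactly why uneven routing and unstable load become the reliability problems the monitoring frameworks target.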
Obviously, these developments do not make biology and AI equivalent disciplines. What they do reveal is a shared mood. Both are grappling with complex systems that push back. Both are beginning to treat real-world messiness as something to design for, not work around. Progress is no longer measured only in how powerful a system is, but in how reliably it can be guided through environments we do not fully understand.
The next decade may be defined less by breakthroughs that amaze and more by breakthroughs that hold steady when things get complicated.
Further reading: Biology
BioLACE (Spatial Biology)
👉 https://www.biorxiv.org/content/10.64898/2025.12.01.691603v1 - BioLACE: unifying spatial geometry and marker priors for cohesive cell-type clustering in spatial transcriptomics
CRISPR ‘Throttle’ (Cas9 Architecture)
👉 https://www.biorxiv.org/content/10.64898/2025.12.02.691602v1 - A guide RNA repeat checkpoint steers CRISPR-Cas9 catalysis
Resource-Tolerant Protein Circuits
👉 https://www.biorxiv.org/content/10.64898/2025.12.01.691641v1 - Sequestration-based Protein Neural Networks Tolerate the Effects of Shared Translational Resources
Further reading: AI
Grounding & Context-Structured Reasoning
Bai, Z., Lei, Y., Lin, J., Yang, H., & Wen, H. (2025). OG-RAG: Ontology-Grounded Retrieval-Augmented Generation for Large Language Models. EMNLP 2025. https://aclanthology.org/2025.emnlp-main.1674/
Zhang, F., et al. (2024). Improving Large Language Model Fidelity through Context-Aware Grounding. arXiv:2408.04023. https://arxiv.org/abs/2408.04023
Kulshreshtha, D., Rane, S., & Kumar, P. (2025). “Lost-in-the-Later”: A Framework for Quantifying Contextual Grounding in Large Language Models. arXiv:2507.05424. https://arxiv.org/abs/2507.05424
Internal Steering & Representation Control
Hu, Y., Zheng, X., Zhang, Q., & Chen, S. (2025). Beyond Prompt Engineering: Robust Behavior Control in LLMs via Steering Target Atoms. arXiv:2505.20322. https://arxiv.org/abs/2505.20322
Rojas, M., & Thakoor, S. (2025). Shifting Perspectives: Steering Vector Ensembles for Robust Bias Mitigation in LLMs. arXiv:2503.05371. https://arxiv.org/abs/2503.05371
Wei, J. (2024). A Sober Look at Steering Vectors for LLMs. AI Alignment Forum. https://www.alignmentforum.org/posts/QQP4nq7TXg89CJGBh/a-sober-look-at-steering-vectors-for-llms
Wehner, J., et al. (2025). Taxonomy, Opportunities, and Challenges of Representation Engineering for Large Language Models. https://janwehner.com/files/representation_engineering.pdf
Robustness Under Load: MoE Architectures & Reliability
DeepMind. (2024). From Sparse to Soft Mixture-of-Experts. https://deepmind.google/research/publications/from-sparse-to-soft-mixture-of-experts/
Wang, X., et al. (2024). Mixture of Diverse Size Experts (MoDSE). EMNLP Industry Track. https://arxiv.org/abs/2409.12210 https://aclanthology.org/2024.emnlp-industry.118/
Lykov, A., et al. (2025). Perspectives on a Reliability Monitoring Framework for Agentic AI Systems. arXiv:2511.09178. https://arxiv.org/abs/2511.09178
Zhu, L., & Lu, Q. (2025). Designing Meaningful Human Oversight in AI. SSRN 5501939. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5501939 Alternative access: https://www.researchgate.net/publication/395540553_Designing_Meaningful_Human_Oversight_in_AI
Additional Context
Contextual AI. (2024–25). Background on grounded enterprise AI systems. https://en.wikipedia.org/wiki/Contextual_AI
Awesome MoE Survey (Community). (2025). https://github.com/withinmiaov/A-Survey-on-Mixture-of-Experts-in-LLMs
