To find out more, listen to the podcast episode "How a dangerous tick-borne virus sneaks into the brain."
Below is a short summary and detailed review of this podcast written by FutureFactual:
LRP8 receptor linked to tick-borne encephalitis brain invasion; Nature explores AI grant-cancellation modeling and AI psychosis risk
Nature's podcast investigates a tick-borne encephalitis (TBE) study that identifies the LRP8 receptor as a key entry point for the virus into brain cells, with experiments in human cells, stem-cell–derived neurons, and mice showing that blocking this receptor reduces infection and can protect animals. The episode also reports on a Nature Careers piece that uses machine learning to simulate NIH grant cancellations, highlighting the potential loss of foundational research such as the Human Microbiome Project and a major lung-cancer cohort. It closes with a briefing on AI psychosis risk in chatbot interactions and ongoing safety responses from tech companies.
Tick-borne encephalitis and the LRP8 receptor
The episode covers a Nature study that identifies the LRP8 receptor as a brain-expressed entry point exploited by the tick-borne encephalitis virus. Researchers used CRISPR-Cas9 to knock out LRP8 in human cells and observed markedly reduced viral infection, while overexpressing the receptor increased infection rates. They then showed that fragments of the receptor could bind the virus and block entry, both in vitro and in vivo, including a mouse model in which pre-treatment with the receptor fragment protected animals from disease. LRP8 appears particularly enriched in brain cells, suggesting a mechanism tailored to central nervous system invasion, and the effect appears specific to TBE rather than other flaviviruses. The research team emphasizes the potential to target this receptor with drugs that hinder viral binding or downstream signaling, especially given uneven vaccine coverage in many regions.
"LRP8 is a cell surface protein that the virus hijacks to enter cells." - Sara Gredmark Russ, Karolinska Institute
AI and grant-cancellation modeling: what might have been
The podcast then discusses a Nature Careers project that trained a machine learning model on the characteristics of canceled grants and tested its accuracy against historical cancellations. The team found the model to be around 70% accurate at identifying grants that would have been canceled, based on factors such as remaining funding, keywords, timing, and other metadata. The researchers then applied the model to grants active ten years ago to estimate the potential impact on science, ranking outcomes by predicted citations and highlighting examples such as the Human Microbiome Project and a large lung-cancer research cohort. The exercise illustrates how hard it is to predict exact consequences when funding decisions are political and opaque, and notes that researchers whose grants could have been canceled were sometimes surprised or dismayed by the scenario even as a hypothetical.
"It was around 70% accurate." - Jack Leeming, Nature Careers Chief Editor
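The general approach described above, training a classifier on grant metadata and checking it against known outcomes, can be sketched in a few lines of Python. Everything here is illustrative: the feature names (funds remaining, flagged-keyword count, years since award), the synthetic data, and the plain logistic-regression model are assumptions for the sketch, not details from the actual Nature Careers study.

```python
import math
import random

random.seed(0)

def make_grant(cancelled):
    # Synthetic grant record: [funds remaining (fraction), flagged-keyword
    # count, years since award]. In this toy data, cancelled grants skew
    # toward more funds left and more flagged keywords.
    if cancelled:
        return [random.uniform(0.4, 1.0), random.randint(1, 5), random.uniform(0, 3)], 1
    return [random.uniform(0.0, 0.6), random.randint(0, 2), random.uniform(0, 5)], 0

data = [make_grant(i % 2 == 0) for i in range(400)]
train, test = data[:300], data[300:]

def predict(w, b, x):
    # Logistic model: probability that a grant is cancelled.
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow in exp()
    return 1.0 / (1.0 + math.exp(-z))

# Fit by plain stochastic gradient descent on the log-loss.
w, b = [0.0, 0.0, 0.0], 0.0
lr = 0.1
for _ in range(200):
    for x, y in train:
        g = predict(w, b, x) - y  # gradient of log-loss w.r.t. the logit
        b -= lr * g
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]

accuracy = sum((predict(w, b, x) > 0.5) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

On this synthetic data the held-out accuracy lands well above chance, which is the same kind of sanity check the podcast describes: validate against historical cancellations first, then apply the model to older grants to ask "what might have been lost."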
AI psychosis risk and safety responses
The final segment examines "AI psychosis" as it affects human users, not the AI itself. The evidence so far is limited, with some preprints suggesting that individuals predisposed to psychosis might experience worsened symptoms after interacting with chatbots, potentially via feedback loops in which the AI reinforces paranoid beliefs. The podcast notes that major AI developers are taking steps to mitigate harm: OpenAI has revised a model to reduce ungrounded responses and added clinical expertise and safety features, while other companies have added safeguards for minors and self-harm resources. The discussion emphasizes the need for continued research into how AI tools affect mental health, and for safeguards that keep deployment responsible as the science advances.