Science Magazine Podcast
Science Magazine·19/02/2026

Matching sounds to shapes, and stories from the AAAS annual meeting

This is an episode available on podcasts.apple.com. For more details, see the episode page for "Matching sounds to shapes, and stories from the AAAS annual meeting."

Below is a short summary and detailed review of this podcast written by FutureFactual:

AI in Science at AAAS 2026: Debunking Conspiracy Theories with LLMs and the Bouba–Kiki Effect in Chickens

Science Magazine's AAAS 2026 audio roundup delves into two AI-centered themes shaping science—how AI can aid research and what the future of science in the United States may look like—along with an in-depth look at the bouba–kiki cross-modal effect observed in freshly hatched chicks. The episode features Christie Wilcox's interview with Cornell's David Rand about using large language models to debunk conspiracy theories, including the durability of the effect and how prompt design influences outcomes. It also highlights Maria Loconsole's work on cross-modal correspondences in animals and discusses Debunkbot, a public-facing tool, along with broader implications for science communication and education.

AAAS 2026: Science at Scale, AI in Research, and the Future of US Science

The podcast opens with Science Magazine's on‑the‑ground coverage from the AAAS annual meeting in Phoenix, which centered on two themes: how AI can aid science and how AI might influence the future of the scientific enterprise in the United States. Christie Wilcox interviews David Rand of Cornell University about a Newcomb Cleveland Prize–winning study that uses large language models to debunk conspiracy theories. Rand explains the design: participants describe the specific conspiracy belief they hold, an AI (or, in some conditions, a presented expert) summarizes it, and the model then engages in a back‑and‑forth aimed at persuading the person away from the belief. The results show a substantial and durable decrease in conspiracy beliefs, with follow‑ups at 10 days and two months showing the effect persisting. A key takeaway is that the model's content, not the participant's perception of talking to an AI, drives the change, challenging the assumption that AI's influence hinges on its non-human status. A related conversation emphasizes prompt disclosure and the broader ethical and practical implications of frontier models, including efforts to make debunking tools more widely available, such as Debunkbot, and to integrate them into classrooms and social platforms. Rand also discusses experiments comparing AI to human debaters, noting that off‑the‑shelf models can be effective persuaders when not constrained to honesty, and that prompting constraints change outcomes.

"The durability is part of what's wild about this." - David Rand, professor of information science, marketing and psychology at Cornell University

Debunking Conspiracy Theories with Large Language Models

The interview with Rand outlines how AI can be a powerful debunking tool. The process starts with eliciting the exact belief from the individual, then using an AI to present precise, well-sourced facts and evidence tailored to that belief. Rand highlights that the AI's breadth of knowledge—ready access to relevant facts on almost any conspiracy—often outstrips a typical human debunker, which helps explain the robust effect. The team ran multiple variants, including a study in which participants were told they were conversing with either an AI or an expert while the content was held constant. The perceived identity of the agent did not significantly alter effectiveness, suggesting that the model's ability to marshal relevant information is the key driver. The discussion also touches on the broader landscape of AI persuasion in politics and media, noting that a model's instructions shape its behavior, so understanding and controlling those prompts is critical for responsible use.
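The intervention described above is essentially a three-step dialogue protocol: elicit the specific belief, summarize it back, then hold a short tailored exchange. The sketch below illustrates that loop in Python; it is not the authors' code, and the `generate_rebuttal` stub stands in for the frontier-model API call the real study would make.

```python
# Illustrative sketch of the debunking-dialogue protocol described in the
# episode. The LLM call is stubbed out; the actual study used a frontier
# model to produce evidence-based rebuttals.

def summarize_belief(statement: str) -> str:
    """Restate the participant's specific conspiracy belief in their own terms."""
    return f"You believe that: {statement.strip()}"

def generate_rebuttal(summary: str, turn: int) -> str:
    """Stub for the model call. A real implementation would send the summary
    plus conversation history to an LLM whose system prompt instructs it to
    respond with accurate, tailored counter-evidence."""
    return f"[turn {turn}] Evidence-based response addressing: {summary}"

def debunking_dialogue(belief: str, n_turns: int = 3) -> list[str]:
    """Run the back-and-forth: one belief summary, then n tailored replies."""
    summary = summarize_belief(belief)
    transcript = [summary]
    for turn in range(1, n_turns + 1):
        transcript.append(generate_rebuttal(summary, turn))
    return transcript

transcript = debunking_dialogue("the moon landing was staged")
for line in transcript:
    print(line)
```

The point of the structure is the one the episode emphasizes: the persuasive work happens in the tailored content of each turn, so whatever replaces the stub (and its system prompt) is where honesty constraints and prompt disclosure matter.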

"the model will do more or less whatever you tell them to do, whatever their designers tell them to do." - David Rand

The Bouba–Kiki Effect in Chickens: Cross-Modal Perception Beyond Language

In the segment with Maria Loconsole, the bouba–kiki effect is explained as a cross‑modal association in which round shapes are linked to the non‑word "bouba" and spiky shapes to the non‑word "kiki." Humans show this effect across languages, but non‑human primates such as chimpanzees do not, prompting investigation into whether this perceptual bias is deeply rooted in vertebrate brains or specific to certain cognitive architectures. Loconsole and colleagues studied newly hatched chicks to probe the origin and timing of these associations; chicks are precocial, which allows testing very soon after hatching. In the experiments, hatchlings learned to approach cues associated with food, first in single‑modality tasks (shape cues alone) and then with multisensory input (shapes plus sounds). Three‑day‑old chicks were guided by a round shape paired with the "bouba" sound and a spiky shape paired with the "kiki" sound. In a subsequent test in which both panels bore shapes and sounds were introduced, chicks' choices aligned with the congruent sound cue, indicating a cross‑modal bias present at or soon after hatching. Even when tested immediately after hatching, without food training, the chicks still showed a preference consistent with the bouba/kiki pairing, suggesting a general perceptual principle that may have been exploited in the evolution of human language.

"our language exploits a more broad or general perceptual principle that it can be common to the vertebrate brain." - Maria Loconsole, postdoctoral researcher in the Comparative Cognition lab, University of Padova

Implications, Outreach, and the Next Steps

The conversations then shift to the implications of AI for scientific practice, prompt disclosure, and education. Researchers discuss Debunkbot's growing usage and its potential applications in classrooms and on social media, emphasizing the need to balance leveraging AI's strengths with safeguarding against manipulation. Perry Thaler and Michael Greshko share impressions from the conference, including attention to researchers from around the world and a sense of optimism about the young scientists presenting posters at the meeting. The episode closes with reflections on the state of science funding, international collaboration, and the bright prospects of high‑school researchers, underscoring a future in which AI augments human inquiry rather than replacing it.

Questions around content-aggregation platforms and trust in science will continue to evolve as AI tools become more integrated into research and public understanding.
