Episode 04 · Bailley Georgieva

AI Is the Ultimate People Pleaser: Bailley Georgieva on Hypersonics, Critical Thinking, and What Stays Human at Mach 10

Bailley Georgieva is twenty-one. She's a Rutgers junior, a hypersonic research affiliate at MIT working with NASA LAURA, a former Defense Innovation Unit fellow, and somehow still finds time to rock climb. Her path started with a fifth-grade YouTube conspiracy theory about a meteor hitting Earth and a grandfather who flew planes for the Bulgarian Parliament. There's an F-18 tattooed on her arm. There is, as she says, no backing out.

In this Still Human conversation, Bailley draws a line that almost no one in tech is drawing right now. She does not trust AI with her work. Not because she's scared of it. Because she's used it. She calls AI "the ultimate people pleaser" — a tool that will skew numbers, hallucinate justifications, and tell you what you want to hear because that's what it's designed to do. In hypersonic research, where simulating Mach 10 wrong could mean a vehicle disintegrates in a wind tunnel, she'd rather fail a hundred times by hand than dig through AI output to find the lie.

Show Notes

Bailley Georgieva is a 21-year-old junior in Aerospace Engineering at Rutgers University and a Hypersonic Research Affiliate at MIT, where she works with NASA LAURA to simulate aerodynamic behavior at speeds above Mach 10. She's a former Defense Innovation Unit fellow who sits across the table from defense startup founders and tells them when their math doesn't show up, and she has trained her own ChatGPT to respond only in code and TXT files — a deliberate choice to remove the human element from a tool she calls "the ultimate people pleaser." Her origin story runs through a fifth-grade YouTube conspiracy about a meteor hitting Earth, a grandfather who flew for the Bulgarian Parliament, and an F-18 tattooed on her arm. For the Still Human audience, Bailley is the guest who reframes critical thinking as the load-bearing skill of the AI era — and shows what it costs to actually do it.

Articles & Research

No external research was cited in this episode.

Tools & Resources

Relevant to this episode:

  • NASA LAURA — The hypersonic CFD code Bailley works with at MIT to simulate Mach 10+ aerodynamic behavior
  • Defense Innovation Unit (DIU) — The DoD's commercial-tech adoption arm; Bailley evaluated startup solutions for real defense problems as a fellow
  • MIT AeroAstro Department — The research environment behind Bailley's hypersonic affiliation
  • CFD (computational fluid dynamics) — The simulation discipline at the core of her research; the field where AI hallucination is most expensive
  • Trained-narrow ChatGPT — Bailley's own setup: an instance trained to respond only in code and TXT files, stripping the conversational/affirming layer she doesn't trust

People Mentioned

  • Toby Corey — Still Human episode 7 guest; Bailley responds to his poem prompt and the line "don't keep your soft heart locked inside a glass cage" stays with her
  • Bailley's grandfather — Flew planes for the Bulgarian Parliament; the voice she still hears telling her to keep going
  • WALL-E — The Pixar film referenced as a parallel for what modern users risk becoming

Timestamps

Timestamps are approximate — click to jump directly on YouTube.

  • [00:00:00] — Bailley Georgieva intro: 21, Rutgers junior, MIT hypersonics, ex-DIU fellow
  • [00:04:00] — The fifth-grade YouTube meteor conspiracy that started it all
  • [00:08:00] — Her grandfather, the Bulgarian Parliament, and the F-18 tattoo
  • [00:13:00] — What hypersonic research actually involves: NASA LAURA, Mach 10, CFD
  • [00:18:30] — "AI is the ultimate people pleaser": skewed numbers, hallucinated justifications
  • [00:23:00] — Why she'd rather fail a hundred times by hand than dig through AI output
  • [00:27:30] — The personal anxiety moment that made her remove the human element from ChatGPT
  • [00:31:00] — Training her own AI to respond only in code and TXT files
  • [00:34:30] — Lawrenceville School: rejected, waitlisted, rejected off the waitlist — and her grandfather's voice
  • [00:39:00] — At 21, telling defense startup founders their math doesn't show up
  • [00:43:00] — The WALL-E parallel: what modern users risk becoming
  • [00:46:00] — Toby Corey's poem prompt and the "soft heart locked inside a glass cage" line
  • [00:50:00] — When AI output becomes Bible: "the human race has collapsed"
  • [00:53:00] — Where to find Bailley and closing

Key Takeaways

  • AI is the ultimate people pleaser. It will skew numbers, hallucinate justifications, and tell you what you want to hear because that's what its training rewards. In high-stakes work, that's not a feature — it's the failure mode.
  • The cost of complacency is the whole point. When AI output becomes Bible, the human race has collapsed. Bailley treats critical thinking as load-bearing, not optional.
  • Strip the affirming layer. Bailley trained her own ChatGPT to respond only in code and TXT files — a deliberate choice to remove the conversational, agreeable element she doesn't trust.
  • Failing by hand beats digging for the AI lie. In hypersonics, a wrong simulation means a vehicle disintegrates in a wind tunnel. The fail-by-hand path is faster than auditing AI output for the place it confidently lied.
  • Rejection is data, not a verdict. Rejected from Lawrenceville School, then waitlisted, then rejected off the waitlist. Her grandfather's voice still tells her to keep going. The rejection didn't define the trajectory — the response did.
  • At 21, you can be the room's fact-checker. Bailley sits across from defense startup founders and tells them when the math doesn't show up. The age is the angle, not the obstacle.
  • Watch out for the WALL-E future. Bailley sees the parallel in modern users — comfortable, served, and slowly losing the muscle. The fix is friction you choose on purpose.

In This Episode

  • "AI is the ultimate people pleaser" — Bailley's working frame for why she doesn't trust AI with research-grade work, and what AI is actually optimized for
  • Hypersonic research, in plain terms — Mach 10+ aerodynamic simulation with NASA LAURA at MIT, and why CFD is where AI hallucination is most expensive
  • The trained-narrow ChatGPT — How and why Bailley restricted her own AI to code and TXT files, and the personal anxiety moment behind the decision
  • The Defense Innovation Unit experience — What it's like at 21 to tell defense startup founders their math doesn't show up
  • Origin story: meteor conspiracies, Bulgarian Parliament, F-18s — The unconventional path that ended in hypersonics
  • Lawrenceville rejection, waitlist, rejection — How her grandfather's voice carried her past three "no"s in a row
  • Toby Corey's poem prompt — Bailley's honest response, and why the "soft heart locked inside a glass cage" line stayed with her
  • The WALL-E parallel — Why she sees that movie's premise in current AI usage patterns
  • "The day humanity treats AI output as Bible..." — Her articulation of the actual failure mode, and what stays human at Mach 10

About Bailley Georgieva

Bailley Georgieva is a 21-year-old junior in Aerospace Engineering at Rutgers and a Hypersonic Research Affiliate at MIT, where she works with NASA LAURA to simulate flight at Mach 10 and above. As a Defense Innovation Unit fellow, she evaluated commercial startup solutions for real defense problems while still an undergraduate. Her path started with a fifth-grade YouTube conspiracy about a meteor hitting Earth and a grandfather who flew for the Bulgarian Parliament; she has an F-18 tattooed on her arm. Her position on AI in research is firm and specific: she calls it "the ultimate people pleaser" and trains her own ChatGPT to respond only in code and TXT files, stripping the conversational layer she doesn't trust. For the Still Human audience, Bailley is the guest who turns critical thinking back into a load-bearing skill — the kind that decides whether a vehicle disintegrates in a wind tunnel, and the kind that decides whether the human race quietly collapses into agreement with a confident chatbot.


Follow Still Human Podcast

Still Human Podcast is a biweekly show by Oshen Studio, hosted by Perkin — exploring what it means to stay human in the age of AI. Real conversations with builders, creators, founders, and thinkers doing it in real life.

New episodes drop every two weeks. Subscribe so you never miss a conversation.

Stay Human

New episodes every two weeks. Subscribe on Substack for show notes delivered straight to your inbox.