
Episode 08 · William Norden

William Norden: Building Neural Networks from Scratch in C and What Most AI Engineers Skip

Building Neural Networks from Scratch in C: What Most AI Engineers Skip

Building neural networks from scratch in C is not an efficient way to ship product. It is, however, the only way William Norden knows he actually understands what the model is doing.

Norden is a Research Engineer at Santa Clara University — working across neuromorphic computing (Spiking Neural Networks, mem-devices), cognitive neuroscience (Chen Lab), and hardware-level edge AI (Renesas Electronics, where he built a deep reinforcement learning library in C for embedded microcontrollers). He's advancing Google's AlphaChip for VLSI floorplanning through DeepRune, and simulating physics-based environments for robotics through MultiSim. His most well-known solo project: MNIST.c, a fully hand-coded neural network in C, no libraries, 98%+ accuracy.

In this episode of Still Human, Norden and host Perkin explore a question that sits at the center of 2026's AI moment: when agents can write the code, run the experiments, and parallelize the workflow — what is the value of going deeper? Norden's answer draws on neuroscience, philosophy, Russian piano pedagogy, and competitive tennis. It's not a typical engineering conversation. That's exactly the point.


Key Themes Covered in This Episode

  • Why neural network frameworks produce engineers who don't understand what's happening inside the model
  • The intelligence augmentation argument: AI as cognitive prosthetic, not cognitive replacement
  • The career question every engineer faces: orchestrate agents or outlearn them?
  • The "high-dimensional tapestry" — why Norden believes human cognition alone cannot navigate the scale of today's global problems
  • What studying biological neural networks (Chen Lab) revealed that most AI builders have never considered
  • The iterative feedback loops connecting Rachmaninoff, USTA tennis, and backpropagation

The "From Scratch" Philosophy

Why William refuses to trust code he hasn't written. The argument isn't nostalgia — it's that abstraction without understanding produces engineers who can't debug their own systems when the framework breaks. Building neural networks in C forces you to sit with backpropagation, memory management, and numerical stability — the things the framework normally hides. The trade-off is speed for depth, and Norden makes the case that depth compounds.

Neuromorphic Computing, Spiking Neural Networks, and Edge AI

What it actually means to build brain-inspired computing systems. Neuromorphic chips don't run neural networks the way GPUs do — they're event-driven, sparse, and extremely power-efficient, often built around spiking neural networks and mem-devices. That matters for edge AI: inference at the device, not the data center. Norden's research is about what gets unlocked when intelligence runs on the edge instead of behind an API.

Intelligence Augmentation, Not Replacement

The philosophical fork in the AI conversation. The replacement frame ("AI will do this job for you") is loud. The augmentation frame — AI as a cognitive prosthetic — is the one Norden bets on. It shapes how he designs the systems he builds and how he answers the career question every engineer is now asking: orchestrate agents or outlearn them?

The High-Dimensional Tapestry

Why Norden believes the scale of today's global problems exceeds what unaugmented human cognition can navigate — and what that means for the role of AI. This is where the argument for going deeper meets the argument for building tools that extend us.

Biological Neural Networks (Chen Lab)

What studying actual brains revealed about the assumptions baked into most artificial neural networks. Insights from cognitive neuroscience that most AI builders have never considered, and what they imply for the next generation of model design.

Rachmaninoff, USTA Tennis, and Backpropagation

The iterative feedback loops connecting Russian piano pedagogy, competitive tennis, and the inner mechanics of training a neural network. Why the disciplines you train at outside engineering shape the engineer you become.


Show Notes

William Norden is a Research Engineer at Santa Clara University working across neuromorphic computing, cognitive neuroscience, and hardware-level edge AI. His work spans Spiking Neural Networks and mem-devices, biological neural network research with the Chen Lab, an embedded deep reinforcement learning library in C built at Renesas Electronics, DeepRune (advancing Google's AlphaChip for VLSI floorplanning), and MultiSim (physics-based environment simulation for robotics). His most well-known solo project is MNIST.c — a fully hand-coded neural network in C, no libraries, 98%+ accuracy. In this conversation he makes the case that the next generation of AI engineers will be defined less by their ability to orchestrate agents and more by their ability to understand the systems underneath.

Articles & Research

No external research was cited in this episode.

Tools & Resources

  • MNIST.c — Norden's fully hand-coded neural network in C, no libraries, 98%+ accuracy on MNIST
  • DeepRune — Norden's work advancing Google's AlphaChip for VLSI floorplanning
  • MultiSim — Physics-based environment simulation for robotics research
  • Renesas Electronics — Where Norden built a deep reinforcement learning library in C for embedded microcontrollers
  • Chen Lab (Santa Clara University) — Cognitive neuroscience research on biological neural networks
  • Spiking Neural Networks & mem-devices — Brain-inspired computing primitives behind neuromorphic hardware



About William Norden

William Norden is a Research Engineer at Santa Clara University working across neuromorphic computing (Spiking Neural Networks, mem-devices), cognitive neuroscience (Chen Lab), and hardware-level edge AI (Renesas Electronics). He's advanced Google's AlphaChip for VLSI floorplanning through DeepRune and simulated physics-based environments for robotics through MultiSim. His best-known solo project — MNIST.c — is a fully hand-coded neural network in C with no libraries and 98%+ accuracy. A classical pianist trained in the Russian tradition and a USTA-level tennis player, Norden brings the same iterative, deliberate feedback loops to engineering. His thesis is simple and unfashionable: in an era of orchestration, the engineers who win are the ones who actually understand the systems they're building on.


Listen to the Full Episode

Listen to the full conversation on Spotify, Apple Podcasts, or YouTube. Or read the full transcript on Substack.




Follow Still Human Podcast

Still Human is a biweekly podcast by Oshen Studio, hosted by Perkin — exploring what it means to stay human in the age of AI. Real conversations with builders, creators, founders, and thinkers doing it in real life. New episodes every two weeks on YouTube, Spotify, Substack, and LinkedIn.

New episodes drop every two weeks. Subscribe so you never miss a conversation.


Subscribe on Substack for show notes delivered straight to your inbox.
