10 Quick Qs: Ep 5 – Jane Odum

Every week, we throw 10 quick questions at someone whose mind we find fascinating — the thinkers, founders, innovators, policymakers, builders, and culture-shapers quietly changing how we see the world and inspiring us to do things not just differently, but better. First thoughts only.

This week’s fascinating personality is Jane Odum. Jane is a Nigerian-born computer scientist and PhD candidate at the University of Georgia who builds machine learning systems that forecast the future from complex data — from predicting disease outbreaks to detecting financial fraud at scale. Her goal is at once simple and ambitious: models powerful enough to matter and practical enough to reach the people who need them most.

What does it take to build AI that actually works for people?

Read on as Jane shares her insights on machine learning, health tech, and the power of building with purpose:

What surprised me most is how hands-on the classes are here. In the US you take maybe three classes a semester, and they’re harder than the eleven I’d take at once at Unilorin, because here you’re actually building things, not just absorbing material. What I miss is the cohort. At Unilorin, your year moved through the program together. You knew everyone, you studied together, you struggled together, and those bonds lasted. The American system gives you more independence, but it can feel lonelier.

There were two moments. The first was when I fed MedGemma a symptom description written the way a community health worker would actually type it, a mix of English and Pidgin, and it still returned something useful. That’s when I realized the model could meet people where they are, instead of forcing them to clean up their language for the machine. The second was getting it running on both iPhone and Android without the experience falling apart. Once I saw it working on the kinds of phones people actually carry in rural clinics, EpiCast stopped feeling like a research demo and started feeling like something that could sit in someone’s pocket and do real work.

Machine learning is how we teach computers to learn from examples instead of following rules we write by hand. Instead of telling a computer every rule for what spam looks like, we show it thousands of emails labeled spam or not, and it figures out the pattern itself. You use it all day without noticing. Your email filters spam, your phone unlocks with your face, Google Maps predicts traffic, Netflix suggests what to watch, and your bank flags a strange charge on your card. It’s quiet, but it’s everywhere.
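The "show it labeled examples and let it find the pattern" idea Jane describes can be sketched in a few lines. This is a deliberately tiny toy, not a real spam filter, and the messages and scoring rule are invented for illustration:

```python
from collections import Counter

# Toy illustration of learning from examples: count how often each word
# appears in spam vs. legitimate messages, then score new messages by
# which set of examples their words resemble. All data here is invented.
spam_examples = ["win free prize now", "free money click now"]
ham_examples = ["meeting moved to noon", "see you at lunch"]

spam_words = Counter(w for msg in spam_examples for w in msg.split())
ham_words = Counter(w for msg in ham_examples for w in msg.split())

def spam_score(message):
    # Each known word votes: +1 if it was seen more often in spam
    # examples, -1 if seen more often in legitimate ones.
    return sum(
        1 if spam_words[w] > ham_words[w] else -1
        for w in message.split()
        if spam_words[w] or ham_words[w]
    )

print(spam_score("free prize now"))          # positive → resembles spam
print(spam_score("see you at the meeting"))  # negative → resembles legitimate mail
```

Nobody wrote a rule saying "free" means spam; the score falls out of the labeled examples, which is the core shift Jane is pointing at.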

Yes, time series forecasting is basically predicting the future, but a very specific slice of it. A time series is just data with timestamps, like daily flu cases, hourly electricity use, or a stock price every minute. Forecasting means looking at the pattern so far and making an educated guess about what comes next. Weather forecasts are the everyday example. In my research, I do the same thing for disease outbreaks, trying to predict how many flu or COVID cases a region will see in the coming weeks so hospitals and public health teams can prepare.
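The "look at the pattern so far and guess what comes next" step can be made concrete with the simplest possible forecaster, a moving average. Real outbreak models are far more sophisticated; the case counts below are invented:

```python
# Minimal time series forecast: predict the next few days of case counts
# as the average of the last few observed days, feeding each prediction
# back in so we can look several steps ahead. Numbers are invented.
cases = [120, 135, 150, 160, 180, 195, 210]  # daily flu cases so far

def forecast(series, horizon=3, window=3):
    series = list(series)
    predictions = []
    for _ in range(horizon):
        next_value = sum(series[-window:]) / window  # average of recent days
        predictions.append(round(next_value, 1))
        series.append(next_value)  # use the prediction as input for the next step
    return predictions

print(forecast(cases))  # → [195.0, 200.0, 201.7]
```

Even this toy shows the essential shape of the problem: the only input is the past, and each step ahead compounds the uncertainty of the steps before it.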

Yes, fraudsters absolutely use machine learning. They use it to generate fake identities, mimic normal spending patterns, and probe systems for weaknesses. It’s an arms race, and it will keep being one. But there’s real hope, for a simple reason: defenders see more data. A company like Stripe sees billions of legitimate transactions across millions of businesses, and that scale lets its models learn what “normal” looks like in a way no single fraudster can match. The goal isn’t to end fraud, it’s to make it expensive and slow enough that most attacks aren’t worth the effort. On that front, we’re actually winning more than people realize.

I would say it like this. Imagine someone who has read every cookbook in the world. They’ve never cooked a single meal, but they’ve seen so many recipes that if you ask them to invent a new stew, they can give you one that actually tastes right. A generative model is like that. It has seen so many examples of writing, or pictures, or music, that it can make new ones that feel familiar even though nobody has ever made that exact thing before. It’s not magic, it’s just a very good student that learned from a very large library.

The inspiration came from two places. There’s a woman at my church, Patricia, who used to tell me the story of how she and her late husband met and fell in love. Every Sunday I’d hear another piece of it, and I realized how precious those stories are, especially once the person who lived them is gone. The other place was my own life. My boyfriend and I get asked how we met so often that we’ve told the story a thousand times, and I wanted a way to share it that felt special, something our families and one day our kids could actually see. The name comes from him. He calls me Omnia, which means “my everything,” and since I was building something around love, it felt right.

The biggest one is thinking AI “understands” things the way people do. Even very smart people talk about these models as if they know what they’re saying. They don’t. A language model is extremely good at predicting what word should come next based on patterns in everything it has read, but there’s no little person inside thinking about the meaning. That matters, because once you understand what the model is actually doing, you stop being afraid of the wrong things and you start paying attention to the real ones, like what data it learned from, who it works well for, and who it leaves out.
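Jane's point that a language model is "predicting what word should come next based on patterns" can be seen in miniature with word-pair counts. A real model uses vastly more context and data, but the mechanism below is pattern statistics all the way down, with no understanding anywhere; the corpus is invented:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows each word in a tiny
# corpus, then always pick the most frequent follower. This is the
# pattern-matching idea behind language models, stripped to the bone.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    # Return the word most often seen after `word` in the corpus.
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat", because "cat" follows "the" most often
```

There is no meaning being weighed here, only counts, which is exactly why the questions that matter are the ones Jane names: what data the patterns came from, and who they serve well or badly.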

The thread is curiosity and the people I’m building for. Outside of tech, I paint, I dance, and I design garments, and all of those shape the way I think about problems. They’ve taught me to pay attention to people, to notice what’s missing, and to care about how something feels, not just whether it works. So whether I’m building a fraud system at Stripe, a disease surveillance tool for community health workers in West Africa, or an app that helps couples tell their love story, the underlying instinct is the same. I see an everyday challenge, and I want to solve it with AI in a way that actually fits into someone’s life.

I would train a model to predict how much of my time actually mattered, and how much I spent on things that didn’t. A regret minimizer. As a researcher you’re constantly making bets with your time, and you don’t find out for years whether the bet was worth it. A model like that could save you from pouring yourself into the wrong thing. But would I actually want to know? I’m not sure. Part of what makes the work meaningful is the not-knowing, the faith that what you’re doing matters even when you can’t prove it yet. I think I’d want the model to exist, and then I’d be too scared to look.

Connect with Jane Odum on X: @mssjaney

Samiah Ogunlowo

Samiah Olabimpe Ogunlowo is a passionate writer and storyteller who believes in the power of words to inform, inspire, and connect. Writing has always been her way of expressing herself, and she brings this authenticity to every story she tells.