The future of Digital Biomarkers, Responsible AI and Wearables w/Dr. Brinnae Bent

Today’s guest: Dr. Brinnae Bent

Professor in AI at Duke University, Responsible AI Research Scientist, Founder, Tensor & Trust

Summary

In this episode of the Masters Automation Podcast, Dr. Brinnae Bent shares her journey from a childhood filled with diverse experiences to becoming a leader at the intersection of healthcare and artificial intelligence. She discusses her work on digital biomarkers, the evolution of wearable technology, and the importance of responsible AI in healthcare, and reflects on her experiences as an ultra-marathoner, the impact of stress on performance, and the challenges of predictive healthcare models. The conversation then turns to the complexities of AI in healthcare and neuroscience: Dr. Bent emphasizes the importance of explainability in AI models, especially large language models (LLMs), and how they can be made more interpretable. The discussion also covers the role of education in shaping future technologies, with a focus on student engagement and the integration of AI in teaching. She shares insights on how students are approaching problem-solving in AI and the significance of open-ended projects. The conversation concludes with rapid-fire questions that explore personal insights and future aspirations in the field of AI.

Takeaways

  • Dr. Bent's journey into healthcare and AI was influenced by her early experiences as a certified nurse assistant.

  • The evolution of wearable technology has democratized health monitoring.

  • Digital biomarkers can transform vast amounts of data into actionable health insights.

  • Open source projects in technology foster collaboration and innovation.

  • Understanding the brain's functioning is crucial for developing effective healthcare solutions.

  • Wearable devices have the potential to predict health conditions before traditional methods can detect them.

  • Personal health data can encourage better lifestyle choices and interventions.

  • Stress impacts the body similarly, regardless of its source.

  • Acute stress can enhance performance, while chronic stress can lead to burnout.

  • Interpretable machine learning models are essential for responsible AI in healthcare.

  • Explainability in AI is crucial for trust, especially in healthcare.

  • Neuroscience and AI can inspire each other in understanding complex systems.

  • Students are increasingly interested in responsible AI and its implications.

  • Open-ended projects encourage creativity and innovation in students.

  • AI can be leveraged to personalize education and enhance learning experiences.

  • Understanding the human brain can inform the design of interpretable AI models.

  • The rapid evolution of AI requires continuous adaptation in education.

  • Students are eager to engage in deep discussions about AI ethics and safety.

  • Learning to code is essential for non-technical individuals to engage with AI.

  • Future generations will shape the role of AI in society.

On the potential of Wearables and Digital Biomarkers:

I found out I was pregnant through my Oura ring before a pregnancy test picked it up. These devices are not just about convenience—they’re about unlocking life-changing insights from your everyday data.
— Dr. Brinnae Bent, Prof. of AI at Duke University
Imagine wearables that not only track your steps but predict early signs of illness, optimize your training as an athlete, and even help prevent diseases—all without you realizing they’re there. That’s the future we’re building.
— Dr. Brinnae Bent, Prof. of AI at Duke University

On Explainable AI in Healthcare:

Building AI models isn’t enough—what matters is creating systems that doctors and patients can trust. Explainable AI bridges the gap between black-box algorithms and real-world impact.
— Dr. Brinnae Bent, Prof. of AI at Duke University

Chapters

00:00 Introduction to Dr. Brinnae Bent

01:32 Journey into Healthcare and AI

05:23 The Evolution of Wearable Technology

08:09 Digital Biomarkers and Open Source Innovation

10:28 Exploring Neuroscience and Engineering

11:46 The Future of Wearables in Health Monitoring

15:31 Ultra-Marathon Running and Data Insights

17:09 Stress and Performance in Sports

22:09 Challenges in Predictive Healthcare Models

25:28 Responsible AI in Healthcare

27:59 Understanding Black Box Models in AI

30:19 The Intersection of Neuroscience and AI

34:47 Student Perspectives on AI and Future Technologies

37:18 Leveraging AI in Education

40:36 Open-Ended Problem Solving in AI

43:08 Rapid Fire Insights and Future Aspirations

Transcript

Alp Uguray (00:01.618)

Hi everyone, welcome to Masters Automation Podcast episode. Today I have the pleasure of hosting Dr. Brinnae. Welcome.

Brinnae Bent (00:10.382)

Thank you so much, I'm super excited to be here.

Alp Uguray (00:13.202)

I'm super excited to have you here. So Brinnae is a faculty member in artificial intelligence at Duke, a responsible AI research scientist, a startup advisor, and a leader in bridging the gap between research and industry in machine learning, having led projects and developed algorithms for some of the largest companies out there.

More importantly, she has built algorithms that have meaningful impacts, from helping people walk to non-invasively monitoring glucose. So, Brinnae is an accomplished researcher and ML practitioner who has published over 30 research papers

and has a breadth of experience in developing algorithms across domains, including health and wellness, sports, privacy, artificial intelligence, ethics, and energy. Her most recent research on digital biomarkers is world renowned, and her emerging work on responsible AI strives to solve important problems in the application of AI in the real world. She's also an ultra-marathoner and an artist, so I think it's...

I would love to go deeper into that. I think there's a big interaction there of sports, AI, and bio. I feel like that's part of your story and journey. So just to kick things off, what was your childhood like, and how did you decide to pursue a career at the intersection of healthcare and ML?

Brinnae Bent (01:58.734)

Thanks so much for that great introduction. It's a pretty lengthy one. So I appreciate you getting through it. Yeah, you know, it's such a great question. I think, you know, my background definitely, it's a wandering path to get there, right? It's such a journey. It wasn't like, you know, when I was five years old, I was like, you know what, I'm going to work at the intersection of healthcare and AI. That definitely wasn't it at all.

Alp Uguray (02:05.844)

Yeah.

Brinnae Bent (02:24.046)

In my childhood, I moved around a lot, got to see a lot of new, exciting places, got to meet a bunch of incredible people. I didn't actually really start thinking career-wise until I was in high school, probably like most people. In high school, I was a certified nurse assistant at a nursing home and in a home for children with cerebral palsy.

And in those roles, I saw a lot of problems around me. So things like alarms that would fall off of folks or wheelchair brakes that wouldn't work all the way and then people would start rolling away. And so I knew that I wanted to do something that would help fix some of these problems. And I was also very much in love with physics, very much in love with art and eventually decided.

to get a degree in biomedical engineering. So I went to North Carolina State University for biomedical engineering and did an undergraduate there, which was incredible. Absolutely fantastic. I got really involved in research there. I did research in wearable device design. So looking at, this was, this was back before wearables were a thing, right? The Apple Watch didn't exist yet. The Fitbit, you know, none, none of these things existed. So these wearables were,

pretty large and clunky and 3D printed, but it was really exciting to be on kind of that cutting edge of research and really able to explore what the world could look like with these wearable devices. After that, I ended up going into a master's in neural engineering. So I also loved my neuroscience based courses and neuro technology. So I did a master's in neural engineering at Duke.

specifically looking at spinal cord stimulation and creating a closed loop feedback system for that, which that's where I started to get into the software side of things, which is then I did a PhD in machine learning, but for healthcare at Duke as well. So kind of that software brought me into the ML side of things and my PhD was bringing that wearables component, but then also bringing that software and machine learning side together.

Brinnae Bent (04:41.416)

And so that's what I worked on for my PhD, a variety of different projects, the dissertation culminating in some of that non-invasive glucose monitoring. And that is kind of that journey.

Alp Uguray (04:52.898)

That is incredible, because you've seen the hardware side, the software side, as well as the real impact. And again, before wearables were even a thing. So take me back to that time, when you were working on the wearables and looking at how the hospitals were, and the products that people were using at the time, compared to today, which is a huge leap forward.

Brinnae Bent (05:03.8)

Mm-hmm.

Alp Uguray (05:23.162)

And tell me, like, what was in your mind at the time, when you saw the opportunity and you wanted to solve the problems of others?

Brinnae Bent (05:33.772)

Yeah, absolutely. This is such a great question. It actually kind of brought me back to some talks that I gave when I was a student working on this research. And I would give a lot of talks to prospective high schoolers that were interested in going into engineering. And I would bring this giant pulse oximeter equipment. I kind of half stole it from the biomedical engineering lab, grabbed this large piece of equipment, took it down there and said, this is the state of the art today. And then I held up this very small, flexible.

circuit board that I had created myself. And I was like, this is what is going to be in the future. And so I held that up and that was what was being used in our wearables at the time was this flexible circuit board substrate so that it could conform directly to the body. And so that was kind of, you know, that moment where I was like, this is the future. This is where it is now. And, you know, you see this immense size difference, but then also accessibility, right? Because those large pulse oximeters are in labs and in hospitals.

And I thought it would be so beneficial to be able to bring that out of that clinical environment, out of that laboratory environment, and be able to bring it directly to people going through their everyday lives. I don't think I could have imagined the scale that it has come since then, right? And so many people now use these devices, use these devices for critical health monitoring. But at the time, it was definitely something that I was hopeful for for the future.

Alp Uguray (07:02.678)

It is incredible now to imagine that it has rather democratized access to information for everyone. And now anyone is able to see that data, derive insights from it, and detect early signs of potential health issues, which is really important as we make

decisions for ourselves, and conscious decisions at that. So, like, we don't have to go to the hospital to get expensive procedures to be able to get that intelligence. So I know you've done some work around the digital biomarker discovery pipeline. So tying that wearable work to the digital biomarker discovery pipeline,

how did that progress forward, and then how did you think of open sourcing that project so that anyone could build on top of it and benefit from it?

Brinnae Bent (08:09.472)

Yeah, absolutely. So, you know, I was building these wearables. We saw all of these, you know, incredible potential for these things, but the issue was these devices had a ton of data, right? And this data wasn't necessarily actionable. So that's really where that machine learning component comes in, right? Is being able to parse through this immense amount of data.

that you're getting on the minute or even second or sub-second level on some of these devices and being able to utilize that data in order to have something that's diagnostic or predictive or preventative. And so that's really what digital biomarkers are about is taking that huge amount of data from wearables and being able to make some prediction about the future or the current state of someone's health. So that's what I worked on.

And as part of that, what I ended up doing was founding the Digital Biomarker Discovery Pipeline. Like you mentioned, this is an open source project where folks from all over have pooled together different resources and projects around developing digital biomarkers. And we've open sourced it so that everyone can, you know, base their work off of other people's, and it can continue to grow and become better and better and better.

I think that's what's amazing about the technology space, right? You don't have to build everything from scratch again and again. And that's why I really love open source: you're consistently getting better and improving based on other people's work. And it's a very iterative process. And so, you know, that's why I absolutely love open source, why I'm a huge supporter of it, and why I helped with that incredible project and continue to help with it.
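
To make the idea concrete, here is a minimal sketch of what a simple digital biomarker can look like: turning a stream of raw wearable heart-rate samples into a personal resting-heart-rate baseline and flagging nights that deviate from it. This is an illustration only, not code from the Digital Biomarker Discovery Pipeline; the column names, window sizes, and threshold are assumptions.

```python
# Minimal illustrative sketch: turning raw wearable heart-rate samples into a
# simple "digital biomarker": a personal resting-heart-rate baseline plus a
# flag when recent values drift well above it. Column names, window sizes, and
# the z-score threshold are hypothetical choices, not the DBDP's actual code.
import pandas as pd

def resting_hr_biomarker(samples: pd.DataFrame, z_threshold: float = 2.0) -> pd.DataFrame:
    """samples: columns ['timestamp', 'heart_rate'] at roughly minute-level resolution."""
    df = samples.copy()
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    df = df.set_index("timestamp").sort_index()

    # Nightly resting HR: the lowest 5% of overnight (00:00-06:00) readings.
    night = df.between_time("00:00", "06:00")
    nightly_rhr = night["heart_rate"].resample("D").quantile(0.05).dropna()

    # Personal baseline from a trailing 28-day window, shifted so today is
    # compared only against past days.
    baseline = nightly_rhr.rolling(28, min_periods=7).mean().shift(1)
    spread = nightly_rhr.rolling(28, min_periods=7).std().shift(1)

    out = pd.DataFrame({"resting_hr": nightly_rhr, "baseline": baseline})
    out["z_score"] = (out["resting_hr"] - out["baseline"]) / spread
    # An elevated resting HR relative to one's own history is the kind of
    # signal people notice before illness; here it only raises a flag,
    # it is not a diagnosis.
    out["flag_elevated"] = out["z_score"] > z_threshold
    return out
```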

Alp Uguray (09:59.076)

And in terms of your interest in the study of the brain, because you sit at the intersection of neuroscience, hardware, ML, and software: what inspired you, within the study of the brain, and maybe from the neuroscience perspective, to go deeper

into the intersection of these studies?

Brinnae Bent (10:28.512)

Yeah, absolutely. That's such a great question. you know, working in the nursing home, like I said, I saw all these problems around me and, I wanted to solve those from a very like engineering, you know, physics driven perspective. but then when I was doing my studies as an undergraduate, I took a step back and I was like, well, you know,

actually, I want to explore like what's actually, you know, happening inside of people's brains, right? I worked in an Alzheimer's clinic. Like I really wanted to better understand Alzheimer's and better understand, you know, how we could potentially engineer around the disease itself. and so that's what initially inspired me to, you know, go into the neural engineering side of things and try to look at the brain from more of an engineering and scientific perspective.

Alp Uguray (11:18.131)

That's really interesting. And, just for the audience, in terms of the digital biomarkers that the wearable devices store: to what extent today are they able to capture information in relation to, say, detection of early Alzheimer's, or detection of something as simple as glucose levels? Especially

seeing from the past, where wearables were barely a thing, and now there's the Whoop, the Apple Watches, and so many products out there. To what extent has the technology developed, and has it moved to accurately predicting that information? And to tie that to the broader aspect of it: do you see healthcare in the future enabled so that people

are fully aware of what their bodies or what their brains signal them about themselves, and then able to control that? How do you see that landscape growing later on?

Brinnae Bent (12:29.164)

Yeah, absolutely. No, no, it's a fantastic question, right? There has been a large body of research that has come out in recent years around the ability of these wearable devices to be predictive and diagnostic of different medical diseases and disorders, and potentially, you know, to help with interventions. I will caution, you know, folks watching this, though, that

Alp Uguray (12:30.226)

That was a loaded question, I know.

Brinnae Bent (12:58.99)

Hardly any of those have actually been released, right? These are usually pretty small studies that are not the large scale that would be required in order to have a 510K cleared medical device where you could actually use it for predictive or diagnostic purposes. But I think regardless of the...

medical clearance status, it has immense potential on a personal level to be able to provide you more information about your health and wellness and to potentially, you know, help you then intervene in your life, right? You know, to use the traditional example of drinking too much coffee right before you go to bed and that impacting your sleep. This might seem quite obvious, but I think a lot of people were very surprised that

They can have a coffee as early as noon and that can still impact their sleep later on. So it can help encourage different behavioral mechanisms. From a personal side of things, I read a paper a couple of years back about folks using the Oura ring for early pregnancy detection, where they could actually detect pregnancy five days before a pregnancy test.

that you would buy at the drug store. And a couple of months later, I got to test it out on myself because I actually noticed from my Oura ring data that I was pregnant before a pregnancy test picked it up. So that's how I found out I was pregnant is from my Oura ring, which is crazy, right? It's incredible. And similarly, if you get sick, usually one of these devices, there are signs from this device that you're about to get sick, right? And then you can take necessary precautions.

Brinnae Bent (14:49.774)

you know, by maybe staying home from work that day or keeping your kids home from school if you do suspect that you or your family is getting sick.

Alp Uguray (14:59.458)

So in that way, like, it helps build habits, like personal habits, and an understanding around it. But also, to your point of detecting early pregnancy: that is huge, because it helps with interpreting the body's signals. And I know that you've been an ultramarathoner as well. How did this technology, and you being so much at the

center of it, help you become an ultramarathoner?

Brinnae Bent (15:31.16)

Yeah, great question. So while I was doing my master's in neural engineering, I was also doing a bunch of obstacle course races and traveling the country doing obstacle course races. And I started getting into ultra running. And so I like to say that ultra running also inspired me to pursue that PhD focused on machine learning,

because I thought it would be really cool to take all of this data that I was collecting from wearables, on my body and be able to, you know, come up with these optimal training plans. I still haven't come up with an optimal training plan from this data. but you know, it was, I think it was really interesting to look at my data, to look at different points in my run and be able to notice, you know, here is where like I should have eaten something, right? Because.

My heart rate starts to go up. My pace starts to decrease. I start to feel worse physically. So I probably needed to eat something at that point in the run. One thing that I find really funny is, looking through all this data, I saw this huge heart rate spike and I was, like, super freaked out by it. And it turns out, as I thought back through the run that day, that was when a bee chased me.

And so I had these, like, bees all around me and they were chasing me. And so I was like, you know, sprinting on out of there. Exactly. You know, but this is all, you know, very interesting stuff from the data, and it makes you think about, you know, what are the tools that we can use to take this data and be able to make useful insights from it.

Alp Uguray (16:53.008)

Then you sprint.

Alp Uguray (17:09.948)

How does that tie into the industry aspect of it? Because they use these tools in sports and in games, from football to soccer to basketball, to capture the performance of players. In your experience, working with the tools and the data side of it as well, how do they approach it? And to what extent does it get very...

pattern oriented, right? Like, I think there's also the aspect of data that tells us more information about our body: like, at this point your speed is increasing, and there's a heart rate spike. But some of it is also the mental side of it: yeah, I'm going to run 50 miles and I'm not going to stop. And then there's an aspect of it, like not giving up, as well. So to what extent

are you able to see that in the data, and to what extent does that translate into the sports?

Brinnae Bent (18:16.514)

Yeah, wonderful question. What's interesting about the body is that it doesn't necessarily know the difference between different stresses, right? From a physiological perspective, it looks very similar, whether you're nervous about a test or you had a hard training day the day before, or you have some stressor with your family at home, right? Stress to the body looks very similar. And so we can...

utilize these wearable sensors to be able to explore, you know, from a stress perspective, whether you are stressed and, you know, what impact that has on your body, on your recovery, on your sleep. So that's a huge component of sports monitoring: looking at stressors on the body. Now, when you, you know, are a very serious athlete and you're focused on, you know, a specific race or a specific event,

Oftentimes, most of your stressors are coming from training load, but there are always these other stressors in life that come into play as well. And this can be really helpful for coaches then and other professionals who are helping an athlete meet their goals to better understand, you know, what might be impacting the training of that particular person.

Alp Uguray (19:39.711)

And to what extent is stress a motivating factor versus a demotivating factor? Because it could inhibit someone and leave them unable to react to a situation, but it can also fuel someone:

"I'm so stressed, I need to get this done", that type of motivator. To what extent is it a benefit and a negative?

Brinnae Bent (20:06.934)

Yeah. So, with the full caveat that I'm not a stress researcher, right? So you should definitely reach out and talk to stress researchers or physicians who are focused on stress. What I find, you know, comes up a lot is the difference between acute stress and chronic stress. So you're very right in that, you know, acute stress, where, you know, you have something to study for, you need to get it done, or you have,

Alp Uguray (20:10.706)

Right, yeah.

Brinnae Bent (20:31.148)

you know, a big talk coming up, and so you feel nervous before that event. That can give you a lot of positive benefits, right? It can also be kind of a little bit of an antidote to future stress, because typically, if you've, you know, done a lot of public speaking, you know this: that first time that you get up and speak in front of an audience, it's so scary, and it's still scary the next few times, but it gets less and less. So that acute stress can be really, really beneficial.

The problem becomes when it's chronic stress. So that's when stress happens repeatedly over time and your body gets no rest from that stress. That's typically what can lead to burnout, right? Or other potential health concerns is when you have that chronic stress component. We want to stay away from chronic stress, but we're okay with some acute stress.

Alp Uguray (21:23.922)

Yeah, I think it's very interesting. The reason why I asked is that by capturing those spike moments of stress, and then understanding what caused that spike, we can better alleviate it within the game or in the setting of work, so that we know why we get stressed. So I think that's really powerful. In terms of tying that to developing machine learning models,

and especially the predictive side of it: your work included predicting glucose levels non-invasively using smartwatches. So what were some of the key challenges and breakthroughs that you faced in this research?

Brinnae Bent (22:09.814)

Yeah, it's a great question. So some of the key challenges are definitely around having a large number of diverse people in your studies. This is always a challenge in research because research is extremely time consuming, extremely expensive to run, and you want to have as much representation as possible in your studies.

Especially when you're developing machine learning models, because machine learning models tend to overfit and tend to not generalize if you don't have a really nice diversity in your data set. So I'd say that is definitely one of the large challenges of this work. I think some definite breakthroughs of this work were in developing models that are more explainable.

And so developing machine learning models and building on top of those with explainable machine learning techniques in order to create something that you can then explain to a physician or explain to a patient the specific reasons of what caused their prediction to be high. This is, you know, a

I think an overlooked aspect of machine learning and AI in general is that understandability component. Like how do we build systems that are more understandable, especially in high risk domains, right? So healthcare is one of those high risk domains. Digital biomarker development is definitely a high risk domain, right? If you're predicting something about someone's health, you want to be very confident in your answer, but you also want to be able to explain why your answer came, why your model came to that conclusion.

And so by incorporating those explainable ML techniques, we were able to then be able to explain it to the physicians that we were working with. And they were like, yeah, that makes complete sense. And they actually provided more insights based on their domain expertise than we would have initially been able to just understand on our own as the engineers on the team. And similarly, then when you bring it to users or patients in a healthcare context,

Brinnae Bent (24:22.134)

giving them the rationale for why the prediction happened can be very powerful and can be very behavioral altering. They can think through, these behaviors led to this prediction, so I'm going to make changes to those behaviors.
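
As a rough illustration of the kind of post-hoc explanation being described here, the sketch below applies permutation importance, one simple feature-attribution technique, to a black-box model trained on synthetic wearable-style data. The features, data, and model are placeholder assumptions for illustration, not the actual glucose pipeline from this research.

```python
# Rough sketch of post-hoc feature attribution on a wearable-style model.
# Features and data are synthetic placeholders; the point is the technique:
# permutation importance shuffles one feature at a time and measures how much
# the model's performance drops, giving a human-readable ranking of features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(70, 10, n),     # heart_rate
    rng.normal(40, 15, n),     # heart_rate_variability
    rng.normal(7, 1.5, n),     # sleep_hours
    rng.normal(8000, 3000, n), # step_count
])
feature_names = ["heart_rate", "heart_rate_variability", "sleep_hours", "step_count"]
# Synthetic target loosely driven by two of the features, plus noise.
y = 100 + 0.6 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda t: -t[1]):
    # Larger score drop when shuffled means the feature mattered more.
    print(f"{name:>24s}: {importance:.3f}")
```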

Alp Uguray (24:40.886)

I think explainable AI is a massive topic from both the classical machine learning side and the large language model side as well. And perhaps on the machine learning side there are more indicators to understand why the model shows a certain behavior, whereas with large language models they're still trying to figure out what's in the back, in that black box.

Based on your work, how do you see AI being implemented responsibly within the healthcare setting? And to what extent is there still a lot that we don't know about what is going on?

Brinnae Bent (25:26.03)

Yeah, absolutely. So I teach whole courses on this, around responsible AI and thinking about how to do this appropriately in high-risk domains, but also in non-high-risk domains, right? So even in domains outside of healthcare, finance, loan approvals, you know, those more traditional types, there are other domains where it is really important.

Brinnae Bent (25:49.806)

It might not be quite as critical to be able to explain your models, but it's very helpful to know, for example, in a manufacturing context or in an agricultural context, why your models are making certain predictions. From that healthcare lens, I really like to think about using inherently interpretable machine learning models whenever possible.

These are probably some of the more basic machine learning models that you think of, right? Things like decision trees, things like regression-based methods or rule-based methods. These can be extremely powerful and just as predictive as some of those larger neural network-based approaches. However, for some applications, right, you need to use a neural network-based structure. And what's really been cool about research over the last few years is that there's a lot of research being done on how do we make

these neural networks be more inherently interpretable, where, when you look at the model itself, you can understand how it arrived at its prediction. So some work around, like, MonoNets, ProtoPNets, these prototypical neural networks: some really interesting research there in trying to figure out how to make these neural networks more interpretable. So I highly encourage folks to go and look those up if those are new terms for you. But I think, you know, in general, there's

a lot of ways in which we can use these interpretable ML models to come up with predictions that are probably just as accurate as those non-inherently interpretable methods. Now, I myself have used some not inherently interpretable methods, and I understand their use cases. Things like XGBoost, Random Forest, incredibly powerful algorithms.

Alp Uguray (27:19.488)

Yeah.

Brinnae Bent (27:29.35)

A lot of neural networks, incredibly powerful algorithms in the healthcare setting, in sports and fitness settings, in a variety of different domains. Very challenging to have inherently interpretable, right? These are definitely more of a black box. And that's where you can bring in these explainable machine learning methods, things like feature attribution, feature visualization, really trying to get at an understanding of how your model arrived at its output.

Brinnae Bent (27:58.926)

And so you, it's still a black box, right? You still don't know the exact mechanisms, but you have a better understanding of how that model arrived at its output. And so using those whenever you can't use those inherently interpretable methods. And then there's a lot of really fun new technologies and new methods coming out. Things like mechanistic interpretability for in the large language model context is extremely interesting. There's a lot of

interesting work there that's being used from an interpretability side for LLMs, especially around AI safety. And then also there is a bunch of new approaches for explaining LLMs in general, which in a healthcare context is just so critically important. A lot of healthcare institutions, healthcare professionals are using LLMs. They are a black box. There are a lot of limitations. So by incorporating some of these explainable techniques,

we can make them more understandable and better be able to trust them, which is really important in these high-risk domains.
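
To ground the distinction she draws between inherently interpretable models and post-hoc-explained black boxes, here is a small sketch of a shallow decision tree whose learned rules can be printed and read directly. The data and feature names are synthetic stand-ins chosen for illustration, not anything from her research.

```python
# Sketch of an inherently interpretable model: a shallow decision tree whose
# learned rules can be printed and read end to end. Data and feature names are
# synthetic stand-ins used only to illustrate the idea.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 1000
hrv = rng.normal(40, 15, n)           # heart rate variability
sleep_hours = rng.normal(7, 1.5, n)
resting_hr = rng.normal(60, 8, n)
X = np.column_stack([hrv, sleep_hours, resting_hr])
# Synthetic "high stress" label driven by low HRV and short sleep.
y = ((hrv < 30) & (sleep_hours < 6.5)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The entire decision process fits on a few lines and can be handed to a
# domain expert as-is, with no separate explanation step required.
print(export_text(tree, feature_names=["hrv", "sleep_hours", "resting_hr"]))
```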

Alp Uguray (28:59.696)

And from the perspective of, so I have two questions, like one from one research side, because you were at the intersection of like healthcare, neuroscience and AI. I just spilled my coffee.

Brinnae Bent (29:16.357)

no. gosh.

Alp Uguray (29:21.19)

Yeah, that was going to be a good question. That's all right, that's all right. My question is: to what extent, especially tying neuroscience to AI and the explainability of large language models, does the research from the neuroscience and healthcare side,

Brinnae Bent (29:23.15)

If you need to take a pause, like totally fine.

Alp Uguray (29:50.414)

on better understanding the human brain, drive better research on designing more interpretable large language models? And as the research on the LLM side progresses, do we actually better understand the human brain in a way that's as interpretable?

What do you think around that?

Brinnae Bent (30:19.958)

Yeah, these are really big topics, right? And I could only barely scratch the surface even if we spent the entire hour just talking about this. But I'll give a couple of tidbits that I think are really interesting. And one of those is around multimodality, right? So we have these embedding spaces within neural networks and these contextual embedding spaces within these transformer-based neural networks.

Alp Uguray (30:28.466)

Yeah.

Brinnae Bent (30:49.672)

And what happens with these is you have all of these different concepts, right? These could be images. They're usually text-based, right, from a large language model perspective. But all of this information, and all of this information is encoded into numbers, right? Because a computer needs numbers in order to make any kind of computations. And so

Typically, if you think about it, like, from a human perspective, we can understand two dimensions pretty well, right? We can understand two different features, right? Like, you know, what makes a dog a dog, maybe, you know, pointy ears and a tail. So we understand those. We could plot this out in a graph and really understand two dimensions. Three dimensions, things start to get a little harder, right? If you look at a three-dimensional plot, you're, like, kind of scratching your head and you're like, okay, I kind of get how these features relate to each other.

But then when you think about these large embedding spaces where now we have vectors that are hundreds or thousands of numbers long, these are hundreds or thousands of different dimensions of different features, that becomes pretty much impossible for humans to be able to understand. Similarly, in the brain, you have all of these different signals. This is a very multimodal system, very multidimensional.

both sides can learn from each other in terms of visualizing these spaces better, being able to understand these large multidimensional spaces better, and being able to explain them to other people. So I think both sides have a lot to learn there. And that's where I'm excited to see a lot of integration between neuroscience-inspired techniques and techniques that are typically used in the artificial neural network capacity.
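
As a tiny illustration of the visualization problem being described, a common first step is to project an embedding space with hundreds of dimensions down to two so humans can at least see its rough geometry. The embeddings below are random stand-ins rather than activations from a real model, and PCA is just one of several projection choices (t-SNE and UMAP are common alternatives).

```python
# Tiny sketch of the visualization problem: humans can't read a 768-dimensional
# embedding space, so a common first step is projecting it down to 2D (here
# with PCA) just to see its rough geometry. The "embeddings" below are random
# stand-ins, not activations from a real language model.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# Pretend we embedded 500 tokens or concepts into a 768-dimensional space.
embeddings = rng.normal(size=(500, 768))

pca = PCA(n_components=2)
coords_2d = pca.fit_transform(embeddings)   # shape (500, 2): plottable

print(coords_2d.shape)
print("variance explained by 2 components:",
      pca.explained_variance_ratio_.sum())  # how much structure survives
```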

I think another really interesting thing from an explainability perspective is, you know, people have been poking the brain for a really long time, right? For hundreds of years, people have been like, you know, poking the brain and kind of seeing what happens, right? Like taking off a piece of the skull, making these little perturbations and then seeing what happens. And similarly, when we look at these black box machine learning models, a lot of our explainability techniques do the same thing, right? We like poke it a little bit. We try to change some input and see what that output is.

Brinnae Bent (33:11.576)

So a lot of explainability methods, I see direct correlations to what's been done in neuroscience and psychology research for decades, possibly even hundreds of years. So I think they definitely inspire each other, and it'll be exciting to see what happens in the future as both of those fields evolve together.

Alp Uguray (33:33.978)

In a way, they complement one another really well and push each other forward really well. And especially from the classroom side, because in the classes that you teach, you have students who are actively thinking about what is next: the next generation of technologies, the next generation of problems that stand out to them. What do students think about this stuff today,

right in the classroom? Like, what are some things that inspire them about the future? And I think there are two components to this, right? One of them is that they are positive about the future, and they want to work on it to make it even better. But they are also looking at the things where they say, this does not look great, and then they want to improve it. They don't want to live in a future where maybe the world's decisions are

created by LLMs, or where the world's decisions are created by LLMs whose outcomes we can't interpret, right? So what type of a role do they imagine? Just from, as a teacher, seeing their work and thoughts.

Brinnae Bent (34:47.66)

Yeah, you know, it's been really incredible, I think, for me, in teaching these courses around explainable AI and responsible AI. I thought it would be a pretty niche subject that only a few people would really want to learn and care to learn. I was expecting about 15 students in my class. So when it ballooned to 50 and we couldn't fit anybody else in the room, you know, that was a big moment for me, because I was like, wow, people

really do care about this stuff, right? They really want to learn how to develop these systems more responsibly, how to think about explaining a large language model, how to think about mechanistic interpretability and what that potentially means from an AI safety perspective. And not only that, but then inside the classroom, people wanted to have really deep discussions about topics, right?

This, you know, I say this in my first lecture, right? This field is evolving. This field is changing so fast that by the end of this class, we might have totally new information and we might have to completely redesign the course content, right? Because this field is moving so fast. And I was just so thrilled that students wanted to talk about that. They wanted to talk about how things are evolving. They wanted to talk about AI safety, the challenges with it. What

does moral AI mean? And what does it mean to create a system that is moral? Can we create systems that are moral and collectively moral? Can we use principles like game theory and constitutional AI and bring all of these different things together to create systems that are more aligned with human goals, ideals, perceptions, and beliefs?

So, you know, we really challenged each other in the course and I think that was really fun. Not something that as an engineering professor you normally see, right? Because usually people are pretty focused on, you know, writing the code and learning the skills and techniques and then building applications based on it. But I was just really inspired by the deep discussion that students had in the course.

Alp Uguray (36:56.005)

And to what extent do the school, and then you as a teacher, leverage LLMs? Like, to design the coursework, or, it could be, to grade the homework or give feedback on the homework. Do you have a certain way that you're leveraging LLMs today?

Brinnae Bent (37:18.094)

Yeah, it's a great question. I think a lot of people, you know, are using them, but everybody does it a little bit differently. I think from my perspective, I really want to know how to, you know, condense some of these topics down into different modes of explanation, especially because I have students for whom this is the first AI or machine learning class they've ever taken, right? And then I also have students who, you know, have developed a transformer from scratch on their own.

So there's like a huge disparity between educational levels around AI and machine learning in the class. And so I try to explain things in different levels of detail. And I find that using AI to help with that and to come up with interesting ways of visualizing things, especially interactive visualizations, that has been really fun to.

collaborate with Claude to put that together. And then I can publish the artifact, and students can go in and interact with this demo to understand something at a more fundamental level than they would get on just a PowerPoint slide. So that's what I primarily use AI for. In terms of grading, this was the first time I was running the course. And so I wanted to grade everything myself so that I really understood.

where the challenges and limitations that students were having, what were those, and so that I can address it in future editions of the course. So I did not use any AI system for grading, although maybe I should have because it took an immense amount of time, as I'm sure you can imagine, doing all of the grading of assignments on a weekly basis. But I learned a lot too from the students, and I think that was pretty fantastic.

Alp Uguray (38:48.658)

In the grading?

Alp Uguray (38:53.372)

Ha ha ha.

Alp Uguray (39:03.954)

That is very powerful. I think the use case that you just shared is very powerful, because in a way, personalizing feedback and personalizing the coursework per student, based on their skill set or their background familiarity with the subject, really democratizes access to education and tailors a course to every student

with many different backgrounds on the topic. So that is really, really powerful. I'd like to tie it back a little bit, from the classroom and the application of LLMs within the classroom, to how the students are thinking about building. What do you see on the student side? And the reason why I'm asking is that I'm a big believer that

the current generations are going to be the creators of the future services and products out there that will build the society. So in what aspects do you see students? Of course, they take AI seriously, and based on what you're saying, they want to have those tough conversations about LLMs and AI. But in terms of problem setting,

in terms of understanding what society needs today, how do they approach that problem solving?

Brinnae Bent (40:36.054)

Yeah, that's a great question and something that, you know, is a little challenging right now, especially for students who have an enormous amount of work. There is a lot of just stuff happening in the world right now. And so, you know, I think a lot of students may be, you know, tempted to just throw everything into ChatGPT and have it spit out some answers for them. What I prefer to do in my classes, to kind of subvert that a little bit,

Alp Uguray (40:58.449)

Yeah.

Brinnae Bent (41:04.618)

is to really focus on very open-ended projects. So some students hate this and some students absolutely love this. But the idea is, you know, really having them design a lot of their assignments to fit their particular interests and needs. Right? I said, I know, you know, there's going to be a lot of topics in this class. Not everyone is going to be interested in all of these. And that is fine. There might be like two topics that you really love about this course and you take away. And that's amazing. Right?

And so I want you to give your all for the assignments in those sections and then just kind of meet the requirements for the other sections. But really go deep and really think about those problems and the things that are interesting to you. I really like having these kind of culmination projects at the end of a semester that students are working on throughout the semester that is coming up with something that is new and hasn't been done before.

That's really, really challenging, right, for students to have to come up with something that isn't out on the internet already. And that's been really fun to see what people come up with. There's been a lot of really, really interesting work, research, both from a research perspective, from a building perspective, right, like building new products, building open source projects.

So it's been really exciting to see what students can come up with when given a very open-ended problem. And I think that's what, you know, the world is going to need moving forward. It is people who can be given something super open-ended and be able to build something with that.

Alp Uguray (42:41.082)

Yeah, that is very true, because I think just being courageous about tackling tough problems is going to be massive. So for the last section of the podcast, I'll ask you rapid-fire questions and you tell me whatever comes to your mind first. And then finally, I want to ask you about what you would want to ask the

Brinnae Bent (42:58.926)

Okay.

Alp Uguray (43:08.804)

next guest in the podcast. So you don't know who they are or what their background is, but what are some of the things that you are the most curious about today? And so we'll get there after. But just to get started: what is your favorite wearable on the healthcare wearable product side?

Brinnae Bent (43:19.63)

Amazing.

Brinnae Bent (43:28.375)

Oura ring.

Alp Uguray (43:29.924)

Oura, I don't know, I don't know why.

Brinnae Bent (43:31.415)

Should I explain it, or should I... is it like rapid fire? Like, I give you one word and then we move on, or do you want me to explain it?

Alp Uguray (43:38.552)

I'm curious, you can explain it.

Brinnae Bent (43:41.088)

Yeah, no, absolutely. So we know that on the finger, we usually have a little bit better signal from an optical, like, photoplethysmography perspective. And so you get a little bit better accuracy on the finger. And the ring is just a really nice form factor where you don't have a screen. So I really love that, because if you have an Apple Watch or a Fitbit or one of these other devices, you typically have a screen that's, like,

buzzing when you get a call, and it's very disruptive in your normal life. Whereas the Oura ring, you know, I just don't even know that I really have it on at any point in time. And that is fantastic. And it's just kind of passively observing. And I did this piece, it was, like, I don't know, probably four or five years ago now, around stealth wearables and how I just really like these wearables that nobody can really tell you're wearing. You can't really tell that you're wearing it at any given time, but it's collecting this

immense like rich data about you.

Alp Uguray (44:38.264)

That's amazing. I need to give it a try. So the next question is: if you were to create just one product, and forget about the ring, what would it be when it comes to wearables? What do you think is missing from this space?

Brinnae Bent (44:41.502)

Definitely give it a try. Highly recommended.

Brinnae Bent (45:04.618)

Yeah, this is, this is a great question. And I would circle back to that noninvasive glucose monitoring component, right, that I did my dissertation on. And this might be a little bit of a cop-out, right, 'cause I know so much about it and there's so much about this space. But I think there's a lot there to be done, and a lot of value for people, right? Like we, we've seen it with, like, Levels and Supersapiens.

Brinnae Bent (45:29.578)

And a variety of others. Now Dexcom has, like, a recreational continuous glucose monitor. And so people want this technology, right? They want to be able to know what their glucose levels are. To be able to provide that in a completely noninvasive way, that would be just incredible. And I'm very hopeful that there are some great companies out there working on it.

Alp Uguray (45:52.764)

Yeah, that's incredible, because I think that's definitely something that today's thinkers need to work on, building on top of all the technology that is out there, especially now with the open source tools that are available, to be able to detect and then productize something out of it. That would be really wonderful. So the next question is: which book has had the most significant impact on your life?

Brinnae Bent (46:20.886)

Wow. It's hard to say just one book. Maybe I'll give you two. The first one that comes to mind is Guns, Germs and Steel by Jared Diamond. I read this book over the summer between middle and high school. And it was the first time I ever read, like, a real academic book, you know, like, not a textbook, but a book written by a researcher who did, like, deep research on a particular topic.

Alp Uguray (46:26.219)

But the first one comes to mind.

Brinnae Bent (46:48.59)

And it was one of those books that kind of just like, you know, coming from like basically being a kid, it kind of like shifted my perspective about like history and the world because it was just a very different style of presentation than I had ever experienced before. So that one really stands out to me as being, you know, one of those like aha moments of like going from like being a child to being like more of an adult thinker.

So definitely that book. Recently, a book called Unmasking AI by Dr. Joy Buolamwini. Fantastic book. A lot of it was things that I had been thinking about for a long time, and she just put it into such a... it reads like a fiction story. That's what I tell my students. I'm like, you should read it, because it's so easy to read. It's so clear to read.

And it's just a fantastic description of potential AI safety risks, what we can do about it. And it's all wrapped up in this story that is basically a biography of the author.

Alp Uguray (47:55.11)

That's amazing. That's amazing. In terms of the... if you could give advice to your younger self, especially starting in AI, what would it be? I mean, you're already in AI, but if you were to give advice to someone who is starting in AI... maybe I can adjust the question to be more about someone who is just starting now,

very early on, is not in AI, and is studying maybe on the softer side of things, like psychology or history. What would be the one way for them to explore AI and automation?

Brinnae Bent (48:40.758)

Yeah, I'd say learn to code, right? I think that is, you know, one of those things that, when you learn how to do it and you build your first stuff with code, it's just an incredible confidence builder. And code is, you know, it's very similar to natural language in a lot of ways, where, you know, if you're telling something to somebody else, right, you're telling something to another human,

Alp Uguray (48:42.994)

Learn to code.

Brinnae Bent (49:04.198)

you have to, like, list out your steps, right? It's that whole, like, peanut butter and jelly demonstration, where you have to go through the instructions for that. It's the same if you were to tell another person, and code is just a way of telling a computer to do something that you want it to do. And I think code is extremely powerful, and I think more people who are not technical should learn how to code, because it provides just these incredible skills. And if you're able to build things for yourself, that's really awesome, and you can build all kinds of useful things for yourself.

Alp Uguray (49:34.29)

Yeah, that is probably very much the way: learning how to communicate with computers, right? Like, learning how to communicate with them, because they're a big part of our lives right now and will be. So for the last question: what would you want to ask the next guest? Like, what makes you curious these days that you find is not answered?

Brinnae Bent (49:44.975)

Yeah.

Brinnae Bent (49:54.766)

That's, that's really interesting. You know, I think, like, from a personal perspective, for the next guest, I'd be really curious to have them kind of reflect. Like, right now we're going through, you know, kind of the age of AI. We're going through this kind of shift in technology. And I'd be really curious, you know... think down the road in the future, you're talking to

your grandchild or your great grandchild and you're telling them about the age of AI, what do you hope that your role is in the age of AI? I would be really curious to know that from your next speaker.

Alp Uguray (50:30.994)

That's an incredible question. I'm looking forward to their answer, because in a way it's: how do you tell the next generation, and what do you expect them to build for the future? So that's very, very powerful. That's it. Thank you very much. It was a fantastic episode, and it was incredible to hear your experience and insights at the intersection of AI, neuroscience,

Brinnae Bent (50:44.974)

Yeah.

Alp Uguray (50:58.99)

hardware, wearables, and software. Again, I had tons of questions, but this was amazing. And again, thank you very much for your flexibility as well; I know it took some time to schedule.

Brinnae Bent (51:16.012)

No, thanks so much for having me out. It was super fun and looking forward to the next one.

Founder, Alp Uguray

Alp Uguray is a technologist and advisor, a 5x UiPath Most Valuable Professional (MVP) award winner, and a globally recognized expert on intelligent automation, AI (artificial intelligence), RPA, process mining, and enterprise digital transformation.

https://themasters.ai