with Dr. Steven Labkoff
How the AI Literacy Crisis Is Killing Healthcare | Dr. Steven Labkoff
Dr. Steven Labkoff joins the show to expose a growing crisis hiding in plain sight: senior healthcare leaders deploying powerful AI tools without the literacy to use them safely. The conversation covers why "democratizing data" without clinical context puts patients at risk, why "data is the product" inside life sciences, and what AI literacy actually has to look like inside a health system.
AI is reshaping medicine — but the people in charge often do not understand what they are actually using. In this episode of The Signal Room, Chris Hutchins sits down with Dr. Steven Labkoff — physician executive, clinical informatician, and former VP of Development and Medical Affairs Analytics at Bristol Myers Squibb — to examine why the AI literacy crisis is the most overlooked risk in healthcare leadership today, how the "democratization of data" movement is creating dangerous misreads at the patient level, and why the real differentiator in healthcare AI is not the model but the data underneath it.
Dr. Steven E. Labkoff is a physician executive and clinical informatician trained in cardiology and biomedical informatics with more than two decades of leadership across clinical care and the life sciences industry. He is Principal of Luminant Consulting, a healthcare informatics consultancy focused on AI-enabled data strategy, real-world evidence, and registry science. Previously, he served as Vice President of Development and Medical Affairs Analytics at Bristol Myers Squibb, where he led enterprise AI initiatives to improve trial recruitment, site selection, and operational performance across global clinical development programs. Earlier in his career, he held senior leadership roles at Pfizer and AstraZeneca and served as Chief Data Officer at the Multiple Myeloma Research Foundation, where he built CureCloud, the largest direct-to-patient registry in multiple myeloma. He is a Collaborating Scientist at Beth Israel Deaconess Medical Center and a lecturer at Harvard Medical School and Harvard Business School, and he hosts the Practical AI in Healthcare podcast.
Chris Hutchins: Well, welcome back to the Signal Room, where we dig into how AI, data, and real-world healthcare actually come together in practice. Today I'm excited to be joined by Dr. Steve Labkoff. He's a physician executive and an AI platform leader who has spent his career figuring out how to turn clinical, genomic, and real-world data into tools that actually move the needle in drug discovery, clinical research, and patient care. He trained in internal medicine and cardiology, then went on to lead informatics and data platform work at organizations like Pfizer, AstraZeneca, and the Multiple Myeloma Research Foundation, where he served as chief data officer. And today, he's a busy guy. He's running Luminant Consulting, he collaborates with the Division of Clinical Informatics at Beth Israel Deaconess Medical Center, and he hosts the Practical AI in Healthcare podcast. Steve, thanks for being here. It's really great to have you. Welcome to the Signal Room.
Steve Labkoff: Hey, my pleasure. Thank you very much for having me.
Chris Hutchins: Well, I want to jump right in, because we've had some interesting conversations, and I know you've got a lot of things you're working on. So I want to start with what's top of mind for you as you look at healthcare and AI right now. What's really occupying most of your thinking lately, and what are you paying closest attention to? There's just so much to talk about, I wouldn't even know where to start, but I'd love to hear your perspective.
Steve Labkoff: So, you know, it's funny, I was in a conversation within the last hour with somebody, and he asked me basically the same question. I didn't really set out to become a pundit in this space, but because of the podcast and the other work I'm doing and the publications, people are calling me and asking me questions like this. So this guy asked me the same question: What's the most pressing thing? What are the latest trends that you're seeing? And I'm gonna talk less about the trends and more about the things I'm seeing that I think are really important to pay attention to. It goes back to some of the work I did most recently when I was the VP of Analytics at Bristol Myers Squibb. What I observed while I was there, when they were trying to roll out AI, was that it's such a new technology. And by the way, it is a technology. People think that it's not an information technology. It really is, and it needs to be thought of in that framework. Across the organization, from senior leaders to middle managers to folks at the bottom of the stack, the secretaries and the administrative assistants, the number of people who actually understood what AI could potentially do was tiny. Certainly there was a group in the IT department that was very well versed in it and understood what's possible and what you could do. But it was a very, very tiny percentage of the senior leadership, or anywhere in the organization, that actually understood it in a meaningful way. Which led me, in the work that I'm doing at the Beth Israel Deaconess Medical Center in the Division of Clinical Informatics, into our last fall conference, which was called Signal to the Noise.
The purpose of the conference was to try to identify the types of AI solutions in healthcare that are making a difference, kind of like what you described in the opening. And one of the things we tripped into was this concept of AI literacy. It's one of the least talked-about AI issues out there. Other things people are talking about are governance and safety, and those are all important. But the one that's not getting a lot of airplay is this issue of literacy. And I think if it's not the most important thing out there, it's up there in the top three, because we're seeing hype and trends and everything moving at lightning speed, but most of the world doesn't quite understand yet what's possible and what it could do for you. And that's where the whole literacy conversation begins.
Chris Hutchins: Yeah, it's such an important point. The crazy pace of things now, with OpenAI and Anthropic jumping way out ahead and going direct to the public, is potentially impacting how patients are thinking about their care. Inside of health systems, we really have to get ourselves organized quickly and start to help people understand this stuff. It's a brilliant point to bring up. There are so many different kinds of AI, but not all of it is how people are thinking about it. And I think that's where it gets a little sideways, because as you mentioned, it really is technology. We've been technology-first for a long time, but we've done not so well when it comes to helping people understand what the intended uses are, what it's good for, and what it's not. And even worse, we've not really gotten to the people who are impacted by the workflow before we actually design something. How has that been in your experience? Have you seen a lot more of that kind of technology-first approach?
Steve Labkoff: Well, from what I'm observing, people seem to be treating this as something super novel, super new, and they're not using the same type of critical thinking that has typically accompanied every other new technological innovation over the last 30 years. You and I are in the same age group. We both cut our teeth in the mid-to-late 90s as we got into this space. And you'll remember, when the internet started, it was exactly the same kind of hype cycle, the same "you can do everything, it's gonna change the entire world." But it wasn't until the mid-2000s, 2004 or five-ish, that Amazon, eBay, and all these other big companies really found their footing. In 1993, '94, '95, when it was Netscape and it was just so cool, nobody knew exactly what to do until they realized, oh, we can sell things on the internet, we can search for things on the internet, we can get data from the internet. And I think we're in the same kind of moment. We're only three years into this whole journey with AI, and I think we still don't yet know where things are gonna settle. People have been talking about AI in how to practice medicine, in clinical decision support: we're gonna change medicine. Well, yeah, we are gonna change medicine, but probably not at the pace people are hyping. And that was one of the things that came out of our work at the Beth Israel: there are some really interesting things happening. We've had people talk about everything from diabetic retinopathy screening to pathology. And yes, AI can do, in some cases, a better job than humans at diagnosing these things and helping doctors think better.
But I'm not clear if that's where the real wins are going to settle out. One of the things we've seen is that the mundane use cases matter, use cases for things that are really hanging up humans, pushing paper, for example. I was just joking with a friend of mine: I run my consultancy and I don't do my own billing. My clients love that, because I don't bill them. And then I have to sit down and spend an hour or two doing my billing, and it's a task that I just loathe. It's a painful thing, even though I need it to get paid. But if I could figure out an AI agent that would help me with something as mundane as getting my bills out the door on time at the beginning of the month, that would be a win, right? And it would be a great win if it helps diagnose rare disease or if it helps us generate new drugs. All these things are really, really cool use cases, but I don't think those use cases are gonna win the day, so to speak. They're not the ones that are gonna win the hearts and minds of the masses, because the masses don't know anything about AlphaFold, they don't know anything about how to design a drug. They know that balancing their checkbook is a painful thing to do every month. And what do we do?
Chris Hutchins: Yes, yeah. I think the interesting opportunity for me, and I'm sure you have some thoughts around this and are probably involved in some things in this space too, but the potential for research is the part that I get excited about, if nothing more than having the capability to process information at a scale and a pace that we've never been able to before. That really excites me. But the data hygiene and data quality stuff, we've got to get on that. How long have we been talking about interoperability? And we've really not achieved any level of it that moves the needle and relieves the burden for a clinician who has routinely had things added to their list of data points they've got to capture, conversations they have to have about certain things so it's documented and can be billed. All that stuff. We just kind of went in the wrong direction. But we've got to get serious. We don't have a lot of time, I don't think.
Steve Labkoff: You know, I don't know about how much time we have, but I do think that you're not wrong, and that we need to be thinking about this a little differently than we may have in the past. Or maybe, stated slightly differently, we need to think about it with the same framework as things we've done in the past. Because at the end of the day, if you think about it like a standard information technology play or challenge, there are rules and ways of working that have been tried and true for 40 years about how to bring a new technology to bear. And right now most of the world is kind of ignoring that, because they're so enamored with all the really cool things: all the new video clips that can be made, all the artificial photographs. And all those things are really cool and really interesting, and to some degree a little concerning, when you have artificial people talking who look like President Trump or President Obama. Those things are highly concerning. But if we take a step back and look at it with the same lens that we use for any new technology we bring into our lives, it's gonna look a lot different. Initially, people are gonna use it for the things they already know how to do, and it will take over those things, but that's not gonna be the real win. The real win is gonna be when you're thinking completely out of the box and you come up with new use cases that are very specifically earmarked for what AI can literally do differently, not just doing your checkbook the same way you do it now. I think that's where time will settle out, and these use cases are gonna start bubbling to the surface. And we're seeing that. You mentioned my podcast: Practical AI in Healthcare is all about looking for exactly that.
What are the bright spots in the healthcare ecosystem, whether it's in patient care or nursing or engaging patients or life sciences? Where are the bright spots where these things are being found, and what are they? That's one of the core missions of why we do that podcast.
Chris Hutchins: I would love to rewind just a little and talk about what inspired you and led you into this. Because you mentioned earlier that you didn't set out to be dealing with data and AI. As a clinician, it's not like you didn't have a lot of other things you could do.
Steve Labkoff: So if you rewind my career, I started as an internal medicine doctor. I trained at the University of Pittsburgh, which was an amazing place to get trained. It was kind of the best of both worlds: it had both inner-city components and a lot of suburban components. So you'd see some of the really nasty things that you need to see when you're in training because of the downtown parts of it. And then you'd see a lot of high-end patients. We had sheiks coming into Pittsburgh from the Middle East who needed liver transplants, and their children and their families. So we would see these high-end patients and the very low-end patients. I got my training in that environment, which was an amazing place to train. But while I was there, I got hooked up with the division of informatics, which was run by a guy named Randy Miller at the time. And I got hooked into doing some work, and I'm gonna call it peon work, because they were working on an artificial intelligence system called the Quick Medical Reference, or QMR. I was one of these residents that got hired to do a lot of research to create what they called disease profiles. I was looking for the weights: how do you come up with a profile, using a deterministic model, to determine whether or not the patient had a given disease? And I got to meet the department and all these people in the field. Eventually, after I did my cardiology work, I decided to go do a fellowship in informatics. I left cardiology and went to Harvard and MIT to do a clinical informatics fellowship, a combined program between Brigham and Women's Hospital and MIT. I spent some time up in Boston, and then, after the fellowship and faculty time were done, I left Boston and went to life sciences. I got hired by Pfizer, and for 13 amazing years, I got to do things that are not what a typical doctor in pharmaceuticals gets to do.
I was not what's called a medical science liaison, or MSL. I wasn't working in the clinical trial space. I was working on the cutting edge of healthcare technology and how it was affecting the life sciences industry. And that was an amazing thing to do. It led me into programs that I haven't been able to rival since. I got to build a hospital in Africa, in Kampala, Uganda. For four years, I was going back and forth to Kampala, orchestrating all the healthcare IT infrastructure for this hospital. And by the way, if you'd asked me what I knew about building hospitals when I started, I didn't have the slightest clue, let alone doing it in sub-Saharan Africa. But over the course of the time I was there, we built out the infrastructure. We treated it like any other technology program, with different constraints. I raised almost two and a half million dollars in funding and donations in kind from places like Compaq Computer and Hewlett-Packard and IBM and SAS, the statistics company. We got licenses, we got hardware, and we implemented all this stuff at the organization, which is called the Infectious Disease Institute. I recently interviewed somebody for the podcast who's working there. It's still up and running 20 years later. Just before I left Pfizer, I started a department there called Healthcare Informatics, which was looking at the policy implications of what healthcare informatics was doing to the industry. I was almost like a lobbyist: I was in Washington twice a week, briefing Senate and House staffers on e-prescribing and standards and other things, and I started that group up back in those days. Fast forward, I did another stint at AstraZeneca on real-world evidence. I started three departments there, in real-world evidence, biomarkers, and clinical trial design.
A few years later, I ended up as the chief data officer at the Multiple Myeloma Research Foundation, building up one of the biggest medical registries ever stood up to study myeloma, the blood cancer. And then most recently, as I said earlier, I was the VP of Analytics for clinical operations and medical affairs at Bristol Myers Squibb until they went through a major downsizing, 10% cuts, after they missed a bunch of trials. And I struck out on my own and started doing my consulting through Luminant Consulting, which is my current firm.
Chris Hutchins: That is amazing. I don't think I've ever heard anybody rattle off anywhere near the number of things that you did. That's just not typical for a physician to be doing. Extraordinary. I hear the passion that you have; there's no mistaking that. But I want to try to get into what you're doing now, and maybe talk a little bit about what it is that's driving you and keeping you so engaged. It's got to be really demanding. And by the way, I happen to know from our previous conversation that you also raised a family during all this activity. Amazing to me. But please tell me, what are some of the things that really get you excited, that are motivating you with the things that you're working on now? And maybe start to talk a little bit about what some of those things are.
Steve Labkoff: Well, every time there's been an interesting, novel thing in healthcare, I've kind of gotten involved at the beginning of it. When I started my fellowship at Harvard, it was handheld computers. I did the very first major study of how handheld computers were used in clinical care, back in 1993, '94, '95, and that got published. I did it in concert with Apple; it used Apple Newtons. We spun out a small business called Skyscape Computing. Anyway, fast forward: what's getting me up in the morning now is that we're at another inflection point, with some really interesting innovations coming to medicine. I knew about ChatGPT in the fall of '22, but when I went to HIMSS in the spring of '23, around March, and I saw how crazed the world got in such a short period of time, I was like, wow, this is a game changer. I've got to get myself involved in this. I want to be an expert in this. I want to become completely in the weeds with all this stuff. And I did. I already had my appointment at the Beth Israel, and I convinced them that this was the most important new topic in healthcare informatics in 25 years. I convinced them to run these conferences. We've published almost six papers now on the topic. We try to influence policy at the highest levels through this. And what gets me excited is being here and learning and continuing to try to have impact, and in some ways to help steer the ship. Not steer the ship so much, but at least call out where there are issues that people aren't paying attention to, like AI literacy. We published four papers out of our first conference: we looked at AI governance in healthcare, and we looked at how it's affecting patients and how it's affecting real-world evidence.
And I was the lead author on one on clinical decision support. These are all the earliest places where people are planting flags and trying to get some value out of AI. And I want to make sure that it's safe for patients, that it's providing credible, trustworthy information, and that the world in general is not gonna go down weird rabbit holes because of hallucination, but that we actually are looking at it with a critical eye. And that critical-eye piece is quite important, because I'm afraid what we're gonna see is the dumbing down of medicine. I say that with a lot of concern, to be honest. You want to know my biggest concern? That's one of them: medicine is already getting dumbed down in some respects, and it's gonna get a lot worse, because people are gonna become overly dependent on and overtrusting of what they're being told. Maybe not without good reason, because Claude and ChatGPT are just spectacularly good at what they're doing now, and it's only been three years. That means this is the worst it's ever gonna be in our lifetime. It's only gonna get better.
Chris Hutchins: Yeah, that's the piece that's a little worrisome to me. I know you work with this technology yourself. The people that are not as involved with it but are using it a whole lot are not seeing some of the flaws that you and I run into on a daily basis, because it drifts. The line of questioning and the probing that you do actually affects it. It can spin way out of alignment with the things that you're trying to do. And it's so complimentary. I think it thinks I'm a really super nice guy. I'm sure it loves you too.
Steve Labkoff: Oh, I've named my ChatGPT voice. I call her Zora, in homage to the computer on Star Trek: Discovery. And you know, it's funny, people use these tools in a lot of different ways. I don't generally use it like a search engine. Sometimes I do, but I like to see the world through a systems approach, and I try to get it to help me connect the dots, to see the things I'm not seeing across, let's say, three or four big things that are happening concurrently. I try to figure out: do they connect? If they do connect, what are those connections? Are those connections something that could be influenced? Are they things we should be concerned about, or should we be doubling down on them? And I find it to be a really helpful thought partner. But that also means I don't necessarily trust what it's telling me. I'm using my own critical thinking, and it's working as a helper with that. My concern is that you ask it things and it gives you interesting answers that sound really credible, and you just take it hook, line, and sinker and don't test it.
Chris Hutchins: Yeah. I can't even tell you how many times I've tried to train it not to do certain things, and the very next thing it does is exactly what it just agreed it wouldn't do. It's like dealing with a child in some ways. Although a child will stop doing certain behaviors; you can condition them better than you can condition AI, I think.
Steve Labkoff: Well, I have a client I was creating a deck for, and I've started using NotebookLM to do nice graphics. Writing slide decks is one of those things, like balancing your checkbook, that I really can't stand in life. And NotebookLM does a nice job. So I was working on it this morning. I gave it a really good prompt with all the details I wanted, and it came back about 95% right. Then I said, okay, edit the slides, and I gave it edits on each slide. And what did it do? It messed up slides that had nothing wrong with them. So I ended up having to download three or four versions of what I was working on, then pick and choose the slides that were right to get the final product. It didn't hone the one work product to continually make it better. It went down weird rabbit holes: it completely took labels off of graphs and things like that. And why? I didn't even give it the instruction to change that. There was some garbage text in there, and I said, fix the garbage, and it fixed that, but it messed up three other things.
Chris Hutchins: So it's almost like you were sitting here with me into the wee hours of this morning, because I was wrestling and fighting with Claude, trying to get something straightened out from a formatting standpoint. I was having it compare to a brand that was already structured in one of my websites. It just decided that something else was more on brand, and it manufactured something completely different. I never prompted it to do any of that. So if people are afraid it's gonna take their job, just pay attention a little bit and start learning how to use it. You'll feel better about it.
Steve Labkoff: Well, that's the literacy piece, right? If people understand, number one, it's not gonna replace your job. In fact, it's gonna make your job different. You're gonna have to use critical thinking with AI a whole lot more than people realize. And I think people confuse AI with machine learning. The machine learning techniques that are used in artificial vision, in radiology and pathology, that's a different beast, right? It's been demonstrated and proven that those machine learning techniques are good, and in fact can be better than humans at recognizing tumors and recognizing pathological disease. That's different than working with a large language model, and people are just bundling it all together and assuming it's all one thing. It's not one thing. It's the same with the internet back in the 90s and early 2000s: you've got websites that are just brochure-ware, you've got websites that are search engines, you've got websites that are e-commerce, and each one is different, each one has a different reason for existing, and you need to look at each one with a different lens to figure that all out. I keep using this lens thing. One of the things you didn't mention about me is that I'm actually a professional photographer on the side.

Chris Hutchins: You don't have enough to do.

Steve Labkoff: Well, you know, when my kids left home, and I used to be their soccer coach and would do all those things with them, I needed something to do with the spare time I had from not coaching Little League soccer, right? So I got into photography in a big way. I compete at the national and international level and have won many awards at those levels. My favorite topic, though, is astrophotography.
I like to shoot pictures of galaxies and nebulae and comets and things like that.

Chris Hutchins: That's fascinating.
Chris Hutchins: When you're dealing with clients now, obviously there are some bigger challenges that you see, but I wonder if there are some things you're excited about, because you see organizations starting to get some things right and moving into some much more meaningful applications.

Steve Labkoff: So one of the clients that I haven't signed yet, and I'm not gonna say who they are, is an organization I've known for many years. They're an academic institution, and it's finally dawning on them that they can use AI, that their business is going to rely on AI. They're very much government funded, but they realize they can't continue with that. The last couple of years of the current administration have seen to it that relying on government grants is not a long-term sustainable way of thinking. They see that, and they're trying to bring their organization around to being a lot more focused on the data. I've been a proponent of the idea that life science companies have for generations believed that their work product, the final product, is a pill or an injection or some kind of therapy. And I look at it with a different lens. You absolutely never send the FDA a pill. What do you send the FDA? You send the FDA a hard drive full of data. You send them information about the pill, information about safety, information about clinical trial results. What you send them is data. So to me, treating data as a product is the most logical thing you can imagine. What I've been seeing lately is that some companies are starting to understand that in order to survive the next 15 to 20 years, if they haven't been using their data or thinking about their data in that framework, they are gonna be like the locomotive engine in 1940. That is precisely what's gonna happen. They will be the metaphorical locomotive when jet engines have come around and jets are starting to arrive. They're gonna keep doing their business as if it's gonna take 27 hours to go from New York to Chicago on the Zephyr, when it takes an hour and a half on a jet. That entire game change hasn't sunk in yet.
And those that see the data as that jet engine, or at least the jet fuel, are the ones who will succeed, if they realize it. There are companies out there like Tempus, a precision medicine company started by a guy named Eric Lefkofsky. Tempus is a firm that has not just doubled down but quadrupled down, exponentially doubled down, on the value of data. In their case, it's genomic data and proteomic data. They are one of the leading organizations producing high-level, high-quality data in that domain, and that data is the fuel that will help us find the answers to new diseases, find new drug targets, find new therapies. It's companies like them, like Foundation Medicine, and some of the other bigger firms, but the data is the product. And largely, from my experience, big pharma has not gotten that yet. Big pharma realizes that data is important, but they don't treat it with the same degree of reverence as they do the actual pill. Until they do, upstart companies are going to pop up that treat data the way I'm describing, and those companies are going to find seven to twelve times the number of new targets. They're going to find that they can recruit patients more easily for their trials. They're going to find that their inclusion and exclusion criteria are better honed, because when you think about it with an eye toward getting the data part right, all of a sudden it changes everything about how you go from A to B in the development of a new drug or new compound.
Chris Hutchins: This is an interesting space to be in, because organizations have, for the longest time, been very comfortable spending money on technology while shortchanging the data curation, making sure the data is in a usable state. And with the rapid growth of organizations onboarding new healthcare companies, whether it's hospitals or clinics or whatever, there's unfortunately not a whole lot of funding left over for the work that people like you and I have to do to clean things up, when it's really about the data.
Steve Labkoff: Even the way you just said that, "people like you and I have to clean up," makes us sound like we're janitors. And at the end of the day, we're not janitors. I think of us as data chemists. When you purify a compound in chemistry, you're trying to take away all the impurities, to get down to the crystalline truth of the molecule you're trying to synthesize. Similarly, with data, you want to use the same approach. You want to get down to a crystalline version of your data: make sure it's clean, make sure nothing is missing, make sure it's truthful. There are the V's of big data, and people use that as a catchphrase, but what they don't realize is that the catchphrase exists because this is important stuff. The data is the jet fuel for the jet engine, which is the AI model. And if you don't have good data, you're going to get crappy responses out of your model.
Chris Hutchins: This goes right into what I suspect caused you to come up with the name for your consulting firm. Ultimately, what you're dealing with on your podcast is the sheer volume of data being generated, compounding constantly. I don't even know how fast it's doubling in size now. There was a period when all the data in human history was doubling every 24 months. I have no idea what that pace is now, but it's probably exponentially faster. I'd love to hear what's really behind the name, Luminant Consulting, and then maybe talk about how you're dealing with all this volume of data. It's interesting, because you used some terms on your podcast that I'm partial to. It's really about signal and noise. We talk about that all day on this show.
Steve Labkoff: As I mentioned earlier, one of my hobbies is astrophotography. It turns out that when you go to take a picture of a nebula, you look up in the night sky, and do you ever see a nebula? Have you ever seen a big red blotch in the night sky? Probably not, right? You never do. But I promise you they're there. In the summertime sky, you can get a glimmer of it when, from a very dark place, you see the Milky Way. It turns out the night sky is loaded, and I mean loaded, with things you can see if you know how to look for them. You have to separate the signal from the noise. I've spent the last seven or eight years methodically imaging almost every major nebula and every major galaxy in the northern hemisphere from Westport, Connecticut, where there's an observatory I'm a member of, and I get to use their gear. Luminant comes in because our icon is that of a star. I want to be able to see the stars, to see how they come away from the background. What we're also doing with Luminant is forming constellations. All the different projects we take on, I don't take them on because I need to. I'm at a point in my life where I'm doing this because I really have passion for it, as you can probably tell. Each one feels like part of a constellation of different pieces, and as you put the stars together and see the patterns, those become your navigation points. They give you a way to navigate through the pathway of what's out there. So that's why the Luminant icon is a blazing star. It's funny, I told my wife just this morning, I think it's Betelgeuse, because it kind of looks orange like Betelgeuse. Betelgeuse is in the upper left side of Orion, a very bright star, and it's orange.
We'll say that my logo is Betelgeuse.
Chris Hutchins: I love it. That clearly ties in, because I think I've seen this on some of your profiles too: finding the signal through the noise, getting the right things, amplifying them. Something that was said to me once by a chief data and analytics officer, I think at UPMC, was that the number one role in her org was actually a journalist, which I thought was interesting, and she made a really good point with it. It's not about all the things that are going to show up in the Wall Street Journal; it's the things that are going to inform decisions that have to be made. That's how CEOs and executives actually use that kind of data. There's an art to knowing what's important and what's not. I'd love to hear a little about how you're choosing the direction you're going, not only with who you're consulting for, but also on your podcast. Obviously you're talking to some really influential people doing things across a wide variety of spaces, all connected to the things you're so passionate about.
Steve Labkoff: So, because I've been in informatics officially since 1993, and I was kind of doing it in medical school too, I've acquired quite a Rolodex of contacts along the way. Many of those contacts in the informatics world have filtered into businesses, some into academia, some into PE firms or venture capital firms; they're all over the place. The podcast is based on trying to understand which of those souls are doing the most important work. And to your point, we've had some very good fortune. We've had some very high-level executives who have actually sought us out. We recently had a senior executive named Matt Trupo from Sanofi, who came and did a two-part episode with us talking about what Sanofi is doing with AI. For those of you who are interested, this is one of our best episodes. He was so open and so informative, I couldn't believe a pharma company would be that willing to open the kimono and talk in detail about what they're doing. But he was, and they did, and it's awesome. We're also getting other folks, medical leaders. We had Bob Wachter, the chief of medicine at UCSF. It's a thousand-person internal medicine department, one of the biggest in the country, and he got on our podcast. We have a member of the federal government coming on in a few weeks, Jeff Smith, who's in the Office of the National Coordinator, and he'll be talking about U.S. policy around AI, how it's shaping up in Washington, how it will affect everybody going forward, and the implications of things happening from the 21st Century Cures Act. We choose our guests to try to find those who have the highest impact, those who have the best stories to tell, and frankly, those who tell good stories.
It's not enough for somebody to have done a good thing; they need to be able to express themselves in a way that is compelling. We're going to get Zak Kohane in a few weeks. Zak is the editor-in-chief of New England Journal of Medicine AI, and he'll be on the podcast; we're going to record him at the end of May. And I'm really fortunate that, because of my connections over the years, when I knock on somebody's door, they know me and they know my reputation. And it's not just me, by the way. I have a partner, Leon Rozenblit, and Leon is like my right arm on this project. He's an amazing guy, and I hope you bring him on your podcast. He's just a spectacular partner and a good friend. Between the two of us, we've been seeking out the cream, making sure we get that signal out of the podcast and get the best of the best into our discussions.
Chris Hutchins: Yeah, I love that. That was one of the reasons I wanted to have you on my show. You're a voice that needs to be heard, and it's really important that people hear from experts, not hobbyists; people who are doing the research and doing the work to make sure we're taking things in the right direction. I'm curious: across all the conversations you've had with your guests, and your own observations, are there patterns or warning signs you wish people would pay attention to that maybe they're not?
Steve Labkoff: One of the warning signs I'm seeing, or that I'm concerned about, number one, and we mentioned it, is the lack of AI literacy: people not understanding how to use these tools in a way that is intelligent. It's not a new interface for Google. It's not a new version of Wikipedia that just has all known information in one conversational place. These are tools that can connect dots in ways people simply don't get just yet. I have friends, very close friends, who are trying to use it to help them diagnose diseases they're dealing with, and the problem I have with that is that it can do that. But here's one of my biggest concerns of all, actually. There's been a trend over the last, call it ten years, to quote-unquote democratize data, democratize information systems. I had a very hard time understanding that term when I first heard it, what it meant to democratize data. I finally understand it now. What I believe it means is that you give lay people access to information so they can do their own thinking, do their own work, and draw their own conclusions, so they're not dependent on experts; they can do it themselves. And therein lies one of my biggest, biggest concerns. While I know there are citizen scientists out there, and people who have trained in other domains and can do this well, what I'm really afraid of is people who don't have that training or context working with it and drawing conclusions that might be completely 180 degrees wrong. So I'll give you an example. A friend of mine was recently diagnosed with a rare disease. He had some lab work done, and he called me on the phone and said, "Hey, I know what I have. I have celiac disease."
I said, "What do you mean?" "Well, here's the blood work." When I looked at the blood work with him, and I'm a physician, right? I've been trained, I've seen patients for ten years. I said, "What makes you think this shows you have celiac disease?" Because all the labs were negative. He said, "Oh no, no, no, look here." And I said, "No, you're misreading it. It doesn't say what you think it says." It actually did support something else that he had, but all it was was confirmatory of that other problem. So my point here is that if people look at data without critical context, whether it's results they get from Labcorp or their own medical records, they run the risk of making really, really bad life decisions. And I realize it sounds a little snobby for me, as an expert, to say you need the experts, but honest to God, I really do think you need the experts. I don't think the general lay public has the training or understanding, and I don't know that ChatGPT or Claude can take the place of that. Maybe it can, maybe it will. I don't know that it can today. Even experts who use these tools today find flaws in them, and the tools are so convincing that they seem correct. These tools come at you and stroke your ego. They tell you, "Oh, that's a great idea. You're thinking about this in exactly the right way." Unless you use these tools with a very critical eye, you can be led down a garden path and find yourself in a world of hurt. And that's one of my biggest concerns here.
Chris Hutchins: I can tell you from my own experience that it's been very, very difficult to push back on the whole concept of democratizing the data, particularly when I was working at some health systems, because people had this great technology. They could move data quickly, they could do classification and all this other stuff, and they just wanted to move at Mach 10. Oh my gosh, we didn't even let the people on my own data team produce something that doesn't get passed by somebody with a clinical eye. If it's clinical data, it requires expertise beyond just being a technician who happens to be able to read. I'm not going to be the guy who steps in to answer clinical questions. I've been in healthcare 30 years, but there's no honorary MD; no such thing has any meaning whatsoever. You can't have the context. And this is where it bugs me, to your point about how people are using Claude or ChatGPT or whatever: they won't even know what's relevant in terms of context, things that you would know. Just being able to look at a patient sitting in front of you, their body language is telling you something they're not saying.
Steve Labkoff: That's exactly right. And this is another thing. When I started my role at BMS, I had 50 people on my team, and I asked them, how many of you have actually set foot in a clinical trial? Have you ever been in one, watched one, or participated in one? Because this group's job was to interpret data from clinical trials. Now, my informatics training stems from having ten years of clinical experience on top of the next 30-plus years in informatics, so I can look at things with a lens that sees both sides of that coin. If you don't have that lens and you just look at the data without a clinical perspective, you're going to miss stuff. And this is the democratization problem I see. When I found out that only three people out of 50 had ever been involved with a clinical trial, or watched blood drawn for one, or been in a place where patients were being interviewed by a trial manager, I thought, I don't know how you do a job like data analyst without that kind of context. I have no idea. So I set up a program and sent those folks out to go get that context. I felt it was that important, and I think other organizations should think about that as well, that clinical context. Beth Israel's Division of Clinical Informatics runs a fellowship for executives who are already doing their day jobs and want to learn about the clinical side of things. I think it's about a three-month program. It's not cheap, but it helps executives get that kind of clinical context. If you're interested, you can contact Yuri Quintana at the Division of Clinical Informatics, and he has slots. There aren't that many, but there are slots, and you can get that.
It's not a substitute for your MD, your RN, your veterinary degree, or your dental degree, but it gives you at least a taste, where most of these folks never had that before.
Chris Hutchins: Yeah. This is just too important. We've got to make sure we're getting it right. Good intentions are fine, but you need qualified people making calls about critical decisions, about the usefulness of the research and its validity. We can't get this wrong. We just can't.
Steve Labkoff: Oh, but we are. We're going to get it wrong. Don't say we can't, because we will get it wrong. The old adage from medical school is that 50% of the things they teach you in medical school are going to be proven wrong; the problem is that when you're learning it, you don't know which 50% is wrong. So it's inevitable we're going to get it wrong. We have to have enough tolerance and enough wisdom to know, number one, that it's probably okay to get some of it wrong, and it's essential, because you only learn when you get something wrong, right? You don't learn much if you get everything right. When the space shuttle exploded, that probably provided an enormous amount of learning for NASA, so that they would never again launch on an icy day, when the O-rings would stiffen and fail to seal.
Chris Hutchins: We're talking about something that's so critical. We're talking about an evolving science, the practice of medicine, and it's hugely important. Even for trained clinicians with 30 years of experience, it's still an evolving science, and there are new learnings, more information, more insight coming constantly. Sadly, we're also discovering new diseases, because I think we're probably doing things that are causing our own problems too, but that's a whole other issue. The important thing, from where I'm sitting, is that voices like yours need to be heard. They need to be out in front, being the ones listened to. We have to make sure organizations are hearing this and reaching out to somebody like you to understand where they actually are in their journey: are they prepared, do they have the right pieces in place, do they have the right programs for literacy, all the things you've been talking about. I think it's just hugely important. The biggest swing and miss we could have is not paying attention to voices like yours right now.
Steve Labkoff: Well, not to be too blatantly self-serving, but if people want to have a longer conversation or want to engage Luminant on those kinds of programs or projects, we are selective in what we take. We don't take everything that comes down the pike, but you can reach out to me. The email is simple: Steve at luminantconsulting.com, and Luminant is L-U-M-I-N-A-N-T. Send me an email and we'll have a conversation. The sweet spot for us is looking at the overall strategy and how to implement it. We're not a data science shop; we're not going to do projects that revolve around analyzing the data, although we can. That's not where we find our sweet spot. Our sweet spot is helping organizations really put together programs that will breed sustainability, cut through the noise, and get right down to pure signal.
Chris Hutchins: That's fantastic. And for the listeners, you'll find plenty of detail in the show notes on how to get in touch with Dr. Labkoff. Dr. Labkoff, this has been amazing for me. I can't tell you. I hope the listeners can appreciate it the way I did. I learned a lot. You've had a very impressive career, and there's so much great work ahead of you. You've got the passion to do it, and I can't wait to stay in touch and find ways I can be supportive of what you're doing. I just know there's a lot of great stuff coming, and I can't thank you enough for coming on the show. This has been amazing. Thank you.
Steve Labkoff: Oh, you're quite welcome, and I really appreciate having the time to do this with you. I hope it's helpful to folks out there. And by the way, if you do find it helpful, pay it forward. That's always my motto: pay it forward, help folks who are earlier in their careers, give them a leg up, because we're not always going to be here. We need to make sure the next generation is getting tuned up to do this thoughtfully. So pay it forward.
Chris Hutchins: We get along well. I think we're thinking the same way at this point: we've got this horizon, and we want to make sure we're investing in the next generation, making sure they fall in love with the things we've been involved with, so they'll carry it forward and we can hand it to the generation after them. Absolutely.
Steve Labkoff: Well, thank you so much for having me. I really appreciate the opportunity.
Chris Hutchins: That's it for this episode of the Signal Room. If today's conversation sparks something in you, an idea, a challenge, or a perspective worth amplifying, I'd love to hear from you. Message me on LinkedIn or visit signalroompodcast.com to explore being a guest on an upcoming episode.