Larry Dignan, Editor in Chief of Constellation Insights, sits down with Matt Lewis, Global Chief Artificial and Augmented Intelligence Officer at Inizio Medical, to discuss the intersection of life sciences, mental health, and AI. Lewis highlights the challenges of AI adoption, emphasizing trust, locus of control, and self-efficacy as critical human factors.
He notes that up to 80% of AI adoption success depends on these psychological considerations. Lewis also discusses the regulatory hurdles in life sciences and the potential of AI to transform industries, particularly in early disease detection and mental health support. Despite concerns, he expresses optimism about AI's future benefits, particularly in improving health outcomes.
Full Video Transcript: (Disclaimer: this transcript is not edited and may contain errors)
Hi, I'm Larry Dignan from Constellation Insights, and we're here with Matt Lewis. He's part of the Constellation Research AI 150. Hi, Matt. Thanks for joining us.

Hey, Larry, thanks so much for having me. It's a great honor to be here.

So you're playing at the intersection of life sciences, mental health, and AI, so I guess you want to explain your role and kind of how you're viewing things?

Yeah, sure. So it's a really interesting time. Generally, it's an interesting time to be alive and to be in this space. I'm currently serving as Global Chief Artificial and Augmented Intelligence Officer at Inizio Medical, and in that space I'm also the executive sponsor of Inizio Medical's mental health Employee Resource Group. Inizio Medical is a 1,000-person division of folks that kind of put our medical affairs proposition forward, and the mental health ERG is all the folks that are challenged with mental health and well-being issues, either themselves or, you know, their children or their parents as caregivers, and the rest. So my world is kind of a mix of: how do we leverage emerging technologies like generative artificial intelligence, machine learning, deep learning, NLP, and the rest to speed time to commercialization, but also, how do we support the mental health and well-being needs of our colleagues and our counterparts and the people with whom we work, so that they can live good lives and enjoy what they're doing?

So how do you see generative AI contributing to mental health? I mean, there are some looming things, like, you know, the therapist profession, where a lot of folks are aging out. There's cost, there's comfort, there's a bunch of things. How do you see generative AI playing in that space?
Yeah, I mean, it is one of these things where I think a lot of people recognize that there's kind of an imperative, technologically, to adopt and to put AI into their world. I went to a block party recently, and people at the pool were asking, like, how do I use ChatGPT to make my life better? So it's a real thing that actual people ask about. But in the boardroom and in the corporate corridors, people ask the same question, and they don't really know how to make sense of it. One of the early meetings that we had when I first stepped into the role as Chief AI Officer (and I'd been the Chief Data and Analytics Officer for years before that), one of the first questions someone asked me was, you know, are robots going to take my job? That's literally what they asked. And underlying that question is a question of fear, really, and anxiety, which are really mental health concerns. It's not really a question of whether they know the technology, whether they understand its merits and its benefits and features, but whether they feel safe and secure that their organization will support them in their learning journey as technology is adopted across the enterprise. And a lot of people that work in these environments really don't give proper consideration to the psychological or cognitive or affective concerns of knowledge workers in an environment that is rapidly changing.
They almost kind of think, oh, they're just talking, they're just saying things like "robots are going to take my job," but they don't really give it the proper consideration. And as it turns out, the research literature on the adoption of artificial intelligence suggests that up to 80% of successful adoption in the enterprise is due to what's called human factors. And most of the human factors literature is around psychological considerations of adoption. What we think about in that space is really how someone shows up to their role, and their role as a counterpart to AI. If they don't show up in a way that is intentional, if they don't show up in a way that is welcoming, if they don't try to collaborate with AI, lots of bad things happen, for themselves as individuals, as professionals, and for the companies with which they work. So really, at the core of it, it is a human factor, a kind of mental consideration, that determines whether AI adoption is successful or not. But a lot of people just kind of pooh-pooh it away and say, oh, you know, they're just talking nonsense.

What are some of those human factors? If you were to rank them, I know there's change management, there's culture, there's a bunch of things, but I guess,
what would you rank as, you know, the three human factors that people really need to think about?

Yeah, I think the first one that comes to mind, which is talked about a lot but has a number of sub-components to it, is definitely trust. And it's not necessarily just related to artificial intelligence, but to any emerging technology, and honestly, any decision that we as humans make ultimately has trust at the core. I mean, we're not going to hire a plumber to come in and fix the toilet in our bathroom if we don't trust that they're going to do a good job and that they're not going to put our family or ourselves at risk while they're in our home. It doesn't matter if it's a basic thing like that, or if we're going to use a service like ChatGPT that takes our data and potentially sends it off to the cloud and back to OpenAI in California. Fundamentally, if humans don't trust the service or the professional that's providing them value, they won't work with them. And there are so many aspects of how trust is moderated or mediated in a relationship, professionally or personally, that if you don't satisfy that, nothing really works. Another big consideration in the human factors environment is what's called locus of control, which is how the individual perceives themselves within the broader network in which they work or live.
So people that tend to consider themselves part of a broader system or a broader network or a community end up working better, actually, with generative AI than those that consider themselves to be, like, a lone wolf, if you will, or really calling all the shots. There's also some very interesting research that suggests that if you remind people of their spiritual and religious obligations immediately before prompting in generative AI, they're better at working with it than if they just go in blind. And the reason why is that if humans are reminded of the fact that they're not alone in the world and that there's a connection, either to nature or to the earth or to a deity, they're able to connect with generative AI and partner with it directly, whereas if they approach it without that prompt, no pun intended, they do so from a more hostile perspective and they don't try to collaborate. And the outputs suck; they're much worse than if they come at it from a position of vulnerability, if you will. So locus of control is probably the second biggest thing. And then I'd say the third biggest thing is what might be called, in the psychological literature and the learning literature, which is where I did most of my doctoral degree, self-efficacy: the intention to act, or the confidence that someone has in their ability to actually perform a task. And a lot of people, even very highly technical people, people that have medical degrees, PhDs, PharmDs, very technical people.
And these are the people with whom I work regularly, both across Inizio and in the life sciences ecosystem. They don't rate their own skills very highly with deep tech, with emerging tech, with generative artificial intelligence, with blockchain, VR, anything like that. So when they show up in those environments and are asked to take on the role of thought partner to AI, they don't do well, not because they're not capable, but largely because they're not confident. They are competent, but they're not confident, and as a result their work suffers, because they don't believe in themselves. And this shows up not just in people that are highly technical, but in our athletes and our Olympians. How many times did you watch the Olympics recently and hear Simone Biles talking about the twisties? The twisties are a self-efficacy thing writ large. People don't believe in what they can do, even though they're super competent; they just don't show up when it matters.

So does this mean, and I'm just thinking aloud here about how this affects the future of work,
that the people who thrive and adapt are going to have to have those three qualities, and sort of have that trust level and be willing to collaborate? And if so, what does that mean for the workforce, especially folks who are kind of the lone wolf and want to have control?

Yeah, I think there are a lot of things all happening at the same time, and it's hard to pull it apart and piece it back together. Part of it is that the world in which we are working now is not really set up for the type of transformative change that's happening as a result of generative AI. So we're working in 2024, but generative AI is making possible a 2031 type of work. And there are certain structures and processes and systems that would make 2031 work possible, but we just happen not to be living in that environment. For example, when you work with content, any content that originates digitally, it would be preferable to know the origin of that content, and to know, for example, when people view this interaction between you and I, that we're taping it live right now on Friday, September 6, at 11:26 a.m. Eastern, with its origin recorded directly at this point. But most people can't do that; when they watch it, they'll only see the asset itself. In 2031, almost guaranteed, every piece of media that emerges into the world will have some type of watermark or some type of transparency stamp on it, either from C2PA, which is on the media side, or the equivalent in healthcare and life sciences. And you can derive what's called provenance, and be able to say, okay, the origination of this was actually a prompt that someone made to a generative platform, and then a human edited it or annotated it or labeled it or curated it or did something to it, and then later it went back through a generative platform and then emerged into the ecosystem as the final deliverable. And you can see the whole chain, the logic chain, if you will.
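To make that chain concrete, here is a minimal sketch, in Python, of what such a provenance record might look like as data. It is purely illustrative: the class and field names are hypothetical, it is not the actual C2PA manifest format, and it only replays the prompt-to-deliverable chain described above.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class ProvenanceStep:
    # One link in the chain: an actor (human or model) acting on the asset.
    actor: str       # hypothetical label, e.g. "generative model" or "human editor"
    action: str      # e.g. "prompted", "generated draft", "edited", "published"
    timestamp: str   # when the step was recorded, in ISO-8601 form

@dataclass
class AssetProvenance:
    # A simplified, illustrative provenance record attached to one piece of content.
    asset_id: str
    steps: List[ProvenanceStep] = field(default_factory=list)

    def record(self, actor: str, action: str) -> None:
        # Append a new step, stamping it with the current UTC time.
        self.steps.append(
            ProvenanceStep(actor, action, datetime.now(timezone.utc).isoformat())
        )

    def history(self) -> str:
        # Replay the chain: prompt, generation, human edit, regeneration, final asset.
        return " -> ".join(f"{s.action} by {s.actor}" for s in self.steps)

# The chain Lewis describes, as an example:
record = AssetProvenance(asset_id="interview-2024-09-06")
record.record("human author", "prompted")
record.record("generative model", "generated draft")
record.record("human editor", "edited and labeled")
record.record("generative model", "revised")
record.record("human author", "published final deliverable")
print(record.history())

In a real C2PA-style system, each step would be cryptographically signed and bound to the asset itself; the point of the sketch is only that the chain can be inspected end to end.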
But since people don't have that, all they have is what existed in 2014 or 2004, so they tend to approach it with a healthy degree of skepticism, and they don't trust it, understandably. Our systems and our processes haven't caught up with our technology yet, but they will eventually, and when they do, the trust level will increase dramatically. And when that happens, you won't really have to ask people to do things that they're not ready for yet. For those of us that are deep in on AI and are in this world, we know that that world is coming; it just hasn't appeared yet.

For the other two areas, locus of control and self-efficacy, I think you're going to start seeing true platforms. I don't mean the consumer platforms like ChatGPT and Gemini, but true AI platforms that are being built right now, mostly in the startup ecosystem, that focus on things like: how do we encourage people in the real world, and also in our companies, to start showing up as their best selves, in ways that let them contribute fully in the workforce, contribute in ways that amplify the value of the work they're doing, and hopefully enjoy the work more? Because it's not fun to do work where you're not able to contribute the things that make you happy, where the things you really enjoy doing live in your personal time and the things you have to do at work are just work. That's not great; we've all been there.

And then, on the self-efficacy side, I think a lot of people feel like they can't do what's required of them because they haven't been trained, or their company hasn't given them the skills necessary to contribute in an equitable manner, when that's actually not necessary at present. You know, I've been in artificial intelligence for 15 years, and there was a time back in the late 2000s and early 2010s when you really needed a full team of 20, 30, 50, 100 PhDs in machine learning to build a single model and keep it running for a going period of time. It's just not like that now. If you want to see value from generative AI, all you really need to do is identify a pressing problem that exists in your life or at work, find a platform or application that can solve against that, and then use it long enough to either get really frustrated that it's not working well, or find something else that does work well, and then figure out the guardrails as to how to progress it forward. And that's really all it takes.
And if you don't have that type of experience, you really can't contribute in a world that is transformed by generative AI, and I think that's the gap between having the confidence to play in this space and not having it. I've seen it firsthand in my teams, and I've seen it in client environments. It's really about building experience and growing with the technology, if you will, as opposed to running away from it, as that first person who asked about the robots did a couple of years ago. And I think we're going to see a lot more of that in the days to come.

So in terms of enterprises, a lot of vendors all talk about the trust of generative AI and all that. Usually they're talking about corporate data, or things like that, or keeping private data secure. But really, the whole trust thing needs to be solved before any of this other stuff gets going, correct?

Yeah, yeah, for sure. And I think there are a number of aspects to trust that people either recognize but don't spend the time to really fix, if you will, or that they recognize are important but think are going to be someone else's problem down the road: an issue that our successors will inherit later, that these aren't problems the current leadership will have to deal with, that they're problems that will happen three to five years from now. But it's just not true.
These are problems of today. For example, when I speak, when I do keynotes and at conferences, I'll use examples of some of the activist investors that are trying to claim that existing companies today are not actively including generative AI in their plans and their current marketing activities. And when you look at companies like Disney, or the large blue-chip companies out there that have not been proactive in adopting generative artificial intelligence, they indicate that in some of these types of organizations, generative AI could potentially transform the entire way that they communicate with their customers, and turn what is essentially a very anachronistic model into more of an engagement model with their customers. And really what's at issue is not so much the business model, but how the leadership considers what their business is, and how customers trust that organization for the value they accrue: whether they come to Disney, for example, just for a theme park, or for a streaming platform, or for a broader experience that is leveraged on the insights of all the activities that undergird the whole corpus that Disney supports.
And getting to that latter consideration requires a real shift in how current leadership thinks about what it is really in business to do, and also how it communicates with all its stakeholders. The failure to do that in the near term is encouraging a number of startups out in the generative AI ecosystem to try to solve that same problem of generative AI experiences, using content for family audiences, if you will, and to do it on a shoestring budget and pull those eyeballs away from legacy enterprises. So it really is an actual issue today, not an issue that will exist three or five years from now. And it is fixable and solvable, but it's not fixable and solvable just by throwing more software across the enterprise gates. It's fixable by taking a hard look at who the organization is and who it wants to be, and how it can speak, you know, vulnerably about where it can create value in the ecosystem and how it can do that, given what it's already done historically. And I think if a group isn't willing to do that, they're going to face threats externally from organizations and entrepreneurs that are willing to do it themselves.

So in terms of life sciences, we've talked a lot about the various challenges with trust and the psychology of gen AI and working with it. Is life sciences a harder nut to crack, or is it about on par with other industries?
Yeah, I mean, it isn't harder in the sense that it's not possible. I think it's harder for the same reasons that all the regulated industries are challenging, because ultimately, when you're interacting with consumers, you have to first pass through the regulatory considerations of, at least in this country, the Food and Drug Administration, and its counterparts in Europe and in Asia and other markets. So when you're actually talking about getting a drug or a device or a digital therapeutic across to someone that has a health condition, you can't directly change what you're doing without first getting the say-so of a regulatory body, which is different, say, than if you change the flavor of Coke and then want to put it on shelves. It's a lot easier to do that than it is to adjust a drug that your mother is taking; it's just much harder. That's not to say, though, that a lot of the so-called back-office or operational aspects of communicating and commercializing novel science aren't already being transformed by gen AI, because they are. And historically, AI has always had a very strong foundation in life sciences and in healthcare as well, especially in areas like research and development and post-launch marketing, and a number of other areas that are not as close to the regulatory schema. Because they're not held to those same considerations, there's a lot there that is either full of friction, that just doesn't work the way it should, or where there are places to make things more efficient or effective, or to ensure that people doing the work find the work engaging, so they stay around long enough to bring a novel intervention to market. So there are a tremendous number of use cases. You know, we've partnered on the Inizio side with McKinsey.
I've worked with other consultants as well. There are hundreds of use cases within life sciences that are amenable to intervention from a generative AI perspective. The challenge is not finding things to do with gen AI; we could be here all day thinking about things that could be done. The challenge is really aligning those to the priorities of your specific business, from a resource, time, people, and financial perspective, as well as finding the people, internally and externally, that are committed to seeing it through and that, from a human factors standpoint, actually want to do it and want to see the outcomes benefit the organization. Because if they don't want to do it, and you just build the thing, build the solution or the platform, and then throw it across the fence, they will actively resist it, and it won't benefit the organization. You'll get the results that people always talk about, that 90% of AI projects fail. That is a true statement, but probably 90% of those failures are human-factor driven; it's not the software or the model. The models are great now (they weren't always), but it's largely because the people that are adopting them have no interest in seeing them work, and they do everything possible to sabotage them once they're actually in their world.

I mean, it's almost comical. Gen AI is this new whiz-bang technology, and the models are really cool and all that, but at the end of the day, like any IT project, it totally depends on the human factors and whether people are into it or not. Whether it's data analytics, ERP, pick your acronym: if the troops resist it, it's not going to work.

Yeah. I mean, there used to be this acronym; you and I are probably old enough to remember this.
I don't know if everyone viewing this will remember it. I've had a beard for 26 years, but it didn't always have gray in it, and I used to have all the hair on my head. But back in the late 90s and early 2000s, there used to be this acronym in the space called PEBCAK: the problem exists between the chair and the keyboard. You'd get all these issues where people couldn't figure out how to use email, they couldn't run filters or tag messages, and there was no problem with Lotus Notes or with Outlook; the problem was the person using the software, and it was almost always the person that was the problem. But you couldn't say to the Senior Vice President, "the issue was you," so they used this PEBCAK acronym to describe the problem. That's what we'd now call human factors; it's really where human factors research came from. It is the person that's the problem, but rather than blame the person, you need to really think about their motivation, or their mental health, or their psychology, or their interests, or their training, or, as you said earlier, strategic enablement. People have a lot of experience and expertise when we ask them to do things. And in life sciences especially, and in a lot of the economy today, it's a very difficult time. We've just come out of the pandemic, which was a lot of difficulty for a lot of people. A lot of organizations are restructuring; it's a difficult economic climate. Those two changes alone are more than a lot of people can handle from a change perspective. And then you're throwing generative AI on top of them and saying, hey, the whole way you've done knowledge work your whole career is shifting, from you using software to the software talking to you and telling you what you should do. And a lot of people are like, I can't handle that. So it's a realistic, understandable thing; this is why it is the way it is. But rather than cast blame on the people involved, the human factors community is really trying to make sense of why it is that way, and to try to stand up a solution that makes it better, because the generative AI wave is not slowing down. It's just going to continue to wash across our shores. And if we can help people figure out whether they need to grab a surfboard and ride the wave, or run for the hills away from the wave, I don't know, but just telling them that they shouldn't stand there and get hit by the wave is not helpful, right?
All right. Is there anything I didn't ask that I should have, or any final points you want to make?

I'll just say that there is a lot of concern these days about generative AI, and I think it's definitely appropriate for people to be asking good questions and to be thoughtful and considerate about what is at risk, and about the potential dangers and concerns in the space. But I'll also say that I think there's a tremendous opportunity for good as a result of generative AI. I've honestly never been more excited about our collective future as a result of generative AI than about any technology I've worked with in the 27 years I've been in life sciences. There are true examples of what generative AI can do to help identify diseases early and to help people that are suffering improve their actual health today. And even if all the AI research stopped today, and we only had access to the models that exist right now and nothing ever improved, which won't happen, we could do so much good for humanity just with what's been discovered in the last two years that it would be a major benefit for society. But that won't happen. What will happen, more likely, is that over the next two, three, five years we'll see so much benefit for society, hopefully for human health, for mental health, and for all of us as people, that the balance of the risks and the benefits will kind of even themselves out, I think, and hopefully we'll start seeing why some of us are so passionate about the space.

All right. Thanks for joining us.

Thank you so much.