cc: Life Science Podcast
AI from Winter to Spring

Machines have been helping us with physical work for a long time. Now, with AI, machines are helping with knowledge work. At the outset, AI worked with numbers. Now it works with images, words, and language. And I’m discovering it’s everywhere, with great potential still ahead.

Sidd Bhattacharya, Director of Healthcare Analytics and Artificial Intelligence at PwC, gave me a lesson on the history and current state of AI, with some thoughts about the future.

The reason why you see a huge explosion or like a buzz around AI now is the fact that the applications have moved into the realm that you and I can understand… language models. People talk. Everybody has a language that we talk in. Now we are having AI help us create that language, fine-tune that language. So that's the other thing. That's what makes me more comfortable that, you know, we're not going to go into an AI winter anymore, because the applications of AI have become so commonplace. Now everybody is using it without actually knowing about it. So that's what makes it so cool.

Winter came and went

What is this AI winter Sidd is talking about? The dark side of my brain pictured some Game of Thrones-like drama (haven’t seen it), but Sidd explained that the AI winter refers to a drop in funding when AI was, shall we say, going through a bit of a rough patch.

Two groups of academics had (surprise) different approaches to developing artificial intelligence. On one side of the debate were the symbolists. They took a rules-based approach to training machines to solve problems. Think of rules like the rules of arithmetic, e.g., 1+1=2. The rules of the operation determine the outcome: 1-1=0.

On the other side were the connectionists. Their approach was to let the system figure out the rules based on inputs and outputs. In a neural network, it’s more like 1 ? 1 = 2. What rule (?) is needed to make that work? For a given set of inputs and outputs, what do the rules need to be? Eventually, with training, the neural net knows what to do with the input and can “show you” the output, which is what we really want, right?
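To make the contrast concrete, here’s a minimal sketch in Python (my illustration, not from the episode): a symbolist “rule” for addition written down by hand, next to a connectionist model that infers the same rule purely from input/output examples.

```python
import numpy as np

# Symbolist: a human writes the rule explicitly.
def add_by_rule(a: float, b: float) -> float:
    return a + b  # 1 + 1 = 2 because the rule says so

# Connectionist: the "rule" is inferred from input/output examples.
# A one-layer linear model learns weights from pairs like (1, 1) -> 2.
rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(1000, 2))  # random input pairs
y = X.sum(axis=1)                         # observed outputs (their sums)

# Fit weights w so that X @ w ~= y (least squares).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

print(add_by_rule(1, 1))     # 2, by definition
print(np.array([1, 1]) @ w)  # ~2.0, because the weights converged
print(w)                     # the learned "rule": approximately [1. 1.]
```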

After a period of decent funding, the rules folks’ approach didn’t pan out completely, and the money spigot dried up. Eventually, though, Geoff Hinton at the University of Toronto showed that a neural net could match or outperform humans at recognizing images. Spring had arrived, and the next AI boom was on.

I’m getting less worried about this

Wouldn’t we like to know the rules a model has decided on? That might make us more comfortable and help us think about potential problems. For now, though, a neural network can’t always show you the rules it has come up with. This is the black box, or transparency, problem, about which I’ve had concerns, or at least curiosity. Sidd’s response: we don’t always know the exact mechanism of action of a drug, but we take it because it delivers the outcome we are looking for. That said, Sidd believes there will be more transparency around the inner workings of AI in the next couple of years, based on the volume of research going on. And in a future episode, you’ll hear about an AI technology where the rules are entirely transparent.
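For a feel of why the learned “rules” are hard to read off, here’s a toy sketch (again mine, not Sidd’s): a tiny neural network trained on the same addition task. It predicts well, but its parameters are just matrices of numbers, not anything a human can read as a rule.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = X.sum(axis=1, keepdims=True)

# Tiny one-hidden-layer network: 2 inputs -> 8 tanh units -> 1 output.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.2

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    pred = h @ W2 + b2
    err = pred - y                           # mean-squared-error gradient
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    gh = err @ W2.T * (1 - h**2)             # backprop through tanh
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(np.tanh(np.array([0.3, 0.4]) @ W1 + b1) @ W2 + b2)  # close to 0.7
print(W1)  # ...but these numbers are not a human-readable rule
```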

What about ethical AI?

Along with transparency, this is a concern people share. Bias in the models can result in discrimination or other undesirable outcomes. Sidd points out that the AI itself isn’t biased; bias is a result of the data used to train the model. In fact, the AI can sometimes reveal bias in the data when the outputs are analyzed. Companies are investing heavily in making sure their data is clean and fair. No one wants a surprise in this regard.

It's a real topic. People are talking about it and making investments in making sure the data sets are clean, making sure they have independent bodies reviewing the data, making sure that for every project that you're doing in AI, people think about the risks and document them from a bias and fairness point of view.

That's also very important, right? Like if you can document it and put it in writing and say, yes, I thought about this risk and here's how I'm going to mitigate it. That'd be helpful.

But looking at the data for bias isn’t always a simple task.

The issue is it's very complex. It's very difficult to understand it by just looking at the dataset. We are getting better at it. So that's the challenge: you'll have to understand the data, work with it, see the outputs, and test in the real world before you can launch it in production.
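One very simple check, as a sketch (my example, not a PwC method): compare the rate of positive outcomes across groups in the data or in a model’s outputs. A big gap doesn’t prove bias, but it tells you where to look, which is part of why real review needs the deeper testing Sidd describes.

```python
from collections import defaultdict

# Hypothetical records: (group, model_prediction) where 1 = approved.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, pred in records:
    totals[group] += 1
    positives[group] += pred

# Positive-prediction rate per group.
rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A large gap is a red flag worth investigating, not proof of unfairness.
gap = max(rates.values()) - min(rates.values())
print(f"demographic parity gap: {gap:.2f}")
```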

Life sci lagging but catching up

As we heard in a previous episode, life science and healthcare are lagging behind financial services in implementing AI, but they are catching up. We’ll always be more cautious when a patient is involved. COVID has been an accelerator, as it has been for everything. There is promise across the value chain, from drug discovery to regulatory (authoring documents?) to clinical trials and commercial. A few episodes further down the queue, we’ll hear some amazing stories of AI in life science and healthcare.

Things like helping with omni-channel marketing, helping medical call centers, helping with training for sales forces, helping augment some of your existing sales force with the right tools, recommendation engines... That would be a huge area of focus in the next few years. And throughout, you know, supply chain, manufacturing.

That's another big area. One cool story: I worked with a company that uses AI, or computer vision, to look at defects on their production line. So anytime there's a vial or there's a pill missing, you have a computer camera that alerts the person saying, “There's something wrong. You might want to come and check.”
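As a rough sketch of how that kind of system can work (my illustration using OpenCV, not the actual system Sidd describes; file names and thresholds are hypothetical): compare each camera frame against a reference image of a good tray and alert when too many pixels differ.

```python
import cv2  # assumes opencv-python is installed

# Reference image of a known-good tray, and the current camera frame.
reference = cv2.imread("good_tray.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)

# Pixel-wise difference, then threshold to keep only strong changes.
diff = cv2.absdiff(reference, frame)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# If too many pixels changed (e.g., a missing vial), raise an alert.
changed = cv2.countNonZero(mask)
if changed > 500:  # in practice, tuned per line and per camera
    print("There's something wrong. You might want to come and check.")
```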

Future trends

What does Sidd see for the near term future of artificial intelligence?

…over the next couple of years, there are two key trends I'll call out. One is in the data domain… People talk about data, and every time you start talking about AI, cloud, they talk about how data is messy. I see this problem getting solved with the use of something called synthetic data.

These synthetic datasets can be used to train and run AI models, avoiding issues of patient privacy, HIPAA compliance, etc. altogether.
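Here’s a deliberately naive sketch of the idea (mine, not Sidd’s): fit simple distributions to real measurements, then sample new, fake records from them, so the real records never leave the building. Production-grade synthetic data tools do much more, in particular preserving correlations between variables, which this toy version ignores.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend these are real (de-identified) patient measurements.
real_age = np.array([34, 51, 47, 62, 29, 55, 71, 44])
real_sbp = np.array([118, 135, 128, 142, 115, 138, 150, 126])  # systolic BP

# Fit simple marginal distributions and sample 1,000 fake patients.
synthetic_age = rng.normal(real_age.mean(), real_age.std(), size=1000)
synthetic_sbp = rng.normal(real_sbp.mean(), real_sbp.std(), size=1000)

# The synthetic rows are statistically similar but belong to no one.
print(synthetic_age[:3].round(1), synthetic_sbp[:3].round(1))
```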

The other trend on the horizon is using AI at scale.

…over the last few years, every life sciences company I've worked with has dabbled in AI. Some of them have done more, some of them less, but except for maybe a couple, there haven't been a lot of success stories of deploying an AI model, an AI product, at scale and getting benefits from it…

People are now at a stage where they feel comfortable. They understand the limitations of the tech. They understand what it can do, what it cannot do, how to manage people around it, and how to get over their anxiety. And the next step is to scale it up.

Scaling up requires getting started. In my short time learning about this topic, I keep hearing two rules for implementing AI:

  1. Define the business problem you are trying to solve.

  2. Get started, even if your data isn’t perfect. Because it never will be.

Obviously we should do our best to clean up the data we have. I was still curious about the risk of significant errors, even with synthetic datasets. Sidd went back to the analogy of employing an AI model as you would a human being. You wouldn’t turn a new employee loose the first week on the job. AI should be treated exactly the same way: train, test, and supervise until you are confident in its ability to do what you hired it to do.
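In code terms, that “new employee” discipline might look like a deployment gate plus a human-in-the-loop fallback, sketched below (my framing of the analogy, with made-up thresholds):

```python
def ready_to_deploy(holdout_accuracy: float, bar: float = 0.95) -> bool:
    """Only promote the model once it clears the bar on unseen data."""
    return holdout_accuracy >= bar

def handle(prediction: str, confidence: float, supervised: bool = True):
    # While the model is "on probation", low-confidence calls
    # are routed to a human reviewer instead of acted on.
    if supervised and confidence < 0.9:
        return "route to human review"
    return prediction

print(ready_to_deploy(0.91))              # False: keep training and testing
print(handle("defect", confidence=0.97))  # high confidence: act on it
print(handle("defect", confidence=0.62))  # low confidence: a human checks
```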

Connect with Sidd on LinkedIn
