Now I know how Dorothy felt when Glinda told her that she already had what she needed to get back to Kansas. Let me explain.
For the last few weeks I’ve been collecting articles, videos and LinkedIn posts to guide my learning about artificial intelligence, machine learning, blockchain and anything else that strikes my curiosity. Before I interview experts about how these will impact life science and then write about it, I need a basic understanding so I can ask better questions.
This morning while having coffee, I decided to dive into AI (via YouTube) to see what I could learn about what it really is and how it works. The first video, from PBS, made clear it’s not the robots taking over (yet?). I did notice, however, that there are a lot of videos predicting doom around that idea. I have not watched them.
From that video:
A machine is said to have artificial intelligence if it can interpret data, potentially learn from that data and use that knowledge to achieve specific goals.
From the AI used by online retailers to recommend products based on our past purchases, to the algorithms that decide which articles or videos to show us on social media, AI is constantly shaping our digital experiences. It also wrote that last sentence. It’s not surprising that the AI I knew I was using, but took for granted, is everywhere.
My podcasts are transcribed by AI (Grade: B+). That same software can turn typed text into my voice if I’m too lazy to record an intro for the episode (Grade: D+).
My photo editing software can recognize the sky and replace a boring sunset with something more spectacular (A+, but feels like cheating). The same trick replaces the background on your Zoom calls. What is shocking about that is that it never seemed shocking. (Of course it can do that!) When a technology seems invisible, that’s how you know it’s working. It might also be a cause for concern.
In looking at some of the articles and posts I mentioned above, the term “ethical AI” appears fairly often (a good sign). It seems the dark side of AI isn’t so much the robot revolution as it is training our algorithms on bad data: data that is biased in some way, whether intentionally or not.
Should people be treated fairly when applying for a loan? Yes. Do we want doctors to make accurate diagnoses? Also yes. Algorithms are only as good as the data they are trained on, and if that data is biased, the algorithms will be too.
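To make that concrete, here’s a deliberately crude sketch (my own invented numbers, not from any real lender) of a “model” that simply learns historical approval rates by neighborhood. If the past decisions were skewed, the learned rule inherits that skew:

```python
from collections import defaultdict

# Crude sketch with invented data: a "model" that just learns historical
# loan-approval rates per neighborhood. If past decisions were skewed,
# the learned rule reproduces that skew exactly.
history = [
    ("north", True), ("north", True), ("north", True), ("north", False),
    ("south", False), ("south", False), ("south", False), ("south", True),
]

rates = defaultdict(lambda: [0, 0])  # neighborhood -> [approved, total]
for neighborhood, approved in history:
    rates[neighborhood][0] += approved
    rates[neighborhood][1] += 1

def predict(neighborhood):
    """Approve if applicants from this neighborhood were mostly approved before."""
    approved, total = rates[neighborhood]
    return approved / total >= 0.5

print(predict("north"))  # True  -- inherits the historical skew
print(predict("south"))  # False -- same skew, now automated
```

Nothing in the code is malicious; the bias rides in entirely on the data.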
Here is an example that I found compelling. Google’s Smart Compose will help you write an email, like autofill on your phone or in your browser, suggesting words that you can accept or reject by continuing to type.
That algorithm has been trained (I imagine by looking at gazillions of emails) to suggest the next word. Supposedly it might eventually learn your personal writing style.
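To give a feel for the idea, here’s a toy next-word suggester that just counts which word most often follows each word in a handful of made-up emails. (This is a toy illustration under my own assumptions, not how Smart Compose actually works.)

```python
from collections import Counter, defaultdict

# Toy next-word suggester: count which word follows each word in a
# (made-up) pile of emails, then suggest the most common follower.
corpus = [
    "thanks for the update",
    "thanks for the quick reply",
    "thanks for your help",
    "looking forward to the meeting",
]

followers = defaultdict(Counter)
for email in corpus:
    words = email.split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1

def suggest(word):
    """Return the word most often seen after `word`, if any."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("thanks"))  # -> "for"
print(suggest("for"))     # -> "the"
```

A real system uses far more sophisticated models trained on vastly more text, but the accept-or-keep-typing interaction is the same.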
Had we not selected the machine’s suggestion, might we have said something in a different way, something that included our unique personality at that instant in time?
Will receiving an obviously autocompleted message leave the person on the receiving end feeling they weren’t worth the time it takes to write an authentic response from scratch? Will they think: “Honestly, how much time does it take to type out a sentence?”
That might simply be insulting. But:
When using autocomplete, how do we ensure we’re not being manipulated and that bias from the training data is not influencing the suggestions?
When we default to machine-generated suggestions instead of human-generated responses, one can imagine how this is a step toward the dystopian world we’d all like to avoid. This is a potentially dark, dangerous road, and we need to consider the consequences before we overzealously accept the next suggestion bubble that pops up with: “Thanks!”
As you saw in my last post, I want to preserve as much of what is human as possible, while taking advantage of any help I can get to do things better and faster. Between coffee and lunch today, I learned a lot, including the basics of how some machine learning works. (Apparently, I have reached the 1980s, or maybe the ’90s!)
I’ll share more about that in a future post. The good news is that it is understandable, at least at a high level. I won’t say I love math, but I did enjoy seeing how the calculus I haven’t used since college helps make the things I use every day better.
BTW, this article was written with the help of AI. After I provided a topic, software gave me an outline and wrote several paragraphs based on that outline. I discarded almost all of it, either because I wanted to use my own examples, because the outline items weren’t aligned with my intent, or because I had no idea if what it was giving me was accurate. In fact, the software produced a paragraph calling out that risk:
Lack of transparency: One of the problems with AI is that the algorithms behind it are often opaque, meaning that people do not know how they work.
I believe this software works by having read lots of content from who knows where, hence my concern about accuracy and possibly even plagiarism.
On the positive side, however, it did help me overcome “the tyranny of the blank page.” I was able to get started quickly, think about a structure and consider some points that I might not have otherwise.
In case you want to see more of what AI-written text looks like, here is a sample:
What is artificial intelligence (AI)? AI is a term that refers to the ability of machines to carry out tasks that normally require human intelligence, such as seeing, hearing, and speaking. It includes tasks such as recognizing people and objects, understanding natural language, and navigating complex environments. AI has been around for decades, but has only recently become widely used due to advances in technology.
What are some ways AI is embedded in our daily lives? AI is embedded in many aspects of people’s daily activities. For example, the face recognition technology that can be found on Facebook or LinkedIn software that automatically tags pictures of your friends and co-workers is an example of artificial intelligence. Personalized advertisements, news feeds, and…
My goal here is to educate and build a community interested in understanding AI for life science. I want to encourage discussion on this platform. Let me know in the comments if I got anything wrong. If I got it right, please consider subscribing and sharing.
One of the most difficult things to untangle about AI is that the underlying complexity of the algorithms spans a broad range. For product recommendations (your example), the most basic predictive analysis would simply take Product X, stack-rank the other products most commonly purchased together with Product X, and "recommend" those to the customer. The term then gets diluted by groups who want to sell recommendation software and use "AI" as an attention-grabber. This works because the advantages of more complex algorithms (better recommendations) are sometimes imperceptible to the end user when compared to simpler approaches. The commercial software is a black box, so we can't compare. "AI" has become a great term for trend marketing, but one that's increasingly less useful as a differentiator for the people buying AI solutions. Back in the '90s, when we figured out that we should market diapers and beer together, it was just called "sales analysis".
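For what it's worth, that basic "stack-rank the co-purchases" approach fits in a few lines. Here's a minimal sketch with made-up basket data (a toy illustration, not anyone's production recommender):

```python
from collections import Counter
from itertools import combinations

# Minimal sketch of "stack-rank the co-purchases" with made-up baskets.
baskets = [
    {"diapers", "beer", "chips"},
    {"diapers", "beer", "wipes"},
    {"diapers", "wipes"},
    {"beer", "chips"},
]

co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(product, n=2):
    """Rank other products by how often they were bought with `product`."""
    ranked = Counter({b: c for (a, b), c in co_counts.items() if a == product})
    return [item for item, _ in ranked.most_common(n)]

print(recommend("diapers"))  # -> ['beer', 'wipes'] (tied counts; order may vary)
```

Whether counting co-occurrences like this deserves the label "AI" is, as noted above, mostly a marketing question.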