ChatGPT has become the AI that is impossible to ignore. As I wrote in my first post a little more than a year ago, I would be neither a cheerleader nor a detractor, but rather a curious observer. So here we are.
My main concern with respect to many technologies is the dilution of our humanity. In response to all the posts I was seeing where people were asking ChatGPT to create content, I wrote on LinkedIn that the value of your voice, even more so with your face attached to it, would increase. The more AI-generated content is out there, the more you will stand out, not to mention the benefits of human interaction.
AI is part of the game, and while some will use it instead of writing, most are using it to get a starting point, to brainstorm ideas when they don't have folks to bounce them off of.
Agree with putting a voice or visual on it, especially short snippets - you have to earn attention today, and while AI can sort of mimic your voice, it can't replace the inflections, passion, and unique humanity your voice delivers...
Declan and I had not chatted in quite some time, so he DM’d me to propose a conversation on this topic, which I thought (correctly) would be fun. This episode is a freestyle exploration of our thoughts about the possibilities - good and bad - for ChatGPT.
As he mentioned above, brainstorming and outlining may be great uses of ChatGPT. A quick first look at current thinking around a topic can get the juices flowing. And if the answers aren’t satisfying, we will slowly learn to ask better questions and provide better prompts.
I also wondered how future versions will be trained when so much of the information they learn from was generated by a previous version. Will they know when they’re looking at something that was AI-generated?
In this Wired article, Phil Libin, former CEO of Evernote, was said to be enthusiastic about AI in general but shared my concerns:
“All of these models are about to shit all over their own training data,” he says. “We’re about to be flooded with a tsunami of bullshit.” -Phil Libin
Listening back to our conversation, Declan mentioned some interesting uses for life science researchers in terms of improving both the inputs and the outputs (presentation) of their work. He also raised the question of whether an AI understands the audience. As humans, we tailor our message for the audience. Are there hints about the audience in the questions we ask? Maybe, but not always.
There is no shortage of opinions on this new tool. I found this podcast from Scott Galloway enlightening:
And in the next episode of Life Science Marketing Radio, I’ll talk to David Nathan, who used ChatGPT to write a children’s book in the carpool lane and generated an illustration through Midjourney, a generative AI for images.
If you are as interested in this episode and subject as I have been, I hope you’ll share your thoughts below!
Chat with Chris about content for your life science brand