What is the AI equivalent of a moon landing?
I began to wonder about this yesterday based on a post by David Kincaid on LinkedIn. He shared an article from the Brookings Institution about what the US needs to do to dominate in AI.
The article looked at a number of factors and rated countries on their progress toward their national AI strategies. Progress was mapped on two axes, technology and people. The US, while ranking high for its technology infrastructure, lags well behind in terms of being prepared with human capital. In other words, the U.S. isn’t producing enough STEM graduates to meet those strategic aims. From another article in the series:
“The U.S., while a leader in the technology dimension, particularly in the sub-dimensions of investments and patents, ranks a relatively dismal 15th place after such countries as Russia, Portugal, and Sweden in the people dimension. This is especially clear in the sub-dimension of STEM graduates, where it ranks near the bottom. While the vast U.S. spending advantage has given it an early lead in the technology dimensions, we suspect that the overall lack of STEM-qualified individuals is likely to significantly constrain the U.S. in achieving its strategic goals in the future.”
If you know me, I’m not a “let’s dominate!” kind of guy, but I do recognize the need to be competitive. One of the takeaways from the first article is that the US should adopt the mentality of the space race that gripped the country after the Soviet Union launched Sputnik.
Here is the difference. The urgency of the space race was clearer: rockets were new and an easily imagined threat. The invisibility of AI makes it more difficult for the public to assess our relative strength.
The space race also had a visible goal (and some not so visible, I'm sure). What does the destination for national AI supremacy look like? How do we stir the imagination in the same way as going to the moon?
A moonshot means solving many problems in pursuit of a common goal. To a non-expert like me, every AI project looks like an unrelated problem.
Now the article has me thinking. It seems like there are two approaches that might be useful: go deep or go wide.
The moon landing is the deep approach. A singular goal with a defined endpoint. This suggests a goal that anyone can imagine themselves doing, no matter how unlikely. I dreamed of being an astronaut like any kid my age in the 1960s and followed every mission closely.
That brings up another challenge. Back then, there were only three TV channels and they all covered the same thing (although Jules Bergman on ABC was the best!). Even if we come up with a single inspirational project, breaking through the noise (not to mention the inevitable conspiracy theories) will be difficult.
The deep approach will be successful when we’re at the water cooler saying “Did you see (insert name here) do XYZ on the news last night?”
The wide approach is something like the interstate highway system. What advancements brought about by AI would make life better for everyone?
Here is a list of challenges that come to mind.
Cure cancer
Counter disinformation
Improve health for everyone
Is usable nuclear fusion too big of an ask?
Affordable higher education (reducing student debt)
Reduce time spent in traffic (even a climate denier should agree to this)
I personally think the last one is achievable. Analysis of driving patterns (my phone knows where I’m going on certain days at specific times) and sensors that let cars talk to each other (IoT) could combine to tell us when to leave and what route to take, and to manage the flow of traffic along the way.
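The "when to leave" piece of that idea can be sketched in a few lines. This is a toy illustration under made-up assumptions, not a real traffic system: the data and function names are hypothetical, and a real system would fold in live sensor and traffic feeds rather than just historical averages.

```python
# Hypothetical data: average minutes of travel time for one commute,
# keyed by departure time, as a phone might learn from past trips.
historical_travel_minutes = {
    "07:00": 22,
    "07:30": 35,
    "08:00": 48,
    "08:30": 41,
    "09:00": 27,
}

def best_departure(history):
    """Return the departure time with the lowest average travel time."""
    return min(history, key=history.get)

print(best_departure(historical_travel_minutes))  # -> 07:00
```

Even this crude version captures the appeal: the decision is personal (your routes, your schedule), the benefit is immediate, and the data already exists.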
But wait. I still have a few questions.
Why do we assume that more STEM graduates are the answer?
Don’t we also need people who understand behavioral economics, sociology, etc. to provide input to solve those problems?
What are the most important problems to be solved?
Do we need AI to solve them?
I’d love to hear your thoughts. If I got anything wrong, missed a key point or you just think I’m an idiot on a rant, let me know in the comments.
I’m still learning and I will bring it all back to life science eventually. I promise.