First, the good news. The world is starting to regulate AI applications and development. The regulations differ from one part of the world to another, but at least there are attempts to enforce some control over AI.
However, there is no doubt in my mind that an arms race for AI supremacy is underway, and the military superpowers in particular are enormously focused on being the best at AI. Why? Simply because nobody can afford not to be.
This is exactly what happened during World War II, when the atomic bomb was developed. If you haven't already, you must watch the fantastic movie Oppenheimer. There are so many parallels between what was going on back then and what is going on now in AI development.
For one thing, during the Manhattan Project, the scientists calculated that there was a risk the first nuclear test would in fact ignite the atmosphere. The risk was not high, but it was not zero either. That risk was lower than the risk we see today that AI will wipe out humanity.
At the same time, no military force can afford not to develop AI-powered weaponry. They would immediately fall behind. Look at the recent development of drones in Ukraine. Many say that we will soon see fully automated drone swarms. If one side did not develop AI applications to either support such an attack or fight it off, they would lose.
The problem is, as I've been saying for quite some time now, and as many prominent AI scientists agree, that AI could without a doubt wipe out humanity. This follows from AI's potential ability to improve itself. That ability might not exist today, but as we saw with ChatGPT, a new AI application can suddenly appear in the wild that is far better than anyone imagined. Before ChatGPT became publicly available, this kind of AI power was still believed to be years away.
Regulations will control some AI development, at least in some parts of the world. But the arms race continues, and I'm 100% convinced that the most advanced AI has not been revealed – it's still in the shadows. Regulations have no effect on that hidden AI.
What we need is something similar to the Treaty on the Non-Proliferation of Nuclear Weapons: an agreement in which all countries of the world commit to controlling AI in an ethical way. AI-powered weapons must not be developed, or we could really see the end of humanity.
Do you agree or think otherwise? Please let me know.
Did you like this post? Don't miss future posts – subscribe to my newsletter below and receive the latest news every Friday morning.