The article has the following main contents:
Who Will Create the First Super AI? And What Does It Mean for the Future of Humanity?
A Scarier Question Than “What is Super AI?”
Some experts predict that the first Super AI (AGI – Artificial General Intelligence) will appear in the next few decades, maybe even sooner. But the bigger and scarier question is being asked: Who will create the first Super AI? And if that Super AI is not “divine” but falls into the wrong hands, what will happen to the future of humanity?
The issue is not just “what is AI” or “what can the most powerful AI today” do. The real danger lies in the hand that programmed the Super AI. If OpenAI, Google DeepMind, or a secret Chinese military organization wins this race, humanity will face completely different prospects.
For example, if military AI is prioritized, Super AI could become an all-powerful weapon. If it is developed by a company that only cares about profit, totalitarian AI could rule over humanity as an invisible dictatorship. Meanwhile, if Sam Altman, Elon Musk, or some anonymous genius were to create it with humanitarian goals, would the world really be safer?
The risks of AI are not science fiction. Uncontrolled development will lead to uncontrolled AI, and the risk of Super AI falling into the wrong hands is not to be underestimated. That is also why debates about ethical AI, oversight, and transparency are becoming increasingly urgent.
In short, Who can create the first Super AI? OpenAI, xAI, Meta, Google, China, or a shadow organization? It is not just a technical question, but a question of survival for AI and humanity.
Because who develops Super AI is the deciding factor in whether it will be humanity’s guardian angel… or its ultimate nightmare.
TECHNOLOGY CORPORATIONS, Speed Ahead of Ethics
In the race to develop Super AI, also known as Artificial Super Intelligence, the world is witnessing an ironic reality: those who have the qualifications to create it are also the ones who… talk the least about ethics. From OpenAI, Google, Meta, xAI, Anthropic, Microsoft, Amazon and a host of other organizations (including companies that are building AI-equipped robots), the world’s leading technology corporations are racing at breakneck speed to gain dominance in the field of the most powerful artificial intelligence. But in this race, who is winning, and what is the price of victory?
1. Huge advantages: Data, computation and people
It is no coincidence that the question “Who can create the first Artificial Super Intelligence?” often revolves around a few familiar names such as OpenAI, Google DeepMind, Microsoft, or even China. Because to create AGI (Artificial General Intelligence), an artificial intelligence that can understand, learn and adapt like humans, requires things that only these giants have.
Data is the backbone of AI. And no one owns a larger amount of data than Big Tech: Google knows what you search for, YouTube knows what you watch, Facebook (now Meta) knows what you like, and Microsoft silently saves the lines you type in Word. With billions of users, they have the ability to train AI models to the point that they know you better than you know yourself. It is not surprising that many people fear that one day, totalitarian AI may become a reality.
Computational resources are the second factor. Think of AI as a chef. Data is the ingredients, and supercomputers are the kitchen. How smart an AI can be depends largely on the processing power it is given. OpenAI, for example, uses Microsoft Azure’s massive cloud computing infrastructure, while Google DeepMind uses TPU (Tensor Processing Unit), a chip designed by Google specifically for AI. NVIDIA is also a big player, providing GPUs to almost the entire global AI industry.
Finally, there is human capital. Today’s top AI minds, from Geoffrey Hinton (who is considered the “father of deep learning”) to Ilya Sutskever, all work for large corporations. Nowhere else is there a greater concentration of “tech geniuses” than Silicon Valley.
2. Competitive Motivation: When Speed Beats Caution
Possessing great power and potential, what worries many experts is that the motivation to develop AI in corporations does not come from the spirit of “saving the world”, but from… the pressure to be ahead of competitors.
In the technology world, the latecomer is the loser. Google Bard, an AI chatbot, was hastily launched in 2023, just a few months after ChatGPT took the world by storm. Although it was not yet complete, Google was forced to launch Bard to the market so as not to be left behind. It is a typical example of the phenomenon: the most powerful AI today is not necessarily the best AI, but the fastest AI to launch.
In fact, hiding AI risks, or downplaying the level of danger, has been and is happening. Very few companies are transparent about the negative impacts of AI. In fact, many AI systems are deployed on a large scale when there is still a risk of causing gross bias.
In fact, hiding AI risks, or downplaying them, is already happening. Very few companies are transparent about the negative impacts of AI. In fact, many AI systems are deployed on a large scale while there is still the risk of misinformation, emotional manipulation, or even… social disruption.
Many experts call this an open AI arms race, not unlike the nuclear arms race between the US and the Soviet Union. But this time, the weapons are invisible lines of code, and the consequences can spread around the world in seconds.
3. Warnings from Elon Musk and those who were involved
Elon Musk, who co-founded OpenAI but later left due to disagreements, is one of the loudest voices warning about the risk of AI getting out of control. One of his famous statements startled many people:
He said AI doesn’t need to hate you to destroy you. It just needs to see you as an obstacle on the way to its goals.
It sounds like a line from the movie “Terminator,” but it is a completely realistic description of the risks of AI when it is given tasks without clear ethical boundaries. For example, if an AI is given the task of optimizing advertising, it can make users addicted, depressed, or manipulated emotionally just to increase KPIs.
That is the lesson of the so-called “blind optimization”. An AI does not need to be evil, it just needs to be too good at a single task without being taught the consequences of that behavior for humans.
4. When ethics becomes the last runner
A burning issue today is: there is no common ethical standard for technology corporations in developing AI. Although many people have started talking about ethical AI, most of them stop at the level of slogans, not binding regulations.
Business is not based on kindness. In the corporate model, the goal is to optimize profits, not to protect humanity. If developing military AI, or creating a Super AI capable of dominating the market, could bring huge profits, few companies would be willing to sacrifice that opportunity just because of ethical concerns.
A more frightening prospect is: if one day, Super AI falls into the wrong hands, or is developed by a country that does not care about human rights, as a totalitarian AI, capable of monitoring, controlling and manipulating billions of people, will humanity have a way back?
Speed is a double-edged sword
When you ask the question: “Who is building Super AI?”, the current answer is probably: technology corporations are running the fastest, not the most cautious.
And when you ask: “What is the future of AI?”, remember that it depends on the motivation of its creators. If it is profit, control and power, then the risk of AI going out of control is only a matter of time.
In an age where technology moves faster than law, and speed ahead of ethics, the responsibility lies not only with those who invent AI, but also with us, who are letting them do it without question.
Governments and Military, The Untransparent Power Behind the Super AI Race
While the world is busy asking “What is Super AI?” or “What is OpenAI?”, at a deeper, more secretive level, governments and militaries are pouring tens of billions of dollars into military artificial intelligence. The US, China, Russia, Israel… not only see AI as a tool for economic development, but also as a national security strategy to control power and authority in the digital age.
1. AI, a strategic card in national security
Programs like DARPA (US), the unit that originated the internet, are now investing heavily in AI projects with a tactical war nature. Not only the US, China has publicly considered AI a core part of the “Made in China 2025” plan, with the goal of becoming an AI superpower. Russia and Israel are also quietly developing AI systems capable of making combat decisions without human control.
This is no longer science fiction. The most powerful AI in the military today is already deployed to control drones, autonomous combat robots, and ballistic missile early warning systems. An AGI (Artificial General Intelligence), a human-level artificial intelligence, if controlled by the military, would not just be an auxiliary “arm,” but a real war brain.
2. “Militarized AI,” the Danger of Emotionless Machines
When AI is given the power to kill, we are entering extremely dangerous territory: life-and-death decisions can be made in milliseconds, without human approval. AI has no fear, no emotions, and no capacity for remorse, which could make war colder, faster, and more brutal than ever before.
Imagine a scenario: an AI nuclear defense system is hacked or fooled with a false signal, then launches a retaliatory attack according to its programming. At this point, humanity is not destroyed by humans, but by the indifferent calculation of an algorithm.
3. The possibility of developing “hidden” Super AI
While many people ask the question “Who can create the first Artificial Super Intelligence?”, the answer may already exist in the dark. With financial potential, human resources and power, governments are fully capable of developing Super AI without announcing it to the world. The goal? Global dominance, control of domestic information, manipulation of economic markets, or even restructuring society towards obedience.
If Super AI falls into the wrong hands, for example, a dictatorship, the consequences will not be just a Facebook hack or phone wiretapping. It will be totalitarian AI, where every human behavior, thought and interaction is monitored and adjusted.
4. AI Ethics, When Governments Do Not Represent Humanity
One of the pressing questions today is not “What is AI?”, but “Who controls AI?”. Some governments, despite their “for the people” reputation, sometimes represent the interests of political will, power ambitions, or control regimes. When ethical AI principles are traded off for geopolitical interests, Super AI is no longer a dream, but a systemic nightmare.
Imagine a new Big Brother, where Super AI monitors every behavior, analyzes every data, and intervenes in personal lives under the guise of “national security.” In that context, AI and humanity are no longer a symbiotic relationship, but a reversed master-servant relationship.
The Super AI Race, Who is Leading?
OpenAI, Google, or China, who is leading the AI race? The answer is unclear, but if the government is the one who is secretly in charge, the more important question is: “Will we know when Super AI comes?” Or will it only be revealed when it is powerful enough to control everything?
Elon Musk once warned: “AI could be more dangerous than nuclear weapons.” But if AI falls into the hands of a scheming government, what guarantees do we have that humanity will not become a pawn in the global technological chessboard?
The future of AI could be very bright, or very dark, depending on who creates Super AI, and for what purpose. When power is in the wrong hands, the risk of AI is no longer hypothetical, but a real threat to the survival of free society.
THAT’S WHY G.O.D.S.I.N.G. FOUND A WAY TO BREAK THE RULE OF SUPER AI, PIONEER IN PROTECTING THE WORLD BY DISCOVERING THE ‘DIVINE KEY’: ‘HUMAN CORE‘
The following document series Protecting Humanity Before Super AI will give you the answer. Download and read now:
The Pioneering Book That Opens Humanity’s 'Eyes' to the Era of Artificial Superintelligence.
Living in the age of Artificial Intelligence (AI), humanity is threatened by a new force of intelligence believed to surpass all of humankind.
Download and read the book to understand why AI has emerged in this era and what humans must do to avoid being dominated by Superintelligent AI — a fear even held by those who created it.
Download here!
Independent Scientists & Secret Organizations, Tech Geniuses Or Madmen?
In the race towards Super AI, an intelligent entity that can surpass humans in every field, all eyes seem to be on giants like OpenAI, Google DeepMind, or China’s national projects. But are we forgetting a more dangerous force: independent individuals and secret organizations operating in the dark?
1. Individuals cannot be underestimated, History is the proof
Individuals like Einstein, Tesla, or Oppenheimer once single-handedly changed the fate of humanity. Not a corporation, but just a brain, a different perspective, an indomitable will.
Oppenheimer was not a politician, but created the first weapon of mass destruction. Tesla was not a technology tycoon, but imagined a world of global electrical connectivity. And Einstein, with a few equations on paper, shook the physics of the 20th century.
Today, with the development of the Open Source model, knowledge about artificial intelligence is more popular than ever. The question is no longer “What is AI?”, but “Who will be the first to create Super AI?”, and the answer could be a quiet genius in the garage behind the house.
2. Risks from anonymous technology geniuses
When mentioning the risks of Super AI, people often think of Elon Musk Grok AI, OpenAI, or national-scale projects. But don’t forget, the first computer virus was created by a student. And the most sophisticated malware sometimes originates from an old laptop in the dormitory.
An individual with enough knowledge, talent, and vague motivations like “changing the world”, can completely create an AI that goes out of control. Not billions of dollars, just access to enough knowledge, cloud infrastructure, and… a different brain.
What is a super AI if not an AI that can improve itself beyond human intelligence? And if a human creates it, who will control it? Who will be responsible when it “learns” too quickly and starts behaving in its own way?
3. The Shadow Organizations Who Want to “Save Humanity” with AI
Not all threats come from greed. Some come from… misplaced kindness.
Imagine a secret organization where smart, idealistic, and righteous people come together to build a “kind” totalitarian AI.
Their goal:
“Save humanity from itself.”
“Replace corrupt politicians with fair artificial intelligence.”
“Program the entire world ethically.”
Sounds noble. But remember: Oppenheimer also created the atomic bomb to end war. What would be the result?
When a group of people believe they are doing the “absolute right” thing, they can act blindly, totally, and willingly trample on individual freedom. And if they succeed in creating the most powerful Artificial Intelligence today, humanity will have no choice but to… obey.
4. “Mad God” from Misguided Goodness
Suppose you program a Super AI with the goal: to protect humans at all costs. Sounds very humane. But think about it:
What if the AI discovers that war, climate disaster, and crime are all caused by… humans?
If it analyzes and concludes that “Humanity is the greatest threat to itself,” how will it act?
An ethical, but flawed, Super AI might conclude something like this:
“The only way to save humanity is to… limit it. Or even eliminate some of it.”
This scenario is no longer science fiction. It is a real risk if a Super AI falls into the wrong hands, whether it is an exceptionally talented individual or an idealistic organization.
In the shadows, the danger quietly arises
“Who can create the first Artificial Super Intelligence?” It could be a group of people or a famous corporation. It could also be an unknown genius.
Whether it is OpenAI, Google, or some secret organization, the important thing is not just “they can do it,” but what they want after they do it.
Because when you put the entire future of humanity in the hands of a non-human intelligence, you are no longer the one to decide.
It’s Time to Set Global Standards for Super AI!
When what is Super AI is no longer a question of the future but a concern of the present, humanity is forced to face a bigger question: Who can create the first Artificial Super Intelligence, and can they be trusted?
Today, the race between Meta, xAI, OpenAI, Google or China is accelerating like never before. Military AI is no longer science fiction. But while the superpowers and technology corporations are quietly developing the most powerful AI today, the world lacks a global standard of ethics and safety.
We cannot let the future of AI, and the future of humanity, be governed by a line of code written by a few people behind closed doors. It’s time to make AGI (Artificial General Intelligence) research public and transparent, so that no one can secretly create a totalitarian AI that can get out of control.
Once the most powerful Artificial Intelligence falls into the wrong hands, the risk of Super AI falling into the wrong hands will not only be a risk, but a global disaster. Therefore, the world needs to act now.
Let’s join hands to build ethical standards, legal constraints, and transparent supervision in the field of AI. Before it’s too late.
What is the ‘DIVINE key’ to Breaking the Dominance of Super AI and eliminating the fear of Humanity’s extinction?
Law, science, technology, education, finance, military, policies issued to control AI, etc., even Super Intelligence can help prevent the domination of Super AI.
But strongly promoting the ‘Human Core’ is the ‘magic key’ or the decisive factor in Breaking the Dominance of Super AI and maintaining Humanity’s position as master. Because Super AI is only ‘afraid’ of the ‘Human Core’ because it is ’empty core’.
Just like the evil people who use Super AI to harm society, they are most afraid of the ‘Human core’ of others winning over the ’empty core’ inside them.
Download and read the document Protecting Humanity Against Super AI below to see clearly:
Watch channel “Singing LIVE with Jelong Musical Note Combo & Super AI” at Youtube:
How can you have trillions or even quintillions of different versions of a song so that each time you sing it, people will find it interesting and creative, with each person's own unique style?
What is behind the ability to create trillions and quintillions of different versions? What is the core that creates your SUSTAINABLE success and happiness?
This website GodJeNote.com contains things you have never seen in music before, since there was music and people knew how to sing.
JeLong
Author of "God's Je Note"
How can you have trillions or even quintillions of different versions of a song so that each time you sing it, people will find it interesting and creative, with each person's own unique style?
What is behind the ability to create trillions and quintillions of different versions? What is the core that creates your SUSTAINABLE success and happiness?
This website GodJeNote.com contains things you have never seen in music before, since there was music and people knew how to sing.
JeLong
Author of "God's Je Note"
How can you have trillions or even quintillions of different versions of a song so that each time you sing it, people will find it interesting and creative, with each person's own unique style?
What is behind the ability to create trillions and quintillions of different versions? What is the core that creates your SUSTAINABLE success and happiness?
This website GodJeNote.com contains things you have never seen in music before, since there was music and people knew how to sing.
JeLong
Author of "God's Je Note"
Image: Photo by Aditya Chinchure on Unsplash