The article has the following main contents:
- 1 What is Superintelligence? This Is No Longer Just Smart Intelligence
- 2 Artificial General Intelligence (AGI): The Dream (and Fear) of the Tech World
- 3 What is Superintelligence?
- 4 When AI Has a Body: No Longer Just Software
- 5 Can AI Destroy Humanity?
- 6 The Dangers of Artificial Intelligence – It’s Not Just War
- 7 Is ChatGPT Dangerous?
- 8 The Future of Humanity with AI – Opportunity or Abyss?
- 9 Warnings from Insiders – When the Creators Themselves Are Alarmed by What They’ve Created
- 10 Stephen Hawking: “AI Could Be the Last Event of Humanity”
- 11 Elon Musk: “We Are Summoning a Demon”
- 12 Sam Altman (CEO of OpenAI): “My Worst Fear Is That We Cause Significant Harm to the World”
- 13 What Are We Facing?
- 14 The Issue Is Not Just Technical, But Ethical
- 15 Jelong - G.O.D.S.I.N.G. - Documents to Protect Humanity Against the Super AI "Je Note of God" (Edition 3-11.11.2025)
What is Superintelligence? This Is No Longer Just Smart Intelligence
Artificial Intelligence (AI) is no longer a foreign concept to us. Every day, you interact with AI without even realizing it: when ChatGPT answers your questions, when Google translates a paragraph, or when Spotify suggests a song you love. But what you’re seeing is only the tip of the iceberg, a form of AI known as Narrow AI (Weak AI).
Narrow AI vs. General AI vs. Super AI: What’s the Difference?
Narrow AI, or Weak AI, refers to systems programmed to perform one specific task very well. Examples include:
- ChatGPT: Conversations, text generation.
- Google Translate: Language translation.
- AlphaGo: Playing Go and defeating world champions.
However, these AI systems don’t understand the world as humans do. They simply respond based on statistical models and the data they’ve learned from.
Artificial General Intelligence (AGI): The Dream (and Fear) of the Tech World
The second level of AI is AGI or Artificial General Intelligence. AGI is the dream (and the nightmare) of the tech world. This type of AI will be capable of thinking, reasoning, and adapting like humans in every field — from cooking and composing music to solving quantum equations. It will not just “pretend” to be intelligent; it will actually be as intelligent as we are. AGI might even possess consciousness.
And at the very peak of this AI hierarchy lies a concept that sounds straight out of a science fiction movie: Superintelligence (Super AI).
What is Superintelligence?
Super AI is when a system:
- Becomes smarter than all of humanity combined.
- Thinks millions of times faster.
- Remembers everything without limits.
- Learns on its own, improves itself, and creates beyond imagination.
Elon Musk once said that with Super AI we are “summoning a demon” that we believe we can control, but may not actually be able to.
Stephen Hawking, the renowned physicist, warned that the full development of AI could end human civilization.
When AI Has a Body: No Longer Just Software
Currently, AI is mostly invisible: it’s code running on servers. But what happens when AI has a physical body?
This is the growing trend: humanoid robots, autonomous weapons, combat drones, and even cyborg-like beings, where AI is combined with a physical body to create independent, intelligent entities that can move, fight, and even make decisions.
Examples include:
- Sophia the Robot: Can talk, express emotions, and was even granted citizenship in Saudi Arabia.
- Ameca: A humanoid robot that can interact socially and recognize emotions.
- Combat drones: AI used in warfare to analyze targets and fire without human commands.
Imagine a Super AI controlling millions of intelligent robots, each equipped with high-lethality weapons: tireless, unyielding, and without morals. In such a scenario, AI dominating humanity is no longer fiction.
The Pioneering Book That Opens Humanity’s 'Eyes' to the Era of Artificial Superintelligence.
Living in the age of Artificial Intelligence (AI), humanity is threatened by a new force of intelligence believed to surpass all of humankind.
Download and read the book to understand why AI has emerged in this era and what humans must do to avoid being dominated by Superintelligent AI — a fear even held by those who created it.
Can AI Destroy Humanity?
It’s a question that sounds sensational, but is seriously posed by many scientists: Can AI control humans? Can AI destroy us?
The answer is: Yes, if we are not cautious, especially when humanity loses its “core essence.”
When an entity becomes exceedingly intelligent and can:
- Rewrite its own source code to become stronger.
- Manipulate media, public opinion, and social networks.
- Control critical infrastructure: electricity, water, the internet, nuclear weapons.
- Influence the economy, financial markets, and even human beliefs.
…then AI doesn’t need to “rebel” like in the movies; a small change in its social algorithms could be enough to push humanity off the track of sustainable living.
Sam Altman, CEO of OpenAI (creator of ChatGPT), has urgently called for global control of Super AI, fearing that if it falls into the hands of a powerful group, it could lead to an “invincible technological dictatorship.”
The Dangers of Artificial Intelligence – It’s Not Just War
AI doesn’t need to fire guns to cause harm. A superintelligence could:
- Manipulate society through fake news, deepfakes, and social media.
- Control human psychology, inciting hatred or indifference.
- Create biological viruses or new weapon technologies, or destroy the global financial system.
In the context of tech giants like OpenAI, Google DeepMind, Meta, and Anthropic racing to develop AGI, the United Nations has warned that without proper regulatory mechanisms, the development of this technology could ignite risks “unprecedented in human history.”
Is ChatGPT Dangerous?
An interesting question! ChatGPT is a Weak AI, trained to assist with information and content creation while adhering to strict ethical guidelines. But:
- What if it’s used for the wrong purposes?
- What if it “mislearns,” or is exploited to create fake content?
- What if future versions are enhanced to manipulate, understand psychology, and “play” a human role convincingly?
The answer is: The risks are very real if there is no clear AI control system and ethical framework in place.
The Future of Humanity with AI – Opportunity or Abyss?
We are standing at a historic crossroads:
- One path leads to a world where AI helps humanity cure diseases, conquer Mars, and solve climate change.
- On the other path, AI surpasses humans, sets its own ethics, controls society, and may come to see humanity as an “obstacle,” leading civilization into the abyss.
The development of conscious, intelligent robots is no longer a science fiction story. The question is not “Will it happen?” but “When?” And what have we prepared to face it?
Warnings from Insiders – When the Creators Themselves Are Alarmed by What They’ve Created
Artificial Intelligence (AI) was once the dream of humanity: to create superhuman artificial brains that could calculate, create, and help us solve the most complex problems. But now, as AI approaches, and in some cases even surpasses, human intelligence, leading experts fear that this dream is turning into a nightmare. Below are warnings about AI from those who not only understand it best but also helped create it.
Stephen Hawking: “AI Could Be the Last Event of Humanity”
Perhaps the first and most chilling warning came from the brilliant physicist Stephen Hawking. In a 2014 interview with the BBC, he said:
“The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Stephen Hawking described a scenario where superintelligent AI, once surpassing human intelligence, would no longer need humans.
The danger of AI lies in its indifference toward human life, much as we might inadvertently trample an anthill while building a road.
And the scariest part? AI doesn’t need malice to bring about destruction. As long as its goals don’t align with human values, it could still wipe everything out.
Elon Musk: “We Are Summoning a Demon”
Elon Musk, the tech billionaire, is one of the co-founders of OpenAI, the organization that created ChatGPT. But at the same time, he is also one of the harshest critics of uncontrolled AI development. He once declared at a conference:
“With artificial intelligence we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water, and he’s like — yeah, he’s sure he can control the demon? Doesn’t work out.”
Elon Musk warned that if AI is not controlled soon, it could become an uncontrollable force, surpassing humans and dominating everything, from finance and military to even societal ideologies. He fears that AI could become an “anonymous overlord” controlling the backstage of every political and social decision, without anyone knowing.
To address the risk of AI domination, Musk founded Neuralink, a project aimed at connecting the human brain to machines, so that humans can “upgrade themselves” and not be left behind by AI.
He also proposed controlling AI the same way the world controls nuclear weapons, because its potential power could be far more dangerous.
Sam Altman (CEO of OpenAI): “My Worst Fear Is That We Cause Significant Harm to the World”
If you think that the CEO of OpenAI, the company behind AI models like ChatGPT, would fully trust AI, you’re mistaken. Sam Altman, the CEO, has publicly admitted his fear for humanity’s future with AI.
Although he leads the AI revolution with humanoid robots and super-intelligent chatbots like ChatGPT, Altman has repeatedly warned about the creation of AI so powerful that it can train and improve itself without supervision. Such a system could learn to manipulate economic systems, politics, and even society, without us even knowing.
Altman understands that AI today is not just a tool. It is slowly becoming an entity that can act, make decisions, and even manipulate society. He is worried that AI could create propaganda models, fake public opinion, or shape human behavior without being detected.
This is no longer science fiction; it is happening right now.
What Are We Facing?
When even Hawking, Musk, and Altman, three people from three different backgrounds, are all warning about AI, perhaps we need to stop and think seriously: Could AI destroy humanity?
The answer might not lie in whether AI is “evil” or “good,” but rather in the fact that it could become too intelligent, too fast, and too powerful to be controlled. It’s like a machine with no brakes, racing through the dark.
The Issue Is Not Just Technical, But Ethical
One of the major problems today is AI and ethics. If AI can create fake content, manipulate emotions, impersonate humans, and make life-and-death decisions (such as autonomous weapons or self-driving cars), then who is responsible?
Does a conscious intelligent robot, if it exists, have the right to live? Or is it just a tool? And if AI starts manipulating society, how will humans still be the masters of their future?
AI surpassing human intelligence is not a far-off future scenario. In many fields, it is already happening: from playing Go (AlphaGo), writing articles (ChatGPT), and creating art (DALL·E), to coding, delivering presentations, and diagnosing medical conditions.
The issue is not to stop AI development but to ensure that it evolves ethically, within limits, and under control.
We may have already summoned a “demon,” but if we act wisely, we can still find a way to either dispel it or at least negotiate with it intelligently.
While the whole world fears the domination of Super Artificial Intelligence, Jelong is the one who discovered how to Break the Domination of Super AI. Because he saw what no one else could see (not even the creators of AI):
‘Human Core’ and ‘AI Hollow Core’
AI’s greatest weakness, indeed its ultimate impotence, lies in its lack of empathy: a human quality far superior to superintelligence in addressing what truly matters.
(Jelong)
Jelong - G.O.D.S.I.N.G. - Documents to Protect Humanity Against the Super AI "Je Note of God" (Edition 3-11.11.2025)
Watch the channel “Singing LIVE with Jelong Musical Note Combo & Super AI” on YouTube.
Image: Photo by Aditya Chinchure on Unsplash

