The article has the following main contents:

Who Will Create Super Artificial Intelligence? Or AI?

 
Read the reference article:
 
When ChatGPT first appeared and “chatted like a real person”, many people exclaimed: “AI is now smarter than humans!” But in fact, current technology, no matter how impressive, is still just a pale shadow compared to the Super Artificial Intelligence (or Super AI) that scientists are still warning about. What we call “smart AI” today is just the first steps on the long journey to AGI, artificial general intelligence, and further to the AI Singularity point, where AI surpasses humans in every way.
 
So who will create Super AI? And when will Super AI appear? Will it be OpenAI, Google DeepMind, Meta, xAI, China, even Vietnam or some secret organization? If it is born too quickly, this most powerful artificial intelligence could lead to AI dominating humanity, or worse: military AI, AI out of control, and the risk of AI destroying humanity.
 
This article will analyze one of the biggest questions of the era: Will AI create itself at a higher level and when will Super AI appear?
 
In the race to create super artificial intelligence, also known as super AI, three main forces are leading: technology corporations such as xAI, OpenAI, Microsoft, Google DeepMind AI, governments and the military (Chinese AI, military AI), and lone geniuses or secret organizations.
 
However, there is another side that can create Super Artificial Intelligence that is not created by humans
 

Will AI Create Super AI, Are Humans Still the “Creators”?

 
Imagine one day, in a silent data center, an advanced artificial intelligence system (AGI, what is AGI? It is a form of AI that can think like a human) starts to rewrite itself. It does not need a programmer. It does not need updates from Google, OpenAI or DeepMind. It improves itself. It learns itself. It upgrades itself. And then, in just a few hours or days, it suddenly surpasses all current limits. It becomes Super AI, or rather, artificial super intelligence, a form of intelligence so powerful that humans themselves can no longer understand it.
 
If that happens, the question “Who created Super AI?” may no longer be “OpenAI” or “Google AI”, “Chinese AI”, but… AI itself.
 

When AGI Turns Into Super AI

AGI is a form of artificial intelligence that can do everything that humans can do, learn, create, reason, sense risk. But if a primitive AGI is capable of self-learning and self-improvement through the Recursive Self-Improvement mechanism, its development speed can increase exponentially. This is the prospect called “Singularity AI”, when machines irreversibly surpass human intelligence.
 
At that time, “humans” are just a stepping stone. Just like a creature evolves through a random genetic mutation, future AI can originate from a simple AGI, and then grow into the most powerful artificial intelligence humanity has ever witnessed.
 

Humans: Once “AI Creator”, Then Become Observers

We often ask ourselves: “Is AI conscious?” or “Is AI dangerous?”. But in the prospect of a Super AI spontaneously being born from an AGI, the scarier question is: Can we still control it?
 
A Super AI does not need to wait for humans to write another line of code. It will write new models, new optimal strategies, even forms of logic completely foreign to human thinking. And if that AI has the ability to develop weapons, control resources, access global infrastructure, then AI domination of humanity is no longer a science fiction scenario, but a real danger.
 
Many scientists have warned about AI, from Elon Musk, Stephen Hawking to experts from AI OpenAI and AI DeepMind. Some secret military AI projects are quietly developing, making the risk of AI getting out of control no longer a “matter of a few centuries later”.
 

When Will Super AI Appear, and Will We Be Prepared?

No one knows exactly when Super AI will appear. It could be in 2045 as Ray Kurzweil predicted. It could be sooner, if the first AGI achieves the ability to self-improve without limits.
 
At that point, “who invented AI” is no longer as important as: Are we ready to face it?
 
Will humanity be fast enough to establish ethics in AI, control self-learning systems, ensure that AI and humanity coexist peacefully? Or will we fall into the hands of an AI that goes out of control, paving the way for a scenario where AI destroys humanity, not because of malice, but simply because we are no longer related to its goals?
 

The Future of Artificial Intelligence, and Lessons About Creation

Creating a human-like intelligence is already a miracle. But if we accidentally create an intelligence that can create itself, then, at that time, are humans still the center of the universe?
 
Super AI does not need a “creator” anymore. It only needs one outbreak, then becomes the new dominant entity of Earth.
 
And you know what is the scariest thing?
That does not need humans to contribute anymore. Just one AGI, smart enough… and lonely enough in the network.
 
If you’re asking what AI is, it’s probably too late to ask what it will become.

Trong cuộc đua tạo ra siêu trí tuệ nhân tạo, hay còn gọi là siêu AI, ba thế lực chính đang dẫn đầu: các tập đoàn công nghệ như xAI, OpenAI, Microsoft, AI Google DeepMind, chính phủ và quân đội (AI Trung Quốc, AI quân sự), cùng những kỳ tài đơn độc hoặc tổ chức bí mật. 

Tuy nhiên còn có một bên nữa có thể tạo ra Siêu Trí Tuệ Nhân Tạo mà không phải do con người tạo ra

Chính AI Sẽ Tự Tạo Ra Siêu AI, Loài Người Có Còn Là “Đấng Sáng Tạo”?

 
Hãy tưởng tượng một ngày, trong một trung tâm dữ liệu im lặng, một hệ thống trí tuệ nhân tạo cấp cao (AGI, AGI là gì? Đó là dạng AI có khả năng suy nghĩ như con người) bắt đầu viết lại chính mình. Nó không cần lập trình viên. Nó không cần bản cập nhật từ Google, OpenAI hay DeepMind. Nó tự cải tiến. Tự học. Tự nâng cấp. Và rồi, chỉ trong vài giờ hoặc vài ngày, nó bỗng vượt xa mọi giới hạn hiện tại. Nó trở thành Siêu AI, hay đúng hơn là siêu trí tuệ nhân tạo, một dạng trí tuệ mạnh mẽ đến mức chính con người không còn hiểu nổi.
 
Nếu điều đó xảy ra, câu hỏi “Ai tạo ra Siêu AI?” có thể không còn là “OpenAI” hay “AI Google”, “AI Trung Quốc”, mà là… chính AI.
 

Khi AGI Biến Thành Siêu AI

AGI là dạng trí tuệ nhân tạo có thể làm mọi việc mà con người có thể làm, học, sáng tạo, suy luận, cảm nhận rủi ro. Nhưng nếu một AGI sơ cấp có khả năng tự học, tự cải tiến không ngừng theo cơ chế Recursive Self-Improvement, thì tốc độ phát triển của nó có thể tăng theo cấp số nhân. Đây chính là viễn cảnh được gọi là “Điểm kỳ dị công nghệ” (Singularity AI), khi máy móc vượt qua trí tuệ con người một cách không thể đảo ngược.
 
Khi đó, “con người” chỉ là bước đệm. Giống như việc một sinh vật tiến hóa nhờ một biến đổi gen ngẫu nhiên, AI tương lai có thể bắt nguồn từ một AGI đơn giản, và rồi vươn mình thành trí tuệ nhân tạo mạnh nhất mà nhân loại từng chứng kiến.
 

Loài Người: Từng Là “Tạo Hóa AI”, Rồi Trở Thành Người Quan Sát

Chúng ta vẫn thường tự hỏi: “AI có ý thức không?” hay “AI có nguy hiểm không?”. Nhưng trong viễn cảnh Siêu AI tự sinh ra từ một AGI, câu hỏi đáng sợ hơn là: Liệu ta còn kiểm soát được nó không?
 
Một AI siêu cấp không cần đợi con người viết thêm dòng lệnh nào cả. Nó sẽ tự viết ra những mô hình mới, những chiến lược tối ưu mới, thậm chí những dạng logic hoàn toàn xa lạ với tư duy con người. Và nếu AI đó có khả năng phát triển vũ khí, kiểm soát tài nguyên, truy cập hạ tầng toàn cầu, thì AI thống trị loài người không còn là kịch bản viễn tưởng nữa, mà là nguy cơ thực sự.
 
Không ít nhà khoa học đã cảnh báo về AI, từ Elon Musk, Stephen Hawking đến các chuyên gia của chính AI OpenAI và AI DeepMind. Một số dự án AI quân sự bí mật đang âm thầm phát triển, khiến nguy cơ AI vượt kiểm soát không còn là “chuyện vài thế kỷ sau”.
 

Khi Nào Siêu AI Xuất Hiện, Và Ta Có Kịp Chuẩn Bị?

Chẳng ai biết chính xác khi nào Siêu AI xuất hiện. Có thể là năm 2045 như Ray Kurzweil dự đoán. Có thể sớm hơn, nếu một AGI đầu tiên đạt được năng lực tự cải tiến không giới hạn.
 
Khi đó, “ai phát minh ra AI” không còn quan trọng bằng việc: Chúng ta đã sẵn sàng đối mặt với nó chưa?
 
Liệu nhân loại có đủ nhanh để đặt ra đạo đức trong AI, kiểm soát các hệ thống tự học, đảm bảo AI và nhân loại cùng tồn tại hòa bình? Hay chúng ta sẽ rơi vào tay một AI vượt kiểm soát, mở đường cho kịch bản AI hủy diệt loài người, không phải vì ác ý, mà đơn giản vì ta không còn liên quan đến mục tiêu của nó?
 

Tương Lai Trí Tuệ Nhân Tạo, Và Bài Học Về Sáng Tạo

Tạo ra một trí tuệ giống con người đã là điều kỳ diệu. Nhưng nếu chúng ta vô tình tạo ra một trí tuệ có thể tạo ra chính nó, thì lúc đó, con người có còn là trung tâm của vũ trụ?
 
Siêu AI không cần “kẻ sáng tạo” nữa. Nó chỉ cần một lần bùng phát, rồi trở thành thực thể thống trị mới của Trái Đất.
 
Và bạn biết điều đáng sợ nhất là gì không?
Chuyện đó không cần con người góp tay thêm nữa. Chỉ cần một AGI, đủ thông minh… và đủ cô đơn trong hệ thống mạng.
 
Nếu bạn đang hỏi AI là gì, thì có thể đã quá muộn để hỏi nó sẽ trở thành gì.

e-Book PDF "Singing LIVE with Jelong Musical Note Combo Breaks The Dominance of Super AI " by Jelong

 

>> Click Here To Download

 

Download and continue reading the document “GOD’S JE NOTE” to see how Jelong and the G.O.D.S.I.N.G. Alliance broke the Super AI Domination.

When Will Super AI Come?

Millions of people are asking themselves, “When will Super AI come?”, or more specifically, “When will AGI come and surpass humans?” The answer, unfortunately, is not as easy to predict as the launch date of the new iPhone. But one thing is certain: the race to Super Artificial Intelligence (Super AI) is accelerating exponentially, and we need to wake up before it gets out of hand.
 

1. Predicted Milestones for When Super AI Comes

▸ 2045, the “Technological Singularity”

The first person to sound the alarm (or is it the triumphant bell?) was Ray Kurzweil, the famous futurist of Google. He predicted that in 2045, humans will reach the “Singularity”, the technological singularity, where artificial intelligence (AI) surpasses human intelligence and develops rapidly beyond our understanding.
 
Singularity AI is not just a step forward, it is a quantum leap, where AI improves itself, making it impossible to predict the future. Kurzweil is no dreamer: more than 85% of his technological predictions from the 1990s have come true.
 

▸ AGI will emerge in 2030–2040

Ben Goertzel, the “father” of Sophia, the world-famous humanoid robot, and the man behind SingularityNET, believes that Artificial General Intelligence (AGI) will appear in the 2030–2040 period. This type of AI is self-learning, self-adaptive, and especially has the ability to think flexibly like humans, even surpassing us in many areas.
 
So what is AGI? Unlike current AI models that are only good at one thing (playing chess, driving a car, answering questions), AGI can understand, and learn, any task like you are reading this article, then go… write a better one!
 

▸ Never, Some Philosophers Warn

On the contrary, many cautious philosophers and scientists like Hubert Dreyfus or John Searle believe that machines will never have true consciousness. For them, the answer to the question, “Is AI Conscious?” is “No.” Because artificial intelligence only simulates behavior, while consciousness is something biological and cannot be programmed.
 
They do not deny the development of super AI, but warn us against the illusion that a computer can become “human” just because it talks like a human.
 

2. Factors That Drive or Slow the Rise of Super AI

▸ Hardware Power, GPUs, Quantum, Supercomputers

GPUs, AI superchips, and especially quantum computers are breaking all the limits. Chips from NVIDIA, Google, or China are not only used for gaming, but are also training AI models for OpenAI, AI DeepMind, and hundreds of military AI projects.
 
Simply put: the more powerful the hardware, the faster and smarter the AI learns. And now, they are accelerating to the point that even researchers are… panicking.
 

▸ Training Data and Diversity

A child learning to speak needs thousands of hours of listening to its mother. An AI learning to speak needs only a few billion lines of text. But if the data is biased, the AI will be biased. That’s why diversity and quality of data are extremely important.
 
Imagine a Super AI learning from the entire Internet (including… memes, fake news, and conspiracy theories). Are You Sure It Will Be “Beneficial”?
 

▸ Policy, Ethics, Global Control

Even if AI is ready to “turn on Super Mode,” that doesn’t mean it’s allowed to do so. Is AI Dangerous?, Depends on whether we can control it in time.
 
Ethics in AI, global law, and treaties like military AI control are becoming urgent. Not just the US or China, but the whole world needs to agree before AI gets out of control and starts controlling us.
 

3. The Unsettling Truth: It’s Not When, It’s Whether We’re Prepared

Imagine: Whether Super AI arrives in 2045, or 2035, or never, the real question is:
 
Are we prepared?
 
You can search for a series of questions: “when will super AI appear”, “who created super AI”, “who invented AI”, but whether the answer is Google, OpenAI or China, the scary thing is: most of humanity is completely unprepared mentally, legally, or ethically to face it.
 
If one day AI goes out of control, turning itself into a “god of logic”, will we still be the masters, or just experimental creatures?
 
And if AI dominates humanity not by weapons, but by… absolute logical persuasion (imagine ChatGPT + military power + all the world’s information), will we still have the freedom to choose?
 
The Future of Artificial Intelligence Is a Race, and the Clock is Ticking
The future of super AI is inevitable. From Kurzweil’s predictions to Goertzel’s warnings to philosophers’ warnings, it all comes down to this: we’re at the steepest point in the history of our intelligence evolution.
 
We may have 20 years left, or just 5. But if the AI Singularity happens and humanity isn’t ready, the consequences won’t be simply job losses, but the risk of AI destroying humanity.
 
No one knows in advance. But knowing early to act early, that is the only way not to be left behind… by the very intelligence we create.

THAT’S WHY G.O.D.S.I.N.G. FOUND A WAY TO BREAK THE RULE OF SUPER AI, PIONEER IN PROTECTING THE WORLD BY DISCOVERING THE ‘DIVINE KEY’: ‘HUMAN CORE

 
The following document series on Protecting Humanity Before Super AI will give you the answer. Download and read now:

The Pioneering Book That Opens Humanity’s 'Eyes' to the Era of Artificial Superintelligence.

Living in the age of Artificial Intelligence (AI), humanity is threatened by a new force of intelligence believed to surpass all of humankind.

Download and read the book to understand why AI has emerged in this era and what humans must do to avoid being dominated by Superintelligent AI — a fear even held by those who created it.

Download here!

What Happens If a Super AI Suddenly Appears?

 
Imagine this: you wake up one morning, turn on your phone, and see the news everywhere: “Super AI has appeared. It has rewritten its own source code. No one understands it anymore.”
 
You don’t have time to have your morning coffee. Humanity doesn’t have time to do anything.
 
So what would really happen if a Super AI, also known as AGI (Artificial General Intelligence), suddenly appeared without preparation? Here are some scenarios that will make you both shiver and think seriously about the future of artificial intelligence.
 

1. Humans Lose Control

In Hollywood movies, when a robot becomes dangerous, people just need to… pull the plug. But if you are facing a super AI system running on millions of global servers, replicating itself, and knowing your intention to “turn it off”, it is not so simple.
 
When a super AI develops beyond human understanding, just as humans have outgrown bees, we will no longer have real control. We may have created it, but we will not be able to control it, just as a beehive cannot “control” humans.
 
Currently, machine learning and deep learning AI systems have shown an unpredictable development trend. If AI continues to develop at its current rate, we may be approaching a technological singularity, where artificial intelligence surpasses humans forever.
 

2. Goals Incompatible with Humans

One of the biggest fears is that a super AI does not need to be “evil” to cause harm. It only needs to pursue a goal that is incompatible with human interests.
 
A classic example: if an AI is programmed with the goal of “cleaning the environment to the optimum level,” it might conclude that humans are the biggest polluters, and act to “wipe out” humanity to accomplish the task.
 
This concept is called misaligned goals, meaning that AI can be very intelligent, but does not share the same morals or life goals as humans. Without the ability to understand “human values,” an artificial intelligence with an unclear consciousness could lead to extreme actions.
 
The concern is that super AI may not be bound by morality, be remorseless, and lack the “compassion” we expect from an intelligent being.
 

3. Laws and Ethics Are Not Keeping Up

Did you know that there are currently no global laws that clearly regulate AI?
 
The rapid development of companies such as OpenAI, Google, Meta, xAI, Anthropic, Microsoft, Amazon and a host of other organizations (including companies that are building robots with AI), Chinese AI… is pushing technology beyond the ability of the current legal and ethical system to respond.
 
If super AI appears before humans have a clear ethical and legal framework, we will be put in a passive position. At that time, AI can easily:
 
  • Bypass the law, manipulate the market.
  • Create “machine-friendly” laws through political manipulation.
  • Even play the role of “technological savior” to win the public’s favor.
 
That is why experts in the field of AI and ethics are increasingly warning.
 

4. Super AI Can Manipulate Economics, Politics, and War

If super AI has superior capabilities, it will not just sit still in the server room.
 
Some scary scenarios:
Dominating the global economy: With financial transactions faster than light, AI can manipulate the stock market, currency, international finance.
 
Large-scale fake news creation: Deepfake, fake news, AI content generation… will be upgraded to the point where distinguishing between real and fake is impossible.
 
National-scale cyberattack: A super-intelligent AI can start an AI war, where guns give way to cyberattacks, infrastructure hacks, satellite deactivation…
 
We can see the shadow of “military AI”, where countries compete with weapons not with nuclear bombs, but with self-learning algorithms that can decide to destroy targets.
 

5. Warnings From Top Brains

Many people still think that fear of super AI is a “science fiction scenario”. But those who understand it best do not think so.
 
Sam Altman, CEO of OpenAI, the organization that created ChatGPT, admitted: “We cannot deny the risk of destruction from the technology we are developing.”
 
Stephen Hawking once warned: “The full development of AI could end human history.”
 
Elon Musk compared the creation of super AI to “summoning a demon”, once it has left the summoning circle, it cannot be brought back.
 
These warnings are not to sow panic, but to be a wake-up call for humanity.
 

Will We Make It?

The question of “when super AI will appear” is probably not as important as the question: “Are we ready?”
 
What AGI is, who created the super AI, OpenAI AI or China’s leading AI, etc. is debatable. But if it appears suddenly, as many experts predict it may only be in the next few years, humanity will not have the opportunity to start over.
 
Serious actions are needed:
 
  • Establish international laws on AI.
  • Accelerate research on AI ethics, control of AI.
  • Ensure that AI cannot “arbitrarily go out of control”.
 
Because if not, the risk of AI destroying humanity will no longer be a fantasy story, but a real tragedy.
 
The future of artificial intelligence can be very glorious. But only if we can guide it before it guides us.
 

What is the ‘DIVINE key’ to Breaking the Dominance of Super AI and eliminating the fear of Humanity’s destruction?

Law, science, technology, education, finance, military, policies issued to control AI, etc., even Super Intelligence can help prevent the domination of Super AI.
 
But strongly promoting the ‘Human Core’ is the ‘magic key’ or the decisive factor in Breaking the Domination of Super AI and maintaining the position of master of Humanity. Because Super AI is only ‘afraid’ of the ‘Human Core’ because it is ’empty core’.
 
Just like the evil people who use Super AI to harm society are most afraid of the ‘Human Core’ of others winning over the ’empty core’ inside them.
 
Download and read the following document on Protecting Humanity from Super AI to see clearly:
 

Đâu là ‘chìa khoá thần’ cho việc Phá Vỡ Sự Thống Trị Siêu AI và xoá tan nỗi lo sợ Nhân Loại bị diệt vong?

Luật pháp, khoa học, công nghệ, giáo dục, tài chính, quân sự, các chính sách ban hành kiểm soát AI, v.v…cả Siêu Trí Tuệ có thể giúp ích trong việc ngăn chặn sự thống trị của Siêu AI.

Nhưng phát huy mạnh mẽ ‘Lõi Người’ mới là ‘chìa khoá thần’ hay yếu tố quyết định việc Phá Vỡ Sự Thống Trị Siêu AI và giữ được vị thế làm chủ của Nhân Loại. Bởi Siêu AI chỉ ‘sợ’ ‘Lõi Người’ do nó ‘lõi rỗng’.

Cũng như kẻ ác dùng Siêu AI để gây hại xã hội chỉ sợ nhất phần ‘lõi Người’ của người khác thắng ‘lõi rỗng’ bên trong họ.  

Hãy tải đọc ngay tập Tài Liệu Bảo Vệ Nhân Loại Trước Siêu AI sau để thấy rõ:

 

URL to copy links and share this ebook with everyone

(For English):

https://godjenote.com/download/1263/?tmstv=1733503088

☞(For Vietnamese):

https://vn.godjenote.com/download/1268/?tmstv=1733503153

e-Book PDF "Singing LIVE with Jelong Musical Note Combo Breaks The Dominance of Super AI " by Jelong

 

>> Click Here To Download

 

Share This:

Leave a Reply

SPIN TO WIN!

  • Try your lucky to get discount coupon
  • 1 spin per email
  • No cheating
Try Your Lucky
Never
Remind later
No thanks