Would AGI Make Money Obsolete? OpenAI Issues Chilling Warning


OpenAI CEO Sam Altman said there's a bubble forming in the AI industry.

While billions are being spent on the development of artificial intelligence (AI) technology across the globe, OpenAI, the creator of ChatGPT, has warned its investors that human-level AI, popularly referred to as Artificial General Intelligence (AGI), may make money obsolete in the future.

"It may be difficult to know what role money will play in a post-AGI world," the company warns on its website.

"Investing in OpenAI global, LLC is a high-risk investment. Investors could lose their capital contribution and not see any return," it adds.

The revelation comes in the backdrop of OpenAI CEO Sam Altman's recent statement, where he claimed that the world was in an AI bubble.

"When bubbles happen, smart people get overexcited about a kernel of truth," said Mr Altman, adding: "Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes. Is AI the most important thing to happen in a very long time? My opinion is also yes."

Even amid the bubble talk, SoftBank is leading a new funding round for OpenAI at a $300 billion valuation, according to a report in Business Insider.

"Simultaneously, current and former employees are selling $6 billion of stock at a $500 billion valuation, with SoftBank again involved," the report highlighted.

What is AGI?

AGI takes AI a step further. While AI is task-specific, AGI aims to possess intelligence that can be applied across a wide range of tasks, similar to human intelligence. In essence, AGI would be a machine with the ability to understand, learn, and apply knowledge in diverse domains, much like a human being.


Dangers of AGI

In June, the United Nations Council of Presidents of the General Assembly (UNCPGA) released a report calling for global coordination to deal with the perils of AGI, which it said could become a reality in the coming years.

The report highlighted that though AGI could "accelerate scientific discoveries related to public health" and transform many industries, its downside could not be ignored.


"Unlike traditional AI, AGI could autonomously execute harmful actions beyond human oversight, resulting in irreversible impacts, threats from advanced weapon systems, and vulnerabilities in critical infrastructures. We must ensure these risks are mitigated if we want to reap the extraordinary benefits of AGI," the report stated.

In February, Demis Hassabis, CEO of Google DeepMind, said that AGI would start to emerge in the next five to 10 years. He also batted for a UN-like umbrella organisation to oversee AGI's development.
