Physicist Michio Kaku exposes ‘dangerous’ side of AI chatbots

A famed theoretical physicist has issued a stark warning about the dangers of software like ChatGPT.

Michio Kaku said AI chatbots appear to be intelligent but are really only capable of spitting out what humans have already written.

The technology, which is free, is unable to detect whether something is false and can therefore be “tricked” into giving the wrong information.

“Even though there is a good aspect to all these software programs, the downside is that you can fabricate, because it can’t tell the difference between what is true and false,” he said in a recent episode of the Joe Rogan Experience.

“They are just instructed to cobble together existing paragraphs, splice them together, polish it up and spit it out. But is it correct? It doesn’t care, and it doesn’t know.”

Physicist Michio Kaku has issued a stark warning about the dangers of software like ChatGPT. (PowerfulJRE/YouTube)

“A chatbot is like a teenager who plagiarises and passes things off as their own.”

However, Kaku said there is a possibility that quantum computing, which relies on atoms rather than conventional microchips, could one day be adapted to act as a fact checker.

Kaku believes the power of quantum computing could fix the accuracy problems of today's consumer chatbots.

“When they get together, watch out,” he said.


Kaku said AI chatbots appear to be intelligent but are only capable of spitting out what humans have already written. (Getty Images)

“Quantum computers can act as a fact checker. You can ask it to remove all the garbage from articles. So the hardware may act as a check for all the wild statements made by the software.”

Kaku’s warning came after Geoffrey Hinton, an AI pioneer known as the “godfather of artificial intelligence”, announced his resignation from Google, citing growing concerns about the potential dangers of artificial intelligence.

He said AI systems like GPT-4 already eclipse humans in terms of general knowledge and could soon surpass them in reasoning ability as well.

In the few short months since it became available, people have already used the service to generate income.


Kaku's warning is that AI cannot detect whether something is false and can therefore be "tricked" into giving the wrong information. (PowerfulJRE/YouTube)

Hinton described the "existential risk" AI poses to modern life, highlighting the possibility that corrupt leaders could use it to interfere with democracy.

He also expressed concern about the potential for “bad actors” to misuse AI technology, such as Russian President Vladimir Putin giving robots autonomy that could lead to dangerous outcomes.

“Right now, what we’re seeing is things like GPT-4 eclipse a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning,” he said in a recent interview aired by the BBC.

“And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”


Source: New York Post