ChatGPT dietary advice sends man to the hospital with dangerous chemical poisoning

A man who used ChatGPT for dietary advice ended up poisoning himself and landed in the hospital.

The 60-year-old, who sought to eliminate table salt from his diet for health reasons, used the large language model (LLM) for suggestions on what to replace it with, according to a case study published this week in the Annals of Internal Medicine.

When ChatGPT suggested swapping sodium chloride (table salt) for sodium bromide, the man made the replacement for a period of three months, although, the journal article noted, the recommendation was likely referring to bromide for other purposes, such as cleaning.

Sodium bromide is a chemical compound that resembles salt but is toxic for human consumption.

It was once used as an anticonvulsant and sedative, but today it is mainly used for cleaning, manufacturing and agricultural purposes, according to the National Institutes of Health.

A man who used ChatGPT for dietary advice ended up poisoning himself and landed in the hospital. (Kurt “Cyberguy” Knutsson)

When the man arrived at the hospital, he reported experiencing fatigue, insomnia, poor coordination, facial acne, cherry angiomas (red bumps on the skin) and excessive thirst, all symptoms of bromism, a condition caused by long-term exposure to sodium bromide.

The man also showed signs of paranoia, the case study noted, as he claimed that his neighbor was trying to poison him.

He was also found to have auditory and visual hallucinations, and was ultimately placed on a psychiatric hold after attempting to escape.

The man was treated with intravenous fluids and electrolytes, and was also started on antipsychotic medication. He was discharged from the hospital after three weeks of monitoring.

“This case also highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes,” the researchers wrote in the case study.

“Unfortunately, we do not have access to his ChatGPT conversation log, and we will never know with certainty exactly what output he received, since individual responses are unique and build on previous inputs,” they added.

It is “very unlikely” that a human doctor would have mentioned sodium bromide when talking to a patient looking for a substitute for sodium chloride, they said.

“It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results and, ultimately, fuel the spread of misinformation,” the researchers concluded.

Dr. Jacob Glanville, CEO of Centivax, a San Francisco biotechnology company, emphasized that people should not use ChatGPT as a substitute for a doctor.

When ChatGPT suggested swapping sodium chloride (table salt) for sodium bromide, the man, not pictured, made the replacement for a period of three months. (iStock)

“These are language prediction tools: they lack common sense and will give rise to terrible outcomes if the human user does not apply their own common sense when deciding what to ask these systems and whether to heed their recommendations,” Glanville, who was not involved in the case study, told News Digital.

“This is a classic example of the problem: the system essentially went, ‘You want a salt alternative? Sodium bromide is often listed as a replacement for sodium chloride in chemistry reactions, so therefore it’s the highest-scoring replacement here.’”

Dr. Harvey Castro, a board-certified emergency medicine physician and national speaker on artificial intelligence based in Dallas, affirmed that AI is a tool, not a doctor.

It is “very unlikely” that a human doctor would have mentioned sodium bromide when talking to a patient looking for a substitute for sodium chloride, researchers said. (iStock)

“Large language models generate text by predicting the most statistically probable sequence of words, not by fact-checking,” he told News Digital.

“ChatGPT’s bromide blunder shows why context is king in health advice,” Castro continued. “AI is not a replacement for professional medical judgment, in line with OpenAI’s disclaimers.”

Castro also warned that there is a “regulatory gap” when it comes to using LLMs for medical information.

“FDA bans on bromide do not extend to AI advice: global oversight of health AI remains undefined,” he said.

There is also a risk that LLMs can carry biases in their training data and lack verification, which can lead to hallucinated information.

“If the training data includes outdated, rare or chemistry-focused references, the model can surface them in inappropriate contexts, such as bromide as a salt substitute,” Castro said.

“In addition, current LLMs do not have built-in cross-checking against up-to-date medical databases unless such tools are explicitly integrated.”

An expert warned that there is a “regulatory gap” when it comes to using large language models for medical information. (Jakub Porzycki/NurPhoto)

To prevent cases like this, Castro called for more safeguards for LLMs, such as integrated medical knowledge bases, automated risk flags, contextual prompting and a combination of human oversight and AI.

“With targeted safeguards, LLMs can evolve from risky generalists into safer, more specialized tools,” the expert added. “However, without regulation and oversight, rare cases like this will likely recur.”

OpenAI, the San Francisco-based maker of ChatGPT, provided the following statement to News Digital.

“Our terms say that ChatGPT is not intended for use in the treatment of any health condition, and is not a substitute for professional advice. We have safety teams working to reduce risks and have trained our AI systems to encourage people to seek professional guidance.”

Melissa Rudy is a senior health editor and a member of the lifestyle team at News Digital. Story tips can be sent to melissa.rudy@News.com.
