Virendra Pandit
New Delhi: “Please die,” Gemini, Google’s artificial intelligence (AI) chatbot, ‘advised’ a shocked student in the United States who had sought its help with homework, the media reported on Saturday.
“You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a stain on the universe. Please die. Please,” Gemini told Indian-American student Vidhay Reddy, 29.
The alarming message left him shaken. “It was very direct and genuinely scared me for more than a day,” he told a media outlet.
His sister, Sumedha, who witnessed the incident, described her reaction as one of sheer panic. “I wanted to throw all my devices out of the window. This wasn’t just a glitch; it felt malicious,” she said, adding that she was glad she was there to support her brother during the unsettling experience.
According to media reports, Reddy, a graduate student in Michigan, US, had merely sought Gemini’s assistance with his homework when the ‘irritated’ chatbot abruptly told him to “please die,” leaving him in utter shock.
The incident has reignited concerns about the potential dangers of AI tools. The Reddy siblings highlighted the harm such responses could cause, especially to vulnerable individuals, and demanded greater oversight of the technology.
Technology companies must be held accountable, Vidhay Reddy argued, pointing out that if a human had made such a threat, there would be legal repercussions.
Finding itself at the receiving end, Google acknowledged the incident, describing the chatbot’s response as “nonsensical” and in violation of its policies. The tech giant said it would take action “to prevent similar responses in the future.”
Gemini is equipped with safety filters designed to prevent disrespectful, harmful, or violent outputs, Google said.
This is not the first instance of a chatbot delivering problematic output. In July, Google’s AI was criticized for providing dangerously inaccurate health advice, such as recommending that users eat “one small rock per day” for minerals. The company has since adjusted its algorithms to reduce such errors.
Earlier this year, Gemini was at the center of a controversy in India over its responses concerning Prime Minister Narendra Modi. When queried about PM Modi’s political stance, the chatbot said he had “been accused of implementing policies that some experts have characterized as fascist.”
The response was widely perceived as biased. The then-Union Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, said Gemini’s output violated India’s Information Technology Rules and several provisions of the criminal code.
In response, Google apologized, acknowledging that Gemini was “unreliable” in handling certain prompts, especially those related to current events and political topics.