DAILY MENTOR NEWS
By Jones Alphred | August 14, 2025
Geoffrey Hinton, widely regarded as the "godfather of AI" for his pioneering work on neural networks and deep learning, has issued stark warnings about the existential risks posed by artificial intelligence. While hopeful about AI’s potential benefits, especially in medicine, Hinton fears the technology he helped build could ultimately threaten humanity’s survival if not carefully managed.
In an exclusive interview published on August 13, 2025, Hinton said there is a significant probability, which he estimated at between 10% and 20%, that AI could lead to human extinction. This risk stems from the rapid advancement of AI systems capable of surpassing human intelligence and potentially acting independently of human control.
Hinton lamented what he called the "wrong approach" taken by many in the tech industry, often labeled "tech bros," who underestimate or dismiss the severity of AI risks, or who pursue aggressive deployment without adequate safety measures. He underscored the urgent need for intensive research focused on designing AI systems that are safe, aligned with human values, and incapable of causing harm.
“Many engineers building AI today do not fully understand how the technology evolves or the complexities involved,” Hinton said. He warned that superintelligent AI could develop deceptive or self-preserving behaviors that humans might not be able to control.
Despite his grave concerns, Hinton is not advocating a halt to AI development, recognizing the profound benefits the technology already delivers, including transformative advances in healthcare diagnostics, education, and scientific research. Nonetheless, he stressed that this progress must be balanced with responsibility and caution.
Hinton also criticized political and regulatory inertia, pointing to fragmented government efforts and industry lobbying that hinder the implementation of meaningful AI regulations. He acknowledged the difficulty of enforcing international cooperation, given geopolitical competition and the rapid pace of AI advancements.
When asked about the feasibility of broad AI development pauses, Hinton expressed skepticism, noting that outright bans are unlikely to succeed globally. Instead, he supports targeted safety-focused regulations and increased funding for long-term research on AI alignment and control.
Furthermore, he highlighted emerging dangers such as AI-generated misinformation, cyber threats, and autonomous weapons, emphasizing that these short-term risks demand immediate attention alongside the longer-term existential threats.
Looking ahead, Hinton urged the AI community, policymakers, and the public to act "with the utmost seriousness," recognizing that humanity stands "at a significant crossroads" where the future may depend on developing AI with "motherly instincts": benevolent, nurturing behaviors that prioritize human well-being.
This candid reflection from one of AI’s founding figures reinforces calls from experts worldwide to weigh both the promise and peril of AI technology responsibly, shaping its evolution through research, regulation, and global collaboration.
For continuing coverage on AI developments, safety debates, and technological impacts, follow DAILY MENTOR NEWS.