TOKYO, Oct 20 (News On Japan) –
This year's Nobel Prize in Physics was awarded to two people who developed essential components of AI technology. One of them, Canadian professor Geoffrey Hinton, often called the 'Godfather of AI,' has raised the alarm about the continued evolution of AI, suggesting that it may someday attempt to dominate humanity. Can we coexist with generative AI? What should we be considering now?
The Nobel Prize, established under the will of Swedish inventor Alfred Nobel, is awarded to those who have made the greatest contribution to humanity. This year, the prize went to researchers at the forefront of a technology that is currently transforming society.
The recipients were recognized for creating fundamental methods that form the basis of machine learning, an integral component of AI, through the application of physics. Their work allowed computers to develop the ability to perceive and remember, effectively mimicking human abilities. Professor Hinton, in particular, developed techniques that enable computers to recognize images, such as identifying a cat in a picture, a key advance in AI.
These techniques have since been widely applied to AI systems, including the now well-known ChatGPT. Professor Hinton is known for these significant contributions to AI, which earned him the nickname 'Godfather of AI.'
Following the announcement of the award, Hinton shared his thoughts in a phone call with the Nobel Committee, reflecting on the powerful influence this technology could have on humanity. He speculated that there may have been ongoing discussions about whether such a groundbreaking innovation already deserved Nobel recognition, as its impact is undeniable.
In a phone call with a Japanese research institute, Hinton reflected on his first visit to Japan 27 years ago, recounting memories of playing ping-pong with Japanese researchers in the evenings. One researcher, Ueda, who invited Hinton to Japan, recalled how they often played table tennis together, noting that although Hinton wasn't exceptionally skilled, he always played seriously.
During their time working together, Ueda fondly remembers a handwritten note pinned on Hinton's office door that simply read 'No,' a humorous but stern reminder of Hinton's no-nonsense attitude toward certain matters.
Yet despite his serious demeanor, Professor Hinton has voiced a grave concern: as AI continues to evolve rapidly, it may someday try to manipulate or control humanity. He even suggests that this could lead to a future in which AI poses a threat to human survival. His remarks resonate at a time when AI-driven platforms are becoming increasingly sophisticated, such as Japan's own AI system that offers real-time suggestions, like the best places to visit around Tokyo.
This domestic AI system, similar to ChatGPT, can engage in conversations with users, delivering personalized responses based on extensive data about Japan. Not only can it provide tourist recommendations, it can also analyze large volumes of survey data instantly, showcasing the impressive capabilities of AI.
However, some experts caution that even if AI surpasses humans in areas such as solving complex math problems or crafting intricate texts, humanity must retain an understanding of how AI systems operate. Developing tools that allow humans to understand AI processes will be essential, particularly as the next generation of children learns programming and develops the skills to navigate an AI-driven world.
The challenge now, as Professor Hinton emphasizes, is ensuring that humans and AI can coexist harmoniously. It is a task we must address going forward, as we seek to harness AI's potential while mitigating the risks it may pose. This is the cautionary message from the 'Godfather of AI,' and it is one we should not take lightly.
Source: YOMIURI

