In the realm of technological advancements, superintelligence stands as a groundbreaking concept that has the potential to reshape our world in ways we can only begin to imagine. It possesses the power to tackle humanity’s most pressing problems, but it also carries inherent risks that demand our attention.
“Superintelligence will be the most impactful technology humanity has ever invented,” OpenAI writes.
“Superintelligence” refers to an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. OpenAI’s mission is to ensure that artificial general intelligence (AGI) benefits all of humanity, where AGI refers to highly autonomous systems that outperform humans at most economically valuable work.
ChatGPT doesn’t possess superintelligence. While it is a large language model trained on a diverse range of internet text and can generate text based on prompts, it doesn’t actually understand the text it’s generating, nor does it have consciousness, emotions, or beliefs. It can’t make decisions or plans, and it doesn’t have a worldview or personal experiences. It’s fundamentally a tool that responds based on patterns in the data it was trained on.
As OpenAI continues to develop more advanced AI models like ChatGPT, they remain committed to long-term safety and technical leadership, focusing on ensuring that any influence over AGI’s deployment is used for the benefit of everyone. They also work to avoid enabling uses of AI or AGI that could harm humanity or unduly concentrate power.
As we explore the possibilities and implications of superintelligence, it becomes clear that managing these risks and ensuring its alignment with human values will require innovative solutions and careful deliberation. OpenAI has today announced that they are putting together a team dedicated to keeping this superintelligence in check. In the announcement, OpenAI explains:
We are assembling a team of top machine learning researchers and engineers to work on this problem. We are dedicating 20% of the compute we’ve secured to date over the next four years to solving the problem of superintelligence alignment. Our chief basic research bet is our new Superalignment team, but getting this right is critical to achieve our mission and we expect many teams to contribute, from developing new methods to scaling them up to deployment.
Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence.
To align the first automated alignment researcher, we will need to:

1. Develop a scalable training method.
2. Validate the resulting model.
3. Stress test our entire alignment pipeline.
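The three steps above can be sketched as a loop in toy Python. Everything here is a hypothetical stand-in for illustration only; OpenAI has not published code for this plan, and the function names, scores, and thresholds below are invented, not part of the announcement:

```python
# Illustrative sketch of the announced three-step pipeline.
# All functions and values are hypothetical stand-ins, not OpenAI's method.

def train_with_scalable_oversight(model, supervisor):
    """Step 1: train using a (weaker) supervisor model instead of direct
    human labels -- one commonly discussed form of scalable oversight."""
    model["aligned_score"] = supervisor["quality"] * 0.9  # toy stand-in
    return model

def validate(model, threshold=0.5):
    """Step 2: check the trained model against held-out evaluations."""
    return model["aligned_score"] >= threshold

def stress_test(model, adversarial_cases):
    """Step 3: probe the whole pipeline with deliberately hard cases."""
    return all(model["aligned_score"] > c["difficulty"] for c in adversarial_cases)

candidate = {"aligned_score": 0.0}
supervisor = {"quality": 0.8}

candidate = train_with_scalable_oversight(candidate, supervisor)
print(validate(candidate))                                          # True
print(stress_test(candidate, [{"difficulty": 0.3}, {"difficulty": 0.6}]))  # True
```

The point of the sketch is only the structure: training, validation, and stress testing are separate gates, and a candidate must pass all three before the pipeline is trusted.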
We expect our research priorities will evolve substantially as we learn more about the problem and we’ll likely add entirely new research areas. We are planning to share more on our roadmap in the future.
Source: OpenAI