UK Government Warns of Dangerous AI Systems and Calls for Strict Regulations
The UK government's AI Council warns that powerful artificial-general-intelligence systems may need to be banned due to concerns over safety.
AGI systems, which aim to be as smart as or smarter than humans across a broad range of tasks, require strong transparency and audit requirements and more built-in safety technology.
The next six months to a year will require "sensible decisions" on AGI.
Narrow AI, used for specific tasks, could be regulated like existing technology, while AGI systems need different rules.
There is a risk that creating machines as smart as or smarter than humans could be dangerous, and there should be strong limits on the amount of computing power devoted to these systems.
Some argue that concerns around AGI distract from existing problems with technology, but Mr Warner believes safety matters for both existing and new technologies.
The UK could find a competitive advantage in encouraging safety.
The UK government's recent White Paper on regulating AI was criticised for failing to set up a dedicated watchdog, but Prime Minister Rishi Sunak has outlined the need for "guardrails". The EU Artificial Intelligence Act is currently going through the legislative process and is expected to take two to three years to come fully into effect.
However, an industry-led voluntary code of conduct will be developed within weeks.
The US-EU Trade and Technology Council will also work on establishing voluntary codes of conduct, which will be open to a wide universe of countries with similar goals.