
UK govt to work with AI firms

Sunak meets tech leaders – OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis and Anthropic’s Dario Amodei – to discuss the risks AI poses…reports Asian Lite News

Prime Minister Rishi Sunak and the bosses of leading AI companies OpenAI, Google DeepMind, and Anthropic will work together to ensure society benefits from the transformational technology, they said in a statement after meeting on Wednesday.

Sunak and the tech leaders – OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis and Anthropic’s Dario Amodei – discussed the risks AI poses, from disinformation and national security to existential threats, according to the statement.

They also discussed safety measures, voluntary actions that labs are considering to manage the risks, and the possible avenues for international collaboration on AI safety and regulation, it added.

Britain said in March it would split responsibility for governing AI between its existing regulators for human rights, health and safety, and competition, rather than creating a new body dedicated to the technology.

Since the start of 2023, a spate of legal challenges has been initiated against generative AI companies – including the makers of Stable Diffusion and Midjourney and the Microsoft-backed OpenAI – over alleged breaches of copyright law arising from their use of potentially protected material to train their models.

 The Prime Minister Rishi Sunak meets with Demis Hassabis, CEO DeepMind, Dario Amodei, CEO Anthropic, and Sam Altman, CEO OpenAI, in 10 Downing Street. Picture by Simon Walker / No 10 Downing Street

In his review, the government’s outgoing chief scientific adviser, Sir Patrick Vallance, said that enabling generative AI companies in the UK to mine data, text and images would attract investment, support company formation and growth, and show international leadership.

“If the government’s aim is to promote an innovative AI industry in the UK, it should enable mining of available data, text, and images (the input) and utilise existing protections of copyright and IP law on the output of AI. There is an urgent need to prioritise practical solutions to the barriers faced by AI firms in accessing copyright and database materials,” it said.

“To increase confidence and accessibility of protection to copyright holders of their content as permitted by law, we recommend that the government requires the IPO [Intellectual Property Office] to provide clearer guidance to AI firms as to their legal responsibilities, to coordinate intelligence on systematic copyright infringement by AI, and to encourage development of AI tools to help enforce IP rights.”

Responding to the review, the government said the IPO will be tasked with producing a code of practice by summer 2023, “which will provide guidance to support AI firms to access copyrighted work as an input to their models, whilst ensuring there are protections (e.g. labelling) on generated output to support right holders of copyrighted work”.

It added that the IPO will convene a group of AI firms and rights holders to “identify barriers faced by users of data mining techniques”, and that any AI firm which commits to the code “can expect to be able to have a reasonable licence offered by a rights holder in return”.

Meanwhile, the UK competition watchdog has fired a shot across the bows of companies racing to commercialise artificial intelligence technology, announcing a review of the sector as fears grow over the spread of misinformation and major disruption in the jobs market.

As pressure builds on global regulators to increase their scrutiny of the technology, the Competition and Markets Authority said it would look at the underlying systems, or foundation models, behind AI tools such as ChatGPT. The initial review, described by one legal expert as a “pre-warning” to the sector, will publish its findings in September.

In the US, the vice-president, Kamala Harris, has invited the chief executives of leading AI firms – OpenAI, the maker of ChatGPT, along with Microsoft and Google-owner Alphabet – to the White House on Thursday to discuss how to deal with the safety concerns around the technology.


The Federal Trade Commission, which oversees competition in the US, has signalled it is also watching closely, saying this week that its staff were “focusing intensely” on how companies might choose to use AI technology in ways that could have an “actual and substantial impact on consumers”. Meanwhile, the Italian data watchdog lifted a temporary ban on ChatGPT last week after OpenAI addressed concerns over data use and privacy.

Vallance has also urged ministers to “get ahead” of the profound social and economic changes that could be triggered by AI, saying the impact on jobs could be as big as that of the Industrial Revolution.

ChatGPT and Google’s rival Bard service are prone to delivering false information in response to users’ prompts, while the anti-misinformation outfit NewsGuard said this week that chatbots pretending to be journalists were running almost 50 AI-generated “content farms”.

The CMA review will look at how the markets for foundation models could evolve and what opportunities and risks there are for consumers and competition, and will formulate “guiding principles” to support competition and protect consumers.

The major players in AI are Microsoft, OpenAI – in which Microsoft is an investor – and Alphabet, which owns a world-leading AI business in UK-based DeepMind, while leading AI startups include Anthropic and Stability AI, the British company behind Stable Diffusion.

