Can AI image generators be policed to prevent explicit deepfakes of children?

With one of the largest ‘training’ datasets found to contain child sexual abuse material, are bans on creating such imagery feasible?

Child abusers are creating AI-generated “deepfakes” of their targets in order to blackmail them into filming their own abuse, beginning a cycle of sextortion that can last for years.

Creating simulated child abuse imagery is illegal in the UK, and both Labour and the Conservatives want to ban all explicit AI-generated images of real people.

New bill would force AI companies to reveal use of copyrighted art

Adam Schiff introduces bill amid growing legal battle over whether major AI companies have made illegal use of copyrighted works

A bill introduced in the US Congress on Tuesday would force artificial intelligence companies to reveal the copyrighted material they use to build their generative AI models. The legislation adds to a growing number of attempts by lawmakers, news outlets and artists to establish how AI firms use creative works such as songs, visual art, books and movies to train their software, and whether those companies are illegally building their tools on copyrighted content.

The California Democratic congressman Adam Schiff introduced the bill, the Generative AI Copyright Disclosure Act, which would require AI companies to disclose any copyrighted works in their training datasets to the Register of Copyrights before releasing new generative AI systems, which create text, images, music or video in response to users’ prompts. Companies would have to file such documents at least 30 days before publicly debuting their AI tools or face a financial penalty. Such datasets can encompass billions of lines of text and images or millions of hours of music and movies.

Australian news media could seek payment from Meta for content used to train AI

News media bargaining code could apply to tech companies using massive amounts of online information for generative AI, researchers say

Australian media companies could seek compensation from Meta for its use of online news sources in training generative AI technology, researchers have said.

When Meta announced last week that it would not sign new deals to pay for news in Australia for use on Facebook, it downplayed the value of news to its services, stating that just 3% of Facebook usage in Australia was related to news.

AI firm considers banning creation of political images for 2024 elections

Midjourney’s CEO David Holz says company close to ‘hammering’ images of Donald Trump, Joe Biden and others ‘for next 12 months’

The groundbreaking artificial intelligence image-generating company Midjourney is considering banning people from using its software to make political images of Joe Biden and Donald Trump, as part of an effort to stop its tools being used to distract from or spread misinformation about the 2024 US presidential election.

“I don’t know how much I care about political speech for the next year for our platform,” Midjourney’s CEO, David Holz, said last week, adding that the company is close to “hammering” – or banning – political images, including those of the leading presidential candidates, “for the next 12 months”.

OpenAI bans bot impersonating US presidential candidate Dean Phillips

Company removes account of developer, saying the ChatGPT-powered bot violated its policies on political campaigning

OpenAI has removed the account of the developer behind an artificial intelligence-powered bot impersonating the US presidential candidate Dean Phillips, saying it violated company policy.

Phillips, who is challenging Joe Biden for the Democratic party candidacy, was impersonated by a ChatGPT-powered bot on the dean.bot site.

New York Times sues OpenAI and Microsoft for copyright infringement

Lawsuit says companies gave NYT content ‘particular emphasis’ and ‘seek to free-ride’ on paper’s investment in its journalism

The New York Times has sued OpenAI and Microsoft over the use of its content to train generative artificial intelligence and large language model systems, a move that could see the newspaper receive billions of dollars in damages.

The copyright infringement lawsuit, filed in a Manhattan federal court on Wednesday, claims that while the companies copied information from many sources to build their systems, they give New York Times content “particular emphasis” and “seek to free-ride on the Times’s massive investment in its journalism by using it to build substitutive products without permission or payment”.

Microsoft to join OpenAI’s board after Sam Altman rehired as CEO

Altman says tech giant, which owns 49% of ChatGPT maker after investing $13bn, will take non-voting, observer position on board

Microsoft will take a non-voting, observer position on OpenAI’s board, CEO Sam Altman said in his first official missive after taking back the reins of the company on Wednesday.

The observer position means Microsoft’s representative can attend OpenAI’s board meetings and access confidential information, but it does not give Microsoft voting rights on matters such as electing or choosing directors.

OpenAI ‘was working on advanced model so powerful it alarmed staff’

Reports say new model Q* fuelled safety fears, with workers airing their concerns to the board before CEO Sam Altman’s sacking

OpenAI was reportedly working on an advanced system before Sam Altman’s sacking that was so powerful it caused safety concerns among staff at the company.

The artificial intelligence model triggered such alarm among some OpenAI researchers that they wrote to the board of directors before Altman’s dismissal, warning it could threaten humanity, Reuters reported.

Thursday briefing: What the meltdown at OpenAI means for the future of artificial intelligence

In today’s newsletter: Why Sam Altman was fired as the company’s CEO, why he was rehired – and what it all means for the field

Good morning. Boardroom coups are always pretty absorbing; a boardroom coup at a company that has set itself the lofty goal of “building artificial general intelligence (AGI) that is safe and benefits all of humanity” is more absorbing still.

The removal of chief executive Sam Altman from OpenAI, which made the artificial intelligence chatbot ChatGPT, has therefore been the subject of breathless attention since it was announced last Friday. Now Altman has got his job back, and there’s even more to get your head around.

Israel-Hamas war | Israeli officials have said a four-day Gaza truce and hostage release will not start until at least Friday, stalling a breakthrough deal to pause the bloody seven-week-old war and thwarting the hopes of families that some captives would be freed on Thursday.

Autumn statement | Jeremy Hunt sought to blunt the impact of the highest levels of taxation since the second world war with a cut in workers’ national insurance contributions, fuelling speculation about a snap spring general election. The chancellor used a fresh squeeze on public spending to pay for a reduction in NICs worth £450 a year to the average employee.

Netherlands | Geert Wilders’ far-right, anti-Islam Party for Freedom (PVV) is on course to be the largest party in the Dutch parliament, according to exit polls, in a major electoral upset whose reverberations will be felt around Europe. The PVV was predicted to win 35 seats in the 150-seat parliament, but may not be able to form a governing coalition.

Covid inquiry | Prof Sir Jonathan Van-Tam and his family were advised by police to move out of their home during the pandemic because of a threat that they would have their throats cut, he has told the Covid inquiry. Van-Tam said that he feared the scale of such abuse would put people off taking up similar roles in the future.

US and Canada | Four border crossings between the US and Canada were closed on Wednesday after a vehicle exploded at a checkpoint on a bridge near Niagara Falls, reportedly killing two people. Governor of New York Kathy Hochul said there was “no indication of a terrorist attack”.

Who is Helen Toner, the Australian woman ousted from the board of OpenAI?

Shortly before Altman was fired, he and Toner reportedly discussed a paper she had written criticising the timing of OpenAI’s release of ChatGPT

After a tumultuous few days at OpenAI, Sam Altman has returned to the helm. But who is the young Australian board member who was reportedly in dispute with the chief executive in the lead-up to his firing?

Helen Toner, along with two of the other three board members responsible for firing Altman less than a week ago, is now off the board of OpenAI.

Sam Altman’s OpenAI exit leads to rollercoaster for sector

Firing of industry figurehead led to rebellion at his former employer and his hiring by its rival Microsoft

The blog headline was anodyne – “OpenAI announces leadership transition” – but the consequences for Silicon Valley were seismic.

On Friday the company behind the hit AI text-generating system ChatGPT announced that Sam Altman, figurehead for the business and the artificial intelligence revolution that has enthralled and alarmed the world in equal measure, had been fired as chief executive.

Ousted OpenAI CEO Sam Altman ‘in talks to return at firm’s HQ’

Boss was sacked by ChatGPT developer over failure to be ‘candid in his communications’

Sam Altman is being lined up for a surprise return as the chief executive of the ChatGPT developer OpenAI amid pressure from investors to reverse his shock ousting.

The company’s board fired Altman on Friday, citing a failure to be “candid in his communications”, in a move that startled Silicon Valley.

Tech leaders agree on AI regulation but divided on how in Washington forum

Bill Gates, Sundar Pichai, Sam Altman and others gathered for ‘one of the most important conversations of the year’

A delegation of top tech leaders including Sundar Pichai, Elon Musk, Mark Zuckerberg and Sam Altman convened in Washington on Wednesday for a closed-door meeting with US senators to discuss the rise of artificial intelligence and how it should be regulated.

The discussion, billed as an “AI safety forum”, is one of several meetings between Silicon Valley, researchers, labor leaders and government, and is taking on fresh urgency with the US elections looming and the rapid pace of AI advancement already affecting people’s lives and work.

New cryptocurrency offers users tokens for scanning their eyeballs

Worldcoin, launched by the CEO of ChatGPT developer OpenAI, says scheme will distinguish between ‘verified humans’ and AI

Members of the public are being invited to have their eyeballs scanned by a silver orb as part of a cryptocurrency project that aims to use biometric verification to distinguish humans from AI systems.

People signing up to the Worldcoin scheme via an app this week will receive a “genesis grant” of 25 tokens, equivalent to about £40, after having their iris scanned by one of the bowling ball-sized devices.

Letter signed by Elon Musk demanding AI research pause sparks controversy

The statement has been found to include fake signatures, and researchers have condemned its use of their work

A letter co-signed by Elon Musk and thousands of others demanding a pause in artificial intelligence research has created a firestorm, after the researchers cited in the letter condemned its use of their work, some signatories were revealed to be fake, and others withdrew their support.

On 22 March more than 1,800 signatories – including Musk, the cognitive scientist Gary Marcus, and Apple co-founder Steve Wozniak – called for a six-month pause on the development of systems “more powerful” than GPT-4. Engineers from Amazon, DeepMind, Google, Meta and Microsoft also lent their support.

Reuters contributed to this report.
