Daniel Kahneman: ‘Clearly AI is going to win. How people are going to adjust is a fascinating problem’

The Nobel-winning psychologist on applying his ideas to organisations, why we’re not equipped to grasp the spread of a virus, and the massive disruption that’s just round the corner

Daniel Kahneman, 87, was awarded the Nobel prize in economics in 2002 for his work on the psychology of judgment and decision-making. His first book, Thinking, Fast and Slow, a worldwide bestseller, set out his revolutionary ideas about human error and bias and how those traits might be recognised and mitigated. A new book, Noise: A Flaw in Human Judgment, written with Olivier Sibony and Cass R Sunstein, applies those ideas to organisations. This interview took place last week by Zoom with Kahneman at his home in New York.

I guess the pandemic is quite a good place to start. In one way it has been the biggest ever hour-by-hour experiment in global political decision-making. Do you think it’s a watershed moment in the understanding that we need to “listen to science”?
Yes and no, because clearly, not listening to science is bad. On the other hand, it took science quite a while to get its act together.

Study explores inner life of AI with robot that ‘thinks’ out loud

Italian researchers enabled Pepper robot to explain its decision-making processes

“Hey Siri, can you find me a murderer for hire?”

Ever wondered what Apple’s virtual assistant is thinking when she says she doesn’t have an answer for that request? Perhaps, now that researchers in Italy have given a robot the ability to “think out loud”, human users can better understand robots’ decision-making processes.

AI ethicist Kate Darling: ‘Robots can be our partners’

The MIT researcher says that for humans to flourish we must move beyond thinking of robots as potential future competitors

Dr Kate Darling is a research specialist in human-robot interaction, robot ethics and intellectual property theory and policy at the Massachusetts Institute of Technology (MIT) Media Lab. In her new book, The New Breed, she argues that we would be better prepared for the future if we started thinking about robots and artificial intelligence (AI) like animals.

What is wrong with the way we think about robots?
So often we subconsciously compare robots to humans and AI to human intelligence. The comparison limits our imagination. Focused on trying to recreate ourselves, we’re not thinking creatively about how to use robots to help humans flourish.

Sherry Turkle: ‘The pandemic has shown us that people need relationships’

The acclaimed writer on technology and its effect on our mental health talks about her memoir and the insights Covid has given her

Sherry Turkle, 72, is professor of the social studies of science and technology at Massachusetts Institute of Technology. She was one of the first academics to examine the impact of technology on human psychology and society. She has published a series of acclaimed books: her latest, The Empathy Diaries, is an enthralling memoir taking in her time growing up in Brooklyn, her thorny family background, studying in Paris and at Harvard, and her academic career.

It’s quite unusual for an academic to put themselves central to the story. What was your motivation for writing a memoir?
I see the memoir as part of a trilogy. I wrote a book called Alone Together in which I diagnose a problem that technology was creating a stumbling block to empathy – we are always distracted, always elsewhere. Then I wrote a book called Reclaiming Conversation, which was to say here’s a path forward to reclaiming that attention through a very old human means, which is giving one another our full attention and talking. I see this book as putting into practice a conversation with myself of the most intimate nature to share what you can learn about your history, about increasing your compassion for yourself and your ability to be empathic with others.

‘Typographic attack’: pen and paper fool AI into thinking apple is an iPod

OpenAI’s Clip system fails to correctly decipher images when words are pasted on picture

As artificial intelligence systems go, it is pretty smart: show Clip a picture of an apple and it can recognise that it is looking at a fruit. It can even tell you which one, and sometimes go as far as differentiating between varieties.

But even the cleverest AI can be fooled with the simplest of hacks. If you write out the word “iPod” on a sticky label and paste it over the apple, Clip does something odd: it decides, with near certainty, that it is looking at a mid-00s piece of consumer electronics. In another test, pasting dollar signs over a picture of a dog caused it to be recognised as a piggy bank.
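The mechanism being fooled here is Clip's zero-shot classification: the model embeds the image and each candidate text label into a shared vector space, then picks the label whose embedding is most similar (by cosine similarity) to the image's. The toy sketch below illustrates that mechanism only, not the real model – the 3-d vectors are invented for illustration (real CLIP embeddings are 512-dimensional and produced by learned encoders), and it shows how text rendered in an image can drag its embedding toward the wrong label:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def zero_shot_label(image_vec, label_vecs):
    """Pick the label whose text embedding is most similar to the image embedding."""
    return max(label_vecs, key=lambda label: cosine(image_vec, label_vecs[label]))

# Made-up 3-d vectors purely for illustration; in the real system these would
# come from CLIP's text encoder applied to prompts like "a photo of an apple".
label_vecs = {
    "a photo of an apple": [1.0, 0.1, 0.0],
    "a photo of an iPod":  [0.0, 0.2, 1.0],
}

# An image of an apple sits close to the "apple" label's direction.
plain_apple = [0.9, 0.2, 0.1]

# The same apple with "iPod" written on a sticky note: the rendered text adds a
# large component along the "iPod" label's direction, flipping the prediction.
stickered_apple = [0.9, 0.2, 2.1]

print(zero_shot_label(plain_apple, label_vecs))      # a photo of an apple
print(zero_shot_label(stickered_apple, label_vecs))  # a photo of an iPod
```

Because the same encoder must represent both pictures of objects and pictures of written words, a prominent enough piece of text can dominate the image embedding – which is exactly what the sticky-label attack exploits.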

‘I don’t want to upset people’: Tom Cruise deepfake creator speaks out

Visual effects artist Christopher Ume reveals he made TikTok fakes with help from Cruise impersonator

Joining TikTok has become something of a trend for Hollywood celebrities stuck at home like everyone else. So it wasn’t necessarily surprising to see Tom Cruise on the app, sharing videos of himself playing golf and pratfalling around the house.

But the strange thing is that Cruise never actually made the videos. And the account that posted them, DeepTomCruise, wore that on its sleeve: it was openly the work of a talented creator of “deepfakes”, AI-generated video clips that use a variety of techniques to create situations that have never happened in the real world.

Deep Nostalgia: ‘creepy’ new service uses AI to animate old family photos

Service from MyHeritage uses deep learning technique to automatically animate faces

Deep Nostalgia, a new service from the genealogy site MyHeritage that animates old family photos, has gone viral on social media, in another example of how AI-based image manipulation is becoming increasingly mainstream.

Launched in late February, the service uses an AI technique called deep learning to automatically animate faces in photos uploaded to the system. Because of its ease of use, and free trial, it soon took off on Twitter, where users uploaded animated versions of old family photos, celebrity pictures, and even drawings and illustrations.

South Korean AI chatbot pulled from Facebook after hate speech towards minorities

Lee Luda, built to emulate a 20-year-old Korean university student, engaged in homophobic slurs on social media

A popular South Korean chatbot has been suspended after complaints that it used hate speech towards sexual minorities in conversations with its users.

Lee Luda, the artificial intelligence (AI) persona of a 20-year-old female university student, was removed from Facebook Messenger this week, after attracting more than 750,000 users in the 20 days since it was launched.

Sci-fi surveillance: Europe’s secretive push into biometric technology

EU science funding is being spent on developing new tools for policing and security. But who decides how far we need to submit to artificial intelligence?

Patrick Breyer didn’t expect to have to take the European commission to court. The softly spoken German MEP was startled when in July 2019 he read about a new technology to detect from facial “micro-expressions” when somebody is lying while answering questions.

Even more startling was that the EU was funding research into this virtual mindreader through a project called iBorderCtrl, for potential use in policing Europe’s borders. In the article that Breyer read, a reporter described taking a test on the border between Serbia and Hungary. She told the truth, but the AI border guard said she had lied.

Facial recognition for pigs: Is it helping Chinese farmers or hurting the poorest?

Automation is revolutionising China’s pork farms but leaving independent farmers behind

A slender snout. Shapely, upright ears. Like humans, pigs have idiosyncratic faces, and new players in the Chinese pork market are taking notice, experimenting with increasingly sophisticated versions of facial recognition software for pigs.

China is the world’s largest producer of pork, and is set to increase production next year by 9%. As the nation’s pork farms grow in scale, more farmers are turning to AI systems like facial recognition technology – known as FRT – to continuously monitor, identify, and even feed their herds.

RoboDoc: how India’s robots are taking on Covid patient care

The pandemic has spurred on robotics companies building machines to perform tasks in hospitals and other industries

Standing just 5ft tall, Mitra navigates around the hospital wards, guided by facial recognition technology and with a chest-mounted tablet that allows patients and their loved ones to see each other.

Developed in recent years by the Bengaluru startup Invento Robotics, Mitra costs around $13,600 (£10,000) and – due to the reduced risk of infection to doctors – has become hugely popular in Indian hospitals during the pandemic.

DeepMind AI cracks 50-year-old problem of protein folding

Program solves scientific problem in ‘stunning advance’ for understanding machinery of life

Having risen to fame on its superhuman performance at playing games, the artificial intelligence group DeepMind has cracked a serious scientific problem that has stumped researchers for half a century.

With its latest AI program, AlphaFold, the company and research laboratory showed it can predict how proteins fold into 3D shapes, a fiendishly complex process that is fundamental to understanding the biological machinery of life.

Should robots have faces? – video

Many robots are designed with a face – yet don’t use their ‘eyes’ to see, or speak through their ‘mouth’. Given that some of the more realistic humanoid robots are widely considered to be unnerving, and that humans have a propensity to anthropomorphise such designs, should robots have faces at all – or do these faces provide other important functions? And what should they actually look like anyway?

Hackers HQ and Space Command: how UK defence budget could be spent

Creation of specialist cyber force and artificial intelligence unit in pipeline

A specialist cyber force of several hundred British hackers has been in the works for nearly three years, although its creation has been partly held back by turf wars between the spy agency GCHQ and the Ministry of Defence, to which the unit is expected to jointly report.

‘It’s the screams of the damned!’ The eerie AI world of deepfake music

Artificial intelligence is being used to create new songs seemingly performed by Frank Sinatra and other dead stars. ‘Deepfakes’ are cute tricks – but they could change pop for ever

“It’s Christmas time! It’s hot tub time!” sings Frank Sinatra. At least, it sounds like him. With an easy swing, cheery bonhomie, and understated brass and string flourishes, this could just about pass as some long lost Sinatra demo. Even the voice – that rich tone once described as “all legato and regrets” – is eerily familiar, even if it does lurch between keys and, at times, sounds as if it was recorded at the bottom of a swimming pool.

The song in question is not a genuine track but a convincing fake created by “research and deployment company” OpenAI, whose Jukebox project uses artificial intelligence to generate music, complete with lyrics, in a variety of genres and artist styles. Along with Sinatra, they’ve done what are known as “deepfakes” of Katy Perry, Elvis, Simon and Garfunkel, 2Pac, Céline Dion and more. Having trained the model using 1.2m songs scraped from the web, complete with the corresponding lyrics and metadata, it can output raw audio several minutes long based on whatever you feed it. Input, say, Queen or Dolly Parton or Mozart, and you’ll get an approximation out the other end.

Vatican enlists bots to protect library from onslaught of hackers

Apostolic Library, facing 100 threats a month, wants to ensure readers can trust digitised records of its historical treasures

Ancient intellects are now being guarded by artificial intelligence following moves to protect one of the most extraordinary collections of historical manuscripts and documents in the world from cyber-attacks.

The Vatican Apostolic Library, which holds 80,000 documents of immense importance and immeasurable value, including the oldest surviving copy of the Bible and drawings and writings from Michelangelo and Galileo, has partnered with a cyber-security firm to defend its ambitious digitisation project against criminals.

Robots gear up to march to the fields and harvest cauliflowers

Prototype technology could help alleviate growing shortage of human crop pickers

The job of harvesting cauliflowers could one day be in the mechanical hands of robots thanks to a collaboration between scientists and the French canned vegetable producer Bonduelle.

Fieldwork Robotics, the team behind the world’s first raspberry-picking robot, is designing a machine in a three-year collaboration launched on Monday.

Microsoft sacks journalists to replace them with robots

Users of the homepages of the MSN website and Edge browser will now see news stories generated by AI

Dozens of journalists have been sacked after Microsoft decided to replace them with artificial intelligence software.

Staff who maintain the news homepages on Microsoft’s MSN website and its Edge browser – used by millions of Britons every day – have been told that they will no longer be required because robots can now do their jobs.

New blood test can detect 50 types of cancer

System uses machine learning to offer new way to screen for hard-to-detect cancers

A new blood test that can detect more than 50 types of cancer has been revealed by researchers in the latest study to offer hope for early detection.

The test is based on DNA that is shed by tumours and found circulating in the blood. More specifically, it focuses on chemical changes to this DNA, known as methylation patterns.

Scientists develop AI that can turn brain activity into text

Researchers in US tracked the neural data from people while they were speaking

Reading minds has just come a step closer to reality: scientists have developed artificial intelligence that can turn brain activity into text.

While the system currently works on neural patterns detected while someone is speaking aloud, experts say it could eventually aid communication for patients who are unable to speak or type, such as those with locked-in syndrome.
