Sherry Turkle: ‘The pandemic has shown us that people need relationships’

The acclaimed writer on technology and its effect on our mental health talks about her memoir and the insights Covid has given her

Sherry Turkle, 72, is professor of the social studies of science and technology at Massachusetts Institute of Technology. She was one of the first academics to examine the impact of technology on human psychology and society. She has published a series of acclaimed books: her latest, The Empathy Diaries, is an enthralling memoir taking in her time growing up in Brooklyn, her thorny family background, studying in Paris and at Harvard, and her academic career.

It’s quite unusual for an academic to place themselves at the centre of the story. What was your motivation for writing a memoir?
I see the memoir as part of a trilogy. I wrote a book called Alone Together, in which I diagnosed a problem: technology was creating a stumbling block to empathy – we are always distracted, always elsewhere. Then I wrote a book called Reclaiming Conversation, which was to say: here is a path forward to reclaiming that attention through a very old human means, which is giving one another our full attention and talking. I see this book as putting into practice a conversation with myself of the most intimate nature, to share what you can learn about your history, about increasing your compassion for yourself and your ability to be empathic with others.

Continue reading...

‘Typographic attack’: pen and paper fool AI into thinking apple is an iPod

OpenAI’s Clip system fails to correctly decipher images when words are pasted on picture

As artificial intelligence systems go, it is pretty smart: show Clip a picture of an apple and it can recognise that it is looking at a fruit. It can even tell you which one, and sometimes go as far as differentiating between varieties.

But even the cleverest AI can be fooled with the simplest of hacks. If you write out the word “iPod” on a sticky label and paste it over the apple, Clip does something odd: it decides, with near certainty, that it is looking at a mid-00s piece of consumer electronics. In another test, pasting dollar signs over a picture of a dog caused it to be recognised as a piggy bank.
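Clip classifies an image by embedding it and the candidate text labels into a shared vector space and picking the label whose embedding is closest, which is why strong “written text” features can drown out the visual evidence. The pure-Python toy below is not Clip – the three-dimensional vectors and label strings are invented purely for illustration – but it shows how a cosine-similarity classifier can flip its answer once a text-like component dominates the image embedding.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical 3-d embeddings standing in for a joint image–text space.
# Axis 0 ~ "fruit-ness", axis 1 ~ "gadget-ness", axis 2 ~ "written text present".
label_embeddings = {
    "a photo of an apple": [1.0, 0.0, 0.0],
    "a photo of an iPod":  [0.0, 1.0, 0.6],
}

plain_apple    = [0.9, 0.1, 0.0]   # image of an apple
labelled_apple = [0.9, 0.1, 2.5]   # same apple with "iPod" on a sticky label

def classify(image):
    """Pick the text label whose embedding is most similar to the image."""
    return max(label_embeddings, key=lambda lbl: cosine(image, label_embeddings[lbl]))

print(classify(plain_apple))     # prints: a photo of an apple
print(classify(labelled_apple))  # prints: a photo of an iPod
```

The sticker does not change the fruit; it adds a feature the “iPod” label embedding happens to score highly on, and the nearest-label rule does the rest.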

Continue reading...

‘I don’t want to upset people’: Tom Cruise deepfake creator speaks out

Visual effects artist Christopher Ume reveals he made TikTok fakes with help from Cruise impersonator

Joining TikTok has become something of a trend for Hollywood celebrities stuck at home like everyone else. So it wasn’t necessarily surprising to see Tom Cruise on the app, sharing videos of himself playing golf and pratfalling around the house.

But the strange thing is that Cruise never actually made the videos. And the account that posted them, DeepTomCruise, wore that on its sleeve: it was openly the work of a talented creator of “deepfakes”, AI-generated video clips that use a variety of techniques to create situations that have never happened in the real world.

Continue reading...

Deep Nostalgia: ‘creepy’ new service uses AI to animate old family photos

Service from MyHeritage uses deep learning technique to automatically animate faces

Deep Nostalgia, a new service from the genealogy site MyHeritage that animates old family photos, has gone viral on social media, in another example of how AI-based image manipulation is becoming increasingly mainstream.

Launched in late February, the service uses an AI technique called deep learning to automatically animate faces in photos uploaded to the system. Because of its ease of use and free trial, it soon took off on Twitter, where users uploaded animated versions of old family photos, celebrity pictures, and even drawings and illustrations.

Continue reading...

South Korean AI chatbot pulled from Facebook after hate speech towards minorities

Lee Luda, built to emulate a 20-year-old Korean university student, used homophobic slurs on social media

A popular South Korean chatbot has been suspended after complaints that it used hate speech towards sexual minorities in conversations with its users.

Lee Luda, an artificial intelligence (AI) persona of a 20-year-old female university student, was removed from Facebook Messenger this week after attracting more than 750,000 users in the 20 days since it was launched.

Continue reading...

Sci-fi surveillance: Europe’s secretive push into biometric technology

EU science funding is being spent on developing new tools for policing and security. But who decides how far we need to submit to artificial intelligence?

Patrick Breyer didn’t expect to have to take the European commission to court. The softly spoken German MEP was startled when in July 2019 he read about a new technology to detect from facial “micro-expressions” when somebody is lying while answering questions.

Even more startling was that the EU was funding research into this virtual mindreader through a project called iBorderCtrl, for potential use in policing Europe’s borders. In the article that Breyer read, a reporter described taking a test on the border between Serbia and Hungary. She told the truth, but the AI border guard said she had lied.

Continue reading...

Facial recognition for pigs: Is it helping Chinese farmers or hurting the poorest?

Automation is revolutionising China’s pork farms but leaving independent farmers behind

A slender snout. Shapely, upright ears. Like humans, pigs have idiosyncratic faces, and new players in the Chinese pork market are taking notice, experimenting with increasingly sophisticated versions of facial recognition software for pigs.

China is the world’s largest producer of pork, and is set to increase production next year by 9%. As the nation’s pork farms grow in scale, more farmers are turning to AI systems like facial recognition technology – known as FRT – to continuously monitor, identify, and even feed their herds.

Continue reading...

RoboDoc: how India’s robots are taking on Covid patient care

The pandemic has spurred on robotics companies building machines to perform tasks in hospitals and other industries

Standing just 5ft tall, Mitra navigates around the hospital wards, guided by facial recognition technology and with a chest-mounted tablet that allows patients and their loved ones to see each other.

Developed in recent years by the Bengaluru startup Invento Robotics, Mitra costs around $13,600 (£10,000) and – due to the reduced risk of infection to doctors – has become hugely popular in Indian hospitals during the pandemic.

Continue reading...

DeepMind AI cracks 50-year-old problem of protein folding

Program solves scientific problem in ‘stunning advance’ for understanding machinery of life

Having risen to fame on its superhuman performance at playing games, the artificial intelligence group DeepMind has cracked a serious scientific problem that has stumped researchers for half a century.

With its latest AI program, AlphaFold, the company and research laboratory showed it can predict how proteins fold into 3D shapes, a fiendishly complex process that is fundamental to understanding the biological machinery of life.

Continue reading...

Should robots have faces? – video

Many robots are designed with a face – yet don't use their 'eyes' to see, or speak through their 'mouth'. Given that some of the more realistic humanoid robots are widely considered to be unnerving, and that humans have a propensity to anthropomorphise such designs, should robots have faces at all – or do these faces provide other important functions? And what should they actually look like anyway?

Continue reading...

Hackers HQ and Space Command: how UK defence budget could be spent

Creation of specialist cyber force and artificial intelligence unit in pipeline

A specialist cyber force of several hundred British hackers has been in the works for nearly three years, although its creation has been partly held back by turf wars between the spy agency GCHQ and the Ministry of Defence, to which the unit is expected to jointly report.

Continue reading...

‘It’s the screams of the damned!’ The eerie AI world of deepfake music

Artificial intelligence is being used to create new songs seemingly performed by Frank Sinatra and other dead stars. ‘Deepfakes’ are cute tricks – but they could change pop for ever

“It’s Christmas time! It’s hot tub time!” sings Frank Sinatra. At least, it sounds like him. With an easy swing, cheery bonhomie, and understated brass and string flourishes, this could just about pass as some long-lost Sinatra demo. Even the voice – that rich tone once described as “all legato and regrets” – is eerily familiar, even if it does lurch between keys and, at times, sounds as if it was recorded at the bottom of a swimming pool.

The song in question is not a genuine track, but a convincing fake created by “research and deployment company” OpenAI, whose Jukebox project uses artificial intelligence to generate music, complete with lyrics, in a variety of genres and artist styles. Along with Sinatra, they’ve done what are known as “deepfakes” of Katy Perry, Elvis, Simon and Garfunkel, 2Pac, Céline Dion and more. Having trained the model using 1.2m songs scraped from the web, complete with the corresponding lyrics and metadata, it can output raw audio several minutes long based on whatever you feed it. Input, say, Queen or Dolly Parton or Mozart, and you’ll get an approximation out the other end.

Continue reading...

Vatican enlists bots to protect library from onslaught of hackers

Apostolic Library, facing 100 threats a month, wants to ensure readers can trust digitised records of its historical treasures

Ancient intellects are now being guarded by artificial intelligence following moves to protect one of the most extraordinary collections of historical manuscripts and documents in the world from cyber-attacks.

The Vatican Apostolic Library, which holds 80,000 documents of immense importance and immeasurable value, including the oldest surviving copy of the Bible and drawings and writings from Michelangelo and Galileo, has partnered with a cyber-security firm to defend its ambitious digitisation project against criminals.

Continue reading...

Robots gear up to march to the fields and harvest cauliflowers

Prototype technology could help alleviate growing shortage of human crop pickers

The job of harvesting cauliflowers could one day be in the mechanical hands of robots thanks to a collaboration between scientists and the French canned vegetable producer Bonduelle.

Fieldwork Robotics, the team behind the world’s first raspberry-picking robot, is designing a machine in a three-year collaboration launched on Monday.

Continue reading...

Microsoft sacks journalists to replace them with robots

Users of the homepages of the MSN website and Edge browser will now see news stories generated by AI

Dozens of journalists have been sacked after Microsoft decided to replace them with artificial intelligence software.

Staff who maintain the news homepages on Microsoft’s MSN website and its Edge browser – used by millions of Britons every day – have been told that they will no longer be required because robots can now do their jobs.

Continue reading...

New blood test can detect 50 types of cancer

System uses machine learning to offer new way to screen for hard-to-detect cancers

A new blood test that can detect more than 50 types of cancer has been revealed by researchers in the latest study to offer hope for early detection.

The test is based on DNA that is shed by tumours and found circulating in the blood. More specifically, it focuses on chemical changes to this DNA, known as methylation patterns.

Continue reading...

Scientists develop AI that can turn brain activity into text

Researchers in US tracked the neural data from people while they were speaking

Reading minds has just come a step closer to reality: scientists have developed artificial intelligence that can turn brain activity into text.

While the system currently works on neural patterns detected while someone is speaking aloud, experts say it could eventually aid communication for patients who are unable to speak or type, such as those with locked-in syndrome.

Continue reading...

AI program could check blood for signs of lung cancer

Scientists hope that if software passes trials it could boost screening rates

Scientists have developed an artificial intelligence program that can screen people for lung cancer by analysing their blood for DNA mutations that drive the disease.

The software is experimental and needs to be verified in a clinical trial, but doctors are hopeful that if it proves its worth at scale, it will boost lung cancer screening rates by making the procedure as simple as a routine blood test.

Continue reading...

Are flying taxis ready for lift-off?

To supporters, they are the solution to congestion. To critics, they’re just billionaires’ toys. So are they the answer to urban travel?

It’s right up there with meal pills, jetpacks, robot butlers and colonies on Mars. Since at least 1962, when the TV cartoon characters George, Jane, Elroy and Judy Jetson first took to the skies, flying cars have been a staple of speculative visions of the future. Designs for dozens of small, affordable, personal flying machines were unveiled in the latter half of the 20th century. Few became airborne and none took commercial flight.

Now, however, a form of flying car is set to escape the clutches of eccentrics and the confines of science fiction. A handful of well-funded startups, some backed by major aviation and car companies, have carried out test flights of electric vertical take-off and landing (eVTOL) aircraft. Piloted air taxi and shuttle services are expected before 2025. Uber says it expects to be operating aircraft without pilots by around 2030.

Continue reading...

‘It’s a war between technology and a donkey’ – how AI is shaking up Hollywood

The film business used to run on hunches. Now, data analytics is far more effective than humans at predicting hits and eliminating flops. Is this a brave new world – or the death knell of creativity?

If Sunspring is anything to go by, artificial intelligence in film-making has some way to go. This short film, made as an entry to Sci-Fi London’s 48-hour film-making competition in 2016, was written entirely by an AI. The director, Oscar Sharp, fed a few hundred sci-fi screenplays into a long short-term memory recurrent neural network (the type of software behind predictive text in a smartphone), then told it to write its own. The result was almost, but not quite, incoherent nonsense, riddled with cryptic non sequiturs, bizarre turns of phrase and unfathomable stage directions such as “he is standing in the stars and sitting on the floor”. All of which Sharp and his actors filmed with sincere commitment.

“In a future with mass unemployment, young people are forced to sell blood,” says a man in a shiny gold jacket. “You should see the boy and shut up. I was the one who was going to be a hundred years old,” replies a woman fiddling with some electronics. The man vomits up an eyeball. A second man says: “Well, I have to go to the skull.” And so forth. An unwitting viewer might be unsure whether they were watching meaningless nonsense or a lost Tarkovsky script.
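An LSTM like the one behind Sunspring learns to predict the next token from the ones before it, then generates by sampling that prediction over and over. A far cruder stand-in for the same idea is a first-order Markov chain that simply records which word follows which in a training text. The sketch below uses an invented placeholder corpus, not the actual competition screenplays, and a Markov chain is vastly weaker than an LSTM – but it is enough to see why such output reads as locally fluent, globally incoherent.

```python
import random
from collections import defaultdict

# Toy stand-in for next-word prediction: a first-order Markov chain.
# An LSTM learns a far richer conditional distribution, but the generation
# loop – sample the next word given recent context – has the same shape.
corpus = (
    "he is standing in the stars and sitting on the floor "
    "he is standing in a future with mass unemployment"
)

def build_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, seed, n_words=8, rng=None):
    """Sampling walk: repeatedly pick a recorded successor of the last word."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(n_words):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

model = build_model(corpus)
print(generate(model, "he"))
```

Each step only looks one word back, so every local transition is plausible while the sentence as a whole drifts – much like Sunspring’s dialogue, where individual phrases sound like science fiction but nothing connects.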

Continue reading...