AI program could check blood for signs of lung cancer

Scientists hope that if software passes trials it could boost screening rates

Scientists have developed an artificial intelligence program that can screen people for lung cancer by analysing their blood for DNA mutations that drive the disease.

The software is experimental and needs to be verified in a clinical trial, but doctors are hopeful that if it proves its worth at scale, it will boost lung cancer screening rates by making the procedure as simple as a routine blood test.

Are flying taxis ready for lift-off?

To supporters, they are the solution to congestion. To critics, they’re just billionaires’ toys. So are they the answer to urban travel?

It’s right up there with meal pills, jetpacks, robot butlers and colonies on Mars. Since at least 1962, when the TV cartoon characters George, Jane, Elroy and Judy Jetson first took to the skies, flying cars have been a staple of speculative visions of the future. Designs for dozens of small, affordable, personal flying machines were unveiled in the latter half of the 20th century. Few became airborne and none took commercial flight.

Now, however, a form of flying car is set to escape the clutches of eccentrics and the confines of science fiction. A handful of well-funded startups, some backed by major aviation and car companies, have carried out test flights of electric vertical take-off and landing (eVTOL) aircraft. Piloted air taxi and shuttle services are expected before 2025. Uber says it expects to be operating aircraft without pilots by around 2030.

‘It’s a war between technology and a donkey’ – how AI is shaking up Hollywood

The film business used to run on hunches. Now, data analytics is far more effective than humans at predicting hits and eliminating flops. Is this a brave new world – or the death knell of creativity?

If Sunspring is anything to go by, artificial intelligence in film-making has some way to go. This short film, made as an entry to Sci-Fi London’s 48-hour film-making competition in 2016, was written entirely by an AI. The director, Oscar Sharp, fed a few hundred sci-fi screenplays into a long short-term memory recurrent neural network (the type of software behind predictive text in a smartphone), then told it to write its own. The result was almost, but not quite, incoherent nonsense, riddled with cryptic non sequiturs, bizarre turns of phrase and unfathomable stage directions such as “he is standing in the stars and sitting on the floor”. All of which Sharp and his actors filmed with sincere commitment.
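
The underlying recipe is now a textbook one: train a network to predict the next character of a corpus, then sample from it one character at a time, feeding each guess back in. Below is a minimal, self-contained PyTorch sketch of that character-level LSTM technique – not Sharp’s actual pipeline; the screenplays.txt corpus and every hyperparameter are placeholder assumptions.

```python
# A minimal character-level LSTM text generator in PyTorch -- an illustrative
# sketch of the technique, NOT the film-makers' actual pipeline. The file
# "screenplays.txt" and all hyperparameters are placeholder assumptions.
import torch
import torch.nn as nn

corpus = open("screenplays.txt").read()        # hypothetical training corpus
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}     # character -> integer id
itos = {i: c for c, i in stoi.items()}         # integer id -> character
data = torch.tensor([stoi[c] for c in corpus])

class CharLSTM(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state             # logits over the next character

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)

# Training: at every position of a random 128-character window, predict the
# character that follows it.
for step in range(1000):
    i = torch.randint(0, len(data) - 129, (1,)).item()
    x, y = data[i:i + 128][None], data[i + 1:i + 129][None]
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Generation: start from a seed (its characters must occur in the corpus) and
# repeatedly sample the next character, feeding it back in.
def generate(seed="INT. ", n=500, temp=0.8):
    x = torch.tensor([[stoi[c] for c in seed]])
    state, out = None, seed
    for _ in range(n):
        logits, state = model(x, state)
        probs = torch.softmax(logits[0, -1] / temp, dim=-1)
        nxt = torch.multinomial(probs, 1).item()
        out += itos[nxt]
        x = torch.tensor([[nxt]])
    return out

print(generate())
```

Sampled at a moderate temperature, such a model produces exactly the Sunspring effect: text that is locally fluent, character by character, but follows no larger plan.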

“In a future with mass unemployment, young people are forced to sell blood,” says a man in a shiny gold jacket. “You should see the boy and shut up. I was the one who was going to be a hundred years old,” replies a woman fiddling with some electronics. The man vomits up an eyeball. A second man says: “Well, I have to go to the skull.” And so forth. An unwitting viewer might be unsure whether they were watching meaningless nonsense or a lost Tarkovsky script.

Visa applications: Home Office refuses to reveal ‘high risk’ countries

Campaigners criticise decision not to reveal data in algorithm that filters UK visa applications

Campaign groups have criticised the Home Office after it refused to release details of which countries are deemed a “risk” in an algorithm that filters UK visa applications.

In its response to their legal challenge over the artificial intelligence programme, the Home Office sent campaigners for immigrants’ rights a list of the nations in each category of “risk” – with every entry blacked out.

AI system outperforms experts in spotting breast cancer

Program developed by Google Health tested on mammograms of UK and US women

An artificial intelligence program has been developed that is better at spotting breast cancer in mammograms than expert radiologists.

The AI outperformed the specialists by detecting cancers that the radiologists missed in the images, while ignoring features they falsely flagged as possible tumours.

Go game master quits saying machines ‘cannot be defeated’

Lee Se-dol retires from Chinese strategy game after playing against Google algorithm

The only human ever to beat Google’s algorithm at the ancient Chinese strategy game Go has said he decided to retire because machines cannot be defeated.

Lee Se-dol’s five-match showdown with Google’s artificial intelligence program AlphaGo in 2016 raised both the game’s profile and fears of computer intelligence’s seemingly limitless learning capability.

Ex-Google worker fears ‘killer robots’ could cause mass atrocities

Engineer who quit over military drone project warns AI might also accidentally start a war

A new generation of autonomous weapons or “killer robots” could accidentally start a war or cause mass atrocities, a former top Google software engineer has warned.

Laura Nolan, who resigned from Google last year in protest at being sent to work on a project to dramatically enhance US military drone technology, has called for all AI killing machines not operated by humans to be banned.

Taylor Swift threatened to sue Microsoft over its racist chatbot Tay

According to Microsoft’s president, the singer had already raised trademark objections to the name of the company’s US version of the Chinese chatbot XiaoIce before it was plugged into Twitter – and became a Nazi.

Taylor Swift has claimed ownership over many things. In 2015, she applied for trademarks for lyrics including “this sick beat” and “Nice to meet you. Where you been?” A few months later, she went further, trademarking the year of her birth, “1989”. We now know it didn’t end there. A new book reveals that, a year later, Swift claimed ownership of the name Tay – and threatened to sue Microsoft for infringing it.

In the spring of 2016, Microsoft announced plans to bring a chatbot it had developed for the Chinese market to the US. The chatbot, XiaoIce, was designed to have conversations on social media with teenagers and young adults. Users developed a genuine affinity for it, and would spend a quarter of an hour a day unloading their hopes and fears to a friendly, yet non-judgmental ear.

Apple made Siri deflect questions on feminism, leaked papers reveal

Exclusive: voice assistant’s responses were rewritten so it never says word ‘feminism’

An internal project to rewrite how Apple’s Siri voice assistant handles “sensitive topics” such as feminism and the #MeToo movement advised developers to respond in one of three ways: “don’t engage”, “deflect” and finally “inform”.

The project saw Siri’s responses explicitly rewritten to ensure that the service would say it was in favour of “equality”, but never say the word feminism – even when asked direct questions about the topic.

The race to create a perfect lie detector – and the dangers of succeeding

AI and brain-scanning technology could soon make it possible to reliably detect when people are lying. But do we really want to know? By Amit Katwala

We learn to lie as children, between the ages of two and five. By adulthood, we are prolific. We lie to our employers, our partners and, most of all, one study has found, to our mothers. The average person hears up to 200 lies a day, according to research by Jerry Jellison, a psychologist at the University of Southern California. The majority of the lies we tell are “white”, the inconsequential niceties – “I love your dress!” – that grease the wheels of human interaction. But most people tell one or two “big” lies a day, says Richard Wiseman, a psychologist at the University of Hertfordshire. We lie to promote ourselves, protect ourselves and to hurt or avoid hurting others.

The mystery is how we keep getting away with it. Our bodies expose us in every way. Hearts race, sweat drips and micro-expressions leak from small muscles in the face. We stutter, stall and make Freudian slips. “No mortal can keep a secret,” wrote the psychoanalyst in 1905. “If his lips are silent, he chatters with his fingertips. Betrayal oozes out of him at every pore.”

A ‘deep fake’ app will make us film stars – but will we regret our narcissism?

Users of Zao can now add themselves into the scenes of their favourite movies. But is our desire to insert ourselves into everything putting our privacy at risk?

“You oughta be in pictures,” goes the 1934 Rudy Vallée song. And, as of last week, pretty much anyone can be. The entry requirements for being a star fell dramatically thanks to the launch, in China, of a face-swapping app that can decant users into film and TV clips.

Zao, which has quickly become China’s most downloaded free app, fuses the face in the original clip with your features. All that is required is a single selfie and the man or woman in the street is transformed into a star of the mobile screen, if not quite the silver one. In other words, anyone who yearns to be part of Titanic or Game of Thrones, The Big Bang Theory or the latest J-Pop sensation can now bypass the audition and go straight to the limelight without all that pesky hard work, talent and dedication. A whole new generation of synthetic movie idols could be unleashed upon the world: a Humphrey Bogus, a Phony Curtis, a Fake Dunaway.

Apple halts practice of contractors listening in to users on Siri

Tech firm to review virtual assistant ‘grading’ programme after Guardian revelations

Apple has suspended its practice of having human contractors listen to users’ Siri recordings to “grade” them, following a Guardian report revealing the practice.

The company said it would not restart the programme until it had conducted a thorough review of the practice. It has also committed to adding the ability for users to opt out of the quality assurance scheme altogether in a future software update.

Robocrop: world’s first raspberry-picking robot set to work

Autonomous machine expected to pick more than 25,000 raspberries a day, outpacing human workers

Quivering and hesitant, like a spoon-wielding toddler trying to eat soup without spilling it, the world’s first raspberry-picking robot is attempting to harvest one of the fruits.

After sizing it up for an age, the robot plucks the fruit with its gripping arm and gingerly deposits it into a waiting punnet. The whole process takes about a minute for a single berry.

Amazon staff listen to customers’ Alexa recordings, report says

Staff review audio in effort to help AI-powered voice assistant respond to commands

When Amazon customers speak to Alexa, the company’s AI-powered voice assistant, they may be heard by more people than they expect, according to a report.

Amazon employees around the world regularly listen to recordings from the company’s smart speakers as part of the development process for new services, Bloomberg News reports.

Facebook to use AI to stop telling users to say hi to dead friends

Algorithmic features have sent suggestions to wish happy birthday to those who’ve died

Facebook has promised to use artificial intelligence to stop suggesting users invite their dead friends to parties.

The site’s freshly emotionally intelligent AI is part of a rash of changes to how Facebook handles “memorialised” accounts – pages whose owner has been reported deceased, but that are kept on the social network in their memory.

The rise of the killer robots – and the two women fighting back

Jody Williams and Mary Wareham were leading lights in the campaign to ban landmines. Now they have autonomous weapons in their sights

It sounds like something from the outer reaches of science fiction: battlefield robots waging constant war, algorithms that determine who to kill, face-recognition fighting machines that can ID a target and take it out before you have time to say “Geneva conventions”.

This is no film script, however, but an ominous picture of future warfare that is moving ever closer. “Killer robots” is shorthand for a range of tech that has generals salivating and peace campaigners terrified at the ethical ramifications of warfare waged via digital proxies.

New AI fake text generator may be too dangerous to release, say creators

The Elon Musk-backed nonprofit company OpenAI declines to release research publicly for fear of misuse

The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed “deepfakes for text” – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.

OpenAI, a nonprofit research company backed by Elon Musk, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public, in order to allow more time to discuss the ramifications of the technological breakthrough.
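
GPT2 was eventually released in stages, with the full model made public later in 2019, so the capability the article describes can now be reproduced in a few lines through Hugging Face’s transformers library. A minimal sampling sketch follows; the prompt is our own illustrative choice, not one of OpenAI’s.

```python
# Sampling from the since-released GPT-2 weights via Hugging Face's
# transformers library -- a minimal sketch, not OpenAI's original code.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In a shocking finding, scientists discovered"   # illustrative prompt
ids = tok(prompt, return_tensors="pt").input_ids

# Top-k sampling keeps continuations varied without drifting into gibberish.
out = model.generate(
    ids,
    max_length=100,
    do_sample=True,
    top_k=50,
    pad_token_id=tok.eos_token_id,   # silences the missing-pad-token warning
)
print(tok.decode(out[0], skip_special_tokens=True))
```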
