Will robots take our jobs? When will driverless cars become the norm? How is Industry 4.0 transforming manufacturing? These were just some of the issues addressed at CogX in London last month. Held in association with The Alan Turing Institute, CogX 17 was an event bringing together thought leaders across more than 20 industries and domains to address the impact of artificial intelligence on society. To round off the proceedings, a prestigious panel of judges recognized some of the best contributions to innovation in AI in an awards ceremony.
In his keynote speech, Lord David Young, a former UK Secretary of State for Trade and Industry, argued that workers should not worry about being made unemployed by robots because, he said, most of the jobs that would be killed off were miserable anyway.
He told the conference that more jobs than ever would be automated in the future, but that this should be welcomed. “When the Spinning Jenny first came in, it was almost exactly the same,” he said. “They thought it was going to kill employment. We may have a problem one day if the Googles of this world continue to get bigger and the Amazons spread into all sorts of things, but government has the power to regulate that, has the power to break it up.”
Google’s DeepMind and UK hospitals made illegal deal for health data, says watchdog
A deal between UK hospitals and Google’s AI subsidiary DeepMind “failed to comply with data protection law,” according to the UK’s data watchdog. The Information Commissioner’s Office (ICO) made its ruling today after a year-long investigation into the agreement, which saw DeepMind process 1.6 million patient records belonging to UK citizens for the Royal Free Trust — a group of three London hospitals.
The deal was originally struck in 2015, and has since been superseded by a new agreement. At the time, DeepMind and the Royal Free said the data was being shared to develop an app named Streams, which would alert doctors if patients were at risk from a condition called acute kidney injury. An investigation by the New Scientist revealed that the terms of the agreement were broader than had originally been implied. DeepMind has since made new deals to deploy Streams in other UK hospitals.
Today, the ICO said it had found “a number of shortcomings” with the agreement, particularly that patients had not been fully briefed on how their personal data would be used. In a press statement, the UK’s information commissioner Elizabeth Denham said that the “price of innovation does not need to be the erosion of fundamental privacy rights.”
Patients were not asked if they consented to having their medical data processed by DeepMind. The information shared included details of drug overdoses, abortions, and whether individuals were HIV positive. DeepMind and the Royal Free have argued that patients had given “implied consent” to the sharing, because the information would be used to deliver “direct care” via the Streams app.
Today’s ruling suggests that the two institutions did not go far enough. “Patients would not have reasonably expected their information to have been used in this way, and the Trust could and should have been far more transparent with patients as to what was happening,” said Denham.
The contract was always clear that no private data would ever be shared with DeepMind’s parent company Google, which bought the firm in 2014. Nor would machine learning or AI tools be used to analyze the information. (DeepMind is, however, involved in two separate deals with UK hospitals to develop AI-powered algorithms for improving cancer treatment and eye disease.)
DeepMind says it welcomes the ICO’s “thoughtful resolution” of the case, and admits it made a number of mistakes in its original deal. The company says it should have better explained the deal to patients and the public, and that it “underestimated the complexity of the NHS and of the rules around patient data.”
In a blog post by the ICO, the watchdog stated that in the rush to innovate, institutions like the Royal Free Trust should not forget to follow the law. The Trust has been asked to sign a new agreement committing it to act in accordance with the law and commission an audit of the 2015 trial. “When you’re setting out to test the clinical safety of a new service, remember that the rules are there for a reason,” writes Denham.
Robots are already widely used in manufacturing to handle large-scale and dangerous work. The panel also discussed other possibilities AI offers, such as virtual personal assistants that help workers complete their daily tasks, and smart technologies such as 3D printing, with their benefits for smaller companies.
Even our entertainment these days is driven by AI. The Industry 4.0 session ended on a lighter note with Limor Schweitzer, CEO at RoboSavvy, encouraging Franky the robot to show the audience its dance moves. Sophia, a humanlike robot created by Hanson Robotics, also provided entertainment at the CogX awards ceremony; “she” announced the nominees and winners in the category of best innovation in artificial general intelligence, which included my company Sherpa, Alphabet’s DeepMind, and Vicarious.
CogX also touched on the impact of AI on health, HR, education, legal services, fintech, and many other sectors. Panelists were in agreement that advances in AI must benefit all of us. While there are still many question marks about regulation of the sector, AI already permeates all aspects of our society.