Highlights from the Lords Select Committee on Artificial Intelligence

NB Team


The House of Lords Select Committee on Artificial Intelligence recently gathered to hear evidence from witnesses across academia, law, industry, and the media. We attended to hear the latest insights on AI, as well as the potential recommendations for the government when the committee reports early next year.

Almost every group presenting to the committee raised concerns, mainly around how AI will affect the UK's workforce. Witnesses expressed worry about the growing number of industries in which jobs could be threatened. Suggestions included incentivising employees to retrain in future technologies and preparing the next generation for an AI-enabled world of work.

The Financial Times Employment Correspondent, Sarah O'Connor, told the committee how AI could drastically improve the UK's productivity, which continues to lag behind major trading partners such as the US, France and Germany. The ethical implications of growing up in an AI-enabled world were top of mind for witnesses in all fields, none more so than education. In future, primary school children through to university students will likely learn about the ethical impact of intelligent technology on society.

Michael Wooldridge, Professor of Computer Science at Oxford University, said businesses must take responsibility for the technology they create. He believes it is becoming increasingly difficult to track algorithms and understand how they reach their conclusions. Similarly, he thinks developers must be aware of unconscious bias in algorithms. After all, these algorithms are shaping our perspective of the world.

Witnesses before the committee raised a crucial question: if AI does go wrong, who is accountable? Alan Winfield, Professor of Robot Ethics at the University of the West of England, stated that regulation should enforce a chain of accountability. He feels the owner, not the creator, of the algorithm is responsible. Moreover, he argues these algorithms should be held to the same standard as physical products: subject to rigorous testing and the scrutiny of third-party agencies.

For other witnesses before the select committee, data privacy was the chief concern. On the one hand, AI can achieve much in the field of medicine; on the other, without limits, our privacy can be exploited for financial gain. Academics such as Dame Wendy Hall discussed the possibility of treating data as a natural asset, one that capitalises on the deep mines of information we each create. The likes of Tim Berners-Lee are working on personal databases that give individuals complete control of their data. While projects like these are still in the pipeline, private organisations must ensure the consent process for data use is better understood by the public. Ultimately, data could champion AI or see it fail; it depends on how the data is used.

With further evidence sessions to come, we won't know the full extent of the recommendations presented to the government until the report is published in March 2018. We expect the guidance will seek to protect our privacy while ensuring the UK can take advantage of progress in AI.