The ethical use of data for training machine learning technology

31st July 2019

Adoption of AI technology is accelerating rapidly. Gartner has forecast that by 2020, AI will be a top-five investment priority for more than 30% of CIOs. A McKinsey study estimates that tech companies are spending somewhere between $20bn and $30bn on AI, mostly on research and development.

While the social utility of AI technology is compelling, legitimate concerns have been raised, among them by The Guardian’s Inequality Project: “when the data we feed the machines reflects the history of our own unequal society, we are, in effect, asking the program to learn our own biases.”

Bias in AI technology, and its potential unintended consequences, are gaining the attention of policymakers, technology companies and civil liberties groups. In a recent article based upon an ABA Business Law Section panel, “Examining Technology Bias: Do Algorithms Introduce Ethical & Legal Challenges?”, the panellist-authors noted that:

“With artificial intelligence, we are no longer programming algorithms ourselves. Instead, we are asking a machine to make inferences and conclusions for us… we may inadvertently teach the computer to replicate existing deficiencies — or we may introduce new biases into the system. From this point of view, system design and testing need to uncover problems that may be introduced with the use of new technology.”

So, what are the ethical considerations relating to a fair and transparent application of AI technology?

There must be an acknowledgement that AI technology requires a holistic approach. It is not just a matter of technology or of conformance with specific laws; as Dr Stephen Cave, executive director of the Leverhulme Centre for the Future of Intelligence at Cambridge, said:

“AI is only as good as the data behind it, and as such, this data must be fair and representative of all people and cultures in the world. The technology must also be developed in accordance with international laws – all this fits into the idea of AI ethics. Is it moral, is it safe…is it right?”

Awareness of bias in AI applications is an important consideration in developing transparency and ensuring adherence to ethical standards. We can see efforts being made, to varying degrees, to recognize and deal with issues of bias by governments, the technology industry, and independent organizations.
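As a concrete illustration of what such testing can involve, the sketch below compares a model’s positive-outcome rates across demographic groups, one simple fairness check often called demographic parity. This is a minimal sketch in Python: the loan-approval framing, group labels, and numbers are all hypothetical, and a rate gap is a signal worth investigating rather than proof of bias on its own.

```python
# Minimal sketch of one kind of bias test: comparing a model's
# positive-outcome rates across demographic groups (demographic parity).
# The group labels, predictions, and framing below are hypothetical.

from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Return the fraction of positive (1) predictions for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical outputs of a loan-approval model for two groups.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = positive_rate_by_group(predictions, groups)
print(rates)  # {'a': 0.6, 'b': 0.4}

# A large gap between groups warrants investigation; context and
# base rates matter before calling it bias.
disparity = max(rates.values()) - min(rates.values())
print(f"Demographic parity gap: {disparity:.2f}")
```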

European Organizations Address AI Ethics

In 2018, the EU commissioned an expert panel, which has published its initial draft guidelines for the ethical use of AI: “The use of artificial intelligence, like the use of all technology, must always be aligned with our core values and uphold fundamental rights.”

As Fortune Magazine has described it, the European Commission sees ethical AI as a competitive issue; Andrus Ansip, the EC’s vice president for digital matters, stated that “Ethical A.I. is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric A.I. that people can trust.”

Chapter 3 of the General Data Protection Regulation (GDPR) specifically addresses possible adverse impacts of AI technology in Article 22, “Automated individual decision-making, including profiling.” Article 22 potentially provides any individual subject to automated decision-making or profiling the following protection:

“The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

As such, Article 22 embeds the fundamental EU principle of non-discrimination, which goes back to Articles 18-25 of the Treaty on the Functioning of the European Union, ratified in 2007, Article 21 of the Charter of Fundamental Rights of the European Union, ratified in 2000, and even all the way back to Article 14 of the European Convention on Human Rights, ratified in 1953.

Some, including researchers at the Oxford Internet Institute, see Article 22 as also creating a “right to explanation” for AI systems that would require technology companies to reveal algorithms and source code.
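In the simplest case, an “explanation” of an automated decision can be read directly off a linear scoring model, where each feature contributes its weight times its value. The sketch below is purely illustrative: the feature names, weights, and threshold are hypothetical, and whether Article 22 actually mandates disclosure of this kind is exactly what is being debated.

```python
# Minimal sketch of a per-decision "explanation" for a simple linear
# scoring model. All feature names and weights are hypothetical; real
# systems, and any legal duty to explain them, are far more complex.

weights = {"income": 0.8, "debt_ratio": -1.2, "years_employed": 0.3}
bias = -0.5

def explain_decision(applicant):
    """Return the decision and each feature's contribution to the score."""
    contributions = {
        name: weights[name] * value for name, value in applicant.items()
    }
    score = bias + sum(contributions.values())
    decision = "approve" if score > 0 else "decline"
    return decision, contributions

decision, contributions = explain_decision(
    {"income": 1.5, "debt_ratio": 0.9, "years_employed": 2.0}
)
print(decision)  # approve (score = 0.22)
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")  # largest contributions first
```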

The US Response to AI Ethics

In the United States, governmental efforts to examine AI have made far less progress than in the EU. The most recent effort at the federal level, the Algorithmic Accountability Act of 2019 (S.1108), sponsored by Senator Ron Wyden (D-OR) and Senator Cory Booker (D-NJ) (with a parallel House bill, H.R.2231, sponsored by Representative Yvette Clarke (D-NY)), seeks “To direct the Federal Trade Commission to require entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments.”

The proposed law would require the Federal Trade Commission to enact regulations within the next two years to require companies that make over $50 million per year or collect data on more than 1 million people to perform an “automated decision system impact assessment.” However, unlike the GDPR’s transparency requirements (no matter how debatable), the proposed bill would not require those assessments to be made public.

Despite this lack of a transparency provision, the bill was quickly endorsed by a number of civil rights groups.
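What an “automated decision system impact assessment” would actually contain is left to future FTC rulemaking, but one can imagine a company keeping a structured record along these lines. The sketch below is an assumption drawn loosely from the bill’s language, not a legal template; every field and value is hypothetical.

```python
# Hypothetical sketch of a structured record for an automated decision
# system impact assessment. Fields are illustrative assumptions only,
# not a legal or regulatory template.

from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    system_name: str
    purpose: str
    data_sources: list[str]
    risks_identified: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

assessment = ImpactAssessment(
    system_name="loan-screening-model",
    purpose="Pre-screen consumer loan applications",
    data_sources=["credit bureau data", "application forms"],
    risks_identified=["proxy discrimination via zip code"],
    mitigations=["remove zip code feature", "quarterly disparity audits"],
)
print(assessment)
```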

Tech Companies Taking the Lead on AI Ethics

There are credible and sincere efforts by leading tech companies to address the ethical use of data. Microsoft has developed a set of six ethical principles spanning fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

In fact, of all of the major technology companies, Microsoft could perhaps be singled out as the most promisingly progressive on AI ethics issues, such as in this public statement by its President, Brad Smith, about recent issues with facial recognition technology:

“The only effective way to manage the use of technology by a government is for the government proactively to manage this use itself. It may seem unusual for a company to ask for government regulation of its products, but there are many markets where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike.”

Meanwhile, Facebook has recently granted $7.5 million to the Technical University of Munich to establish an Institute for Ethics in AI. Amazon, for its part, has collaborated with the National Science Foundation to establish a “Program on Fairness in Artificial Intelligence”, with an anticipated $7.6 million in available grants.

A recent survey by Deloitte highlighted the dilemma associated with the utility of AI on the one hand and its potential risks on the other:

“A growing number of companies see AI as critical to their future. But concerns about possible misuse of the technology are on the rise. 76 per cent of executives said they expected AI to ‘substantially transform’ their companies within three years, while about a third of respondents named ethical risks as one of the top three concerns about the technology.”

As regulations tend to lag technological innovation, the Deloitte report suggests that industry should take a proactive role in fostering transparency and accountability in the application of AI technologies and their impact on privacy and security rights:

“Designing ethics into AI starts with determining what matters to stakeholders such as customers, employees, regulators, and the general public. Companies should consider setting up a dedicated AI governance and advisory committee including cross-functional leaders and external advisers that would engage with stakeholders, including multi-stakeholder working groups, and establish and oversee the governance of AI-enabled solutions including their design, development, deployment, and use.”

Ethical use of AI ought not to be considered only a legal and moral obligation but also a business imperative. It makes good business sense to be transparent in the application of AI: it fosters trust and engenders brand loyalty.

The 2019 Edelman Trust Barometer further makes the case that companies need to put ethics and values at the centre of AI development. The reasons are obvious: three-fourths of consumers today say they won’t buy from unethical companies, and 86% say they’re more loyal to ethical companies.

All credit for this article goes to the source below:

https://www.dig-in.com/opinion/the-ethical-use-of-data-for-training-machine-learning-technology
