Rethinking privacy in the age of AI

Autonomous technologies driven by artificial intelligence (AI) are already being deployed for new life-enhancing and potentially life-saving uses such as disease detection, precision medicine, driving assistance, increased productivity, safety at work and education accessibility. While these developments are important and exciting, they also raise new concerns over privacy, data security and ethics. Many of these issues will be discussed at the 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC), which will focus on ethics in a data-driven age and explore how to ensure dignity and fairness for individuals.

Intel has been working on an effort to use the Organization for Economic Cooperation and Development's Privacy Guidelines (the OECD FIPPs) to "rethink privacy" for the AI era. This week at the ICDPPC, Intel is releasing the next step in that effort: a detailed white paper on privacy and AI.

In the white paper, we identify several privacy premises that AI development must incorporate:

While technological evolution will inevitably lead to increased automation, that should not translate to less privacy protection.

Explainability requires accountability: organizations need to demonstrate that they have processes in place to explain how their algorithms arrive at specific results.

Promoting ethical data processing is critical. Protecting individuals and their data goes beyond legal compliance requirements: it means embracing societal values and working to build much-needed trust in the technologies and their positive impact on people.

We must also recognize that privacy is about protecting who we are. AI affects the risks to individuals in two ways: a) how others see us (the creation of false information such as modified photos) and b) how we see ourselves (influence over our opinions and beliefs).

Providing robust security for data is an important step toward maintaining privacy. That is why strong encryption and de-identification techniques are critical to minimizing risk; a brief sketch of one de-identification technique follows this list.
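
To make "de-identification" concrete, here is a minimal, illustrative sketch of one common technique, keyed pseudonymization, using only Python's standard library. The field names and secret key are hypothetical examples, and the white paper does not prescribe this particular method.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice this would be held in a
# key-management system, never hard-coded in source.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed pseudonym.

    HMAC-SHA256 yields the same pseudonym for the same input (so records
    can still be linked for analysis) while making it infeasible to
    recover the original value without the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical record: the direct identifier is replaced, while the
# non-identifying attribute is kept for analysis.
record = {"email": "jane.doe@example.com", "age_band": "30-39"}
deidentified = {**record, "email": pseudonymize(record["email"])}
print(deidentified)
```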

ICDPPC 2018 will be a valuable opportunity for fruitful discussion among policymakers, regulators, civil society and industry as we explore public policy solutions to the privacy challenges posed by AI.

We offer the following specific policy recommendations:

New legislation and regulatory initiatives should be comprehensive but also technology-neutral, and should enable the free flow of data. Keeping up with the pace of technological advances is critical for privacy legislation. Comprehensive privacy laws that apply to all technologies will better cover the innovations of the future, while applying equally across industry sectors and competitors.

Organizations should embrace risk-based accountability approaches. This means proactively putting in place technical measures (privacy by design) and organizational measures (product development lifecycles, ethics review boards) to minimize privacy risks in AI and to address societal concerns.

Foster automated decision-making while adding safeguards to protect individuals. Industry and governments should work together on algorithmic explainability, so that individuals can understand how algorithms make decisions; one simple explainability technique is sketched below. It is also important for AI to be overseen by people, and different levels of human engagement with the technology will be required depending on the amount of potential risk from that technology's use.
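
As an illustration of what algorithmic explainability can look like in practice, here is a brief sketch using permutation importance from scikit-learn, which estimates how much each input feature contributes to a model's decisions. The dataset and model are synthetic stand-ins chosen for this example, not anything drawn from the white paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic stand-in data: three features, of which only the first two
# actually drive the binary outcome.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's accuracy drops. A large drop means the model relies
# heavily on that feature for its decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["feature_0", "feature_1", "feature_2"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

A real deployment would need to explain individual decisions as well as overall model behavior, but even this aggregate view helps an organization demonstrate how its algorithm reaches results.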

Governments should work to promote and encourage access to data. Beneficial measures would include opening up government data, supporting the creation of reliable datasets available to all, fostering incentives for data sharing, investing in the development of voluntary international standards (e.g., for algorithmic explainability) and promoting diversity in datasets.

Funding research in security is essential to protecting privacy. More work needs to be done on computing power, energy efficiency and connectivity to improve AI. Funding is also needed for research in security areas such as homomorphic encryption, which allows computation directly on encrypted data, so that meaningful results can be extracted without disclosing the underlying personal information.
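
To make the idea concrete, here is a toy, illustrative implementation of the Paillier cryptosystem, a partially homomorphic scheme in which multiplying two ciphertexts yields an encryption of the sum of their plaintexts. The primes below are far too small for real security; this sketches the concept and is not production cryptography.

```python
from math import gcd
import random

# Toy Paillier key material (real keys use primes of 1024+ bits).
p, q = 1789, 1931
n = p * q
n_sq = n * n
g = n + 1                                       # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
mu = pow(lam, -1, n)                            # valid because g = n + 1

def encrypt(m: int) -> int:
    """Encrypt m (0 <= m < n) under the public key (n, g)."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """Decrypt c with the private key (lam, mu)."""
    L = (pow(c, lam, n_sq) - 1) // n
    return (L * mu) % n

# Homomorphic property: a third party can add values it cannot read by
# multiplying their ciphertexts.
c1, c2 = encrypt(120), encrypt(345)
assert decrypt((c1 * c2) % n_sq) == 120 + 345
print(decrypt((c1 * c2) % n_sq))  # -> 465
```

Fully homomorphic schemes extend this idea to arbitrary computation on encrypted data, which is one reason continued research funding in this area matters for privacy.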

It takes data to protect data. Algorithms can help detect unintended discrimination and bias, identity theft risks or cyber threats. The more data that can be used to train those algorithms, the better they will be able to protect against those threats.

As the full extent of AI's benefits comes into view, we must foster this innovation while protecting citizens' privacy. Industry and government must work closely together to pursue AI innovation and to protect people. In other words, we must rethink privacy for the age of AI.
