The European Economic and Social Committee (EESC) proposed on 14 November that the EU should develop a certification for trustworthy artificial intelligence (AI) applications, to be issued by an independent body. The proposal takes the form of two EESC opinions assessing the EU Commission's ethical guidelines on AI.
AI systems and machine learning are so complex that even their developers cannot predict their outcomes and have to develop tools to test their limits. In this context, both opinions propose a certification that would increase public trust in AI in Europe.
The EESC calls for entrusting the testing to an independent body, which would examine the systems for prejudice, discrimination, bias, resilience, robustness and, in particular, safety. Companies could use the certificate to prove that they are developing reliable AI systems in line with European standards.
The Committee also stresses the need for clear rules on responsibility: liability must always be linked to a person. Machines cannot be held liable in the event of failure, the EESC explains.
The assessment list will be reviewed at the beginning of next year. If deemed appropriate, the EU Commission will propose further measures.