Published on 15th June 2020
In the various services it provides – agency work, recruitment or career management – the private employment services sector is committed to inclusive and diverse labour markets and to fighting labour market discrimination. By speeding up recruitment processes, improving the candidate experience through guaranteed feedback, preventing discrimination, and identifying skills needs and labour market developments, AI offers multiple opportunities to improve the functioning of labour markets. The recent experience of the Covid-19 pandemic has already demonstrated the added value such technology can provide in addressing labour market activation, allocation and transition challenges.
Trust is, however, key when introducing new technologies, and the World Employment Confederation-Europe recognizes the concerns of the European Commission about the application of AI in human resources, as well as the requirements for mitigating these concerns introduced in its White Paper on AI released in February 2020. Nevertheless, WEC-Europe believes that the proposal to classify all AI applications related to recruitment and workers’ rights as ‘high risk’ is insufficiently substantiated and specific. We are convinced that the existing regulatory frameworks cover the technicalities and application of AI in HR services. Existing requirements and guidelines on human oversight, accountability and explainability provide a basis for dealing with possible opaqueness in AI outcomes or functioning.
As explained in WEC-Europe’s response to the public consultation organized by the Commission, the private employment services sector considers that the White Paper insufficiently explains where HR-related AI applications fall outside existing regulatory frameworks, why existing European fundamental rights would not apply to technology deployed in the European Union, and thus why additional regulatory action is needed in this respect. The GDPR extensively covers the key concerns raised in the White Paper. As the GDPR was put in place less than two years ago and is currently under review, it is far too early to conclude that it does not safeguard against the risks identified for labour market AI applications.
Increasing trust in AI
To ensure trust, our sector is committed to the development, use and implementation of lawful, ethical and robust AI technologies. Beyond the regulatory frameworks in place, the industry has a vast set of national and international initiatives to fight labour market discrimination1 and is committed to continuing these, irrespective of the application of AI technologies in recruitment. WEC-Europe has already taken the initiative to develop guidelines for its members to comply with the WEC-Europe Code of Conduct (including the provisions on non-discrimination) in a new, digitally enhanced labour market.
Yet programming and deploying responsible recruitment AI starts with an acceptance that AI will reflect data inputs from a biased reality. Mitigating bias does not ‘just’ require more data; it requires the ‘right’ data: data that can help identify and mitigate potential biases. Yet data that could be used to identify bias is currently off limits, as it is classified as sensitive personal data under existing regulations. WEC-Europe in no way seeks to alter this classification, but it calls on the European Commission to create a ‘safe space’ for businesses – including through public-private collaboration – to test and train AI on data that currently cannot be processed within the framework of the GDPR.
New products and services that leverage AI can only be successfully deployed if they are fair and explainable and benefit everyone involved. WEC-Europe stands ready to work with all European stakeholders to further the application of AI in the EU and European labour markets in particular.
For more explanations, read our full response to the European Commission’s public consultation on the White Paper on Artificial Intelligence.