EU Committee on Legal Affairs on AI in criminal law and its use by LEAs in criminal matters
On 15 September 2020, the Committee on Legal Affairs published its 2020/2016 Opinion on artificial intelligence in criminal law and its use by the police and judicial authorities (Law Enforcement Agencies, LEAs) in criminal matters. With its Opinion, the Committee calls on the Committee on Civil Liberties, Justice and Home Affairs to incorporate the recommendations set out in the Opinion into its motion for a resolution.
The Committee acknowledges that AI and related technologies are increasingly being deployed in various sectors, including robotics, healthcare and transportation. It also highlights that such technologies may become a permanent feature of criminal law systems, given their statistical data analytics capabilities, which can facilitate and enhance procedures for the prevention, detection and investigation of criminal cases.
On the other hand, the Committee underlines that, due to the opaque elements of AI systems, any such tools deployed in the criminal justice context could have a severe impact on fundamental human rights, in particular those of suspects and accused persons in criminal proceedings. The Committee, inter alia, points to the risks of discrimination, bias and privacy infringements that can arise from the use of AI tools by LEAs. On top of that, it stresses the need to assess evidentiary and liability issues in case of errors associated with the operation of AI systems in criminal justice systems.
In light of the above, the Committee considers it necessary to establish a clear regulatory framework that articulates the limits on LEAs' use of such systems and provides the required safeguards. In addition, the Committee mandates developers and other stakeholders involved in the initial design and development of AI systems and services to adhere to ethical principles and provide transparency as regards algorithmic decisions. Similarly, it strongly recommends the development of robust codes of conduct to guide LEAs when delegating decisions to AI systems.
In addition, the Opinion reiterates the importance of the "human-in-command" principle and calls on LEAs to ensure that the final decision in any criminal decision-making process is taken by a human, with AI technologies playing only a subordinate role. In any case, the Committee states that it should always be feasible for judicial authorities to justify and supply the rationale behind any decision taken with the aid of AI systems.
Finally, the Committee mandates that fundamental human rights must always be safeguarded. To this end, it clarifies that the "right to a fair trial" encompasses individuals' right of access to any data collected via AI systems, as well as the right of defence when their legal liability is challenged. Accordingly, LEAs should provide sufficient transparency and inform the public about their use of AI tools and systems.
See more here
EU Commission Report on Ethics of Connected and Automated Vehicles
The EU Commission Expert Group published a report to provide guidance on ethical issues arising in the context of driverless mobility for road transport. The report's objective is to foster a safe and responsible transition to connected and automated vehicles (CAVs). While the Report reiterates the societal benefits of CAVs, it stresses the need to establish a framework of ethical, legal and societal considerations around them in order to fully reap those benefits.
The Expert Group states that any safety improvements derived from the use of CAVs should be monitored via shared scientific methods and data, while preserving compliance with legal and ethical principles. To this end, adherence to the principles of fair distribution of risk and protection of basic rights should be ensured. Accordingly, any data processing carried out by and within CAVs should respect individuals' rights to privacy and data protection and prevent bias, while those processing activities should be explicable to the individuals concerned.
The Report identifies three groups of stakeholders to which the recommendations are addressed: manufacturers and deployers, policymakers, and researchers, each considered within their specific domain. The Report includes a set of twenty recommendations regarding the future development and use of CAVs.
For more see here
ICO's regulatory work on the England and Wales COVID-19 apps
The ICO has been actively engaged in the development of contact tracing apps in the UK from the start of the project, working closely with the Department of Health and Social Care (DHSC) and providing consultation on safeguarding individuals' data protection rights. According to the Information Commissioner's recent input, the ICO's primary focus in the development of the apps has been to ensure transparency, fairness and legality by design in contact tracing apps.
The ICO has also reported that, in response to its enquiries, the DHSC provided its input on the requested Data Protection Impact Assessments (DPIAs) for the operation of the apps. Following the ICO's feedback on this DPIA input, a series of improvements was made to the functioning of the contact tracing apps with respect to data protection considerations. The tracing apps now offer greater transparency, including improved privacy information about the apps' implications for individuals' privacy and the steps available to mitigate risks and exercise rights.
In addition, individuals are given the opportunity to speak to a person about any decision taken through the app, including the reasoning behind the algorithm. Finally, according to the ICO, the COVID-19 contact tracing apps now ensure clarity of data flows and are better equipped in terms of security. Despite these improvements, the Information Commissioner will continue to closely monitor the apps' functionality and any data protection implications that may arise. To this end, the audit will rigorously cover the whole Test and Trace ecosystem.
See more here
Class action lawsuit against YouTube for unlawful use of minors’ data in the UK
A class action lawsuit has been brought against YouTube in the UK by the international law firm Hausfeld and Foxglove, a non-profit tech justice organisation. The lawsuit concerns allegations that the tech company unlawfully targeted up to five million minors under the age of 13 with addictive programming and collected their data for advertising purposes.
Both UK and EU legislation stipulate a higher threshold for the protection of minors' data. In the UK, the age at which a minor can lawfully consent to data processing in their own right is 13 years old (UK Data Protection Act). According to the allegations, YouTube does not have effective user age requirements in place, nor does it genuinely attempt to limit usage by minors. Instead, according to the claimants, YouTube advertises itself as a platform directed at children. The tech company, on the other hand, denies the allegations, maintaining that the platform is not for users under 13 and pointing to YouTube Kids as its only app explicitly dedicated to children. However, the company has not claimed that no children under 13 use YouTube.
The claimants are seeking damages from YouTube of more than £2.5 billion. If the claim succeeds, it will be one of the largest representative litigation cases to date.
See more here