LEGAL TECH NEWS REVIEW Week 8-15 June 2020, by Eleni Kozari

EU Commission consultation on the Digital Services Act package

The EU Commission is consulting digital service providers, including online platforms, as well as consumers, authorities and other stakeholders, in order to inform its proposals for the forthcoming Digital Services Act package. The consultation will remain open until 8 September 2020.

Acknowledging that the EU regulatory landscape for digital services is outdated, in particular as regards the role and responsibility of online platforms, the Commission seeks to articulate a framework that will ensure users’ digital protection and respect for fundamental rights, particularly freedom of expression, while at the same time fostering innovation and business growth equally across the EU.

The Commission has already introduced specific rules for online platforms on illegal online content and on a fair and transparent online business ecosystem, the latter through Regulation (EU) 2019/1150, which enters into force in July 2020. The e-Commerce Directive’s basic principles, i.e. the freedom to provide digital services across the EU and the limitation of liability for user-generated content, will form the first strand of rules in the package. On top of that, the Commission will seek to establish clearer rules for digital intermediaries, ensuring equal enforceability within the EU single market.

The second strand of rules seeks to address market imbalances where certain online platforms act as gatekeepers, hindering innovation and competition. To this end, general rules introducing non-personal data access obligations, together with data interoperability and portability provisions, would align with the Commission’s objectives.

See more here


EDRi’s response to EU Commission’s AI White Paper

EDRi (European Digital Rights) has submitted its response to the EU Commission’s consultation on the AI White Paper. Underlining AI’s implications for people, society and democracy, EDRi advocates for a truly ‘human-centric’, fundamental rights-based AI regulation.

According to EDRi, the decision on whether to invest in AI applications should first be justified by scientific evidence, in particular where such applications can produce negative outcomes for people, bearing in mind that AI’s negative impacts can be collective in nature. To this end, EDRi advocates for additional legal limits and for impermissible uses of AI, such as indiscriminate biometric surveillance and lethal autonomous weapons.

Given the opaque functioning of AI systems, EDRi expects power imbalances to widen further, especially in public decision-making procedures that deploy AI systems. Hence, meaningful democratic oversight and effective public engagement should be top priorities. To secure human-centric principles and objectives, EDRi suggests that each AI system should undergo a mandatory human rights impact assessment covering all of the system’s implications, from the societal to the governance and institutional.

See more here


California’s new bill on Facial Recognition

Amid wide criticism of police use of facial recognition, from both activists and academics, California is preparing a new bill establishing a framework for the lawful use of facial recognition by governmental agencies and private companies alike. In both cases, a required condition is that individuals be informed in advance.

Although the bill’s supporters claim that it will offer a significant privacy solution, since it will regulate the use of facial recognition by commercial and public entities, many critics have voiced concerns. They argue that the bill will only expand the use of facial recognition and could prevent individuals from accessing health care, housing and other basic essentials. On top of that, the bill could further escalate incidents of police violence and racial profiling.

See more here


Microsoft, IBM and Amazon to halt facial recognition sales to police

IBM’s CEO has announced that the Company will cease all of its facial recognition business and exit the provision of facial identification as a service. In addition, IBM’s CEO questioned whether such technology should be deployed at all, given that it can be used for mass surveillance and violations of fundamental human rights, which contradicts the Company’s values. He also underlined that vendors and users of AI should audit, ensure and report that AI systems have been tested for bias, pointing out the importance of data quality.

See more here


Following IBM’s approach, Amazon announced that it is banning the use of its facial recognition software (Rekognition) by police, imposing a one-year moratorium. Within this year, Amazon calls on governments to design and implement appropriate regulations to govern the ethical use of facial recognition. However, Amazon’s facial recognition software had given rise to concerns even before the recent protests in the US: research indicates that the software can exhibit racial and gender bias.

See more here


Finally, like IBM and Amazon, Microsoft has recently announced that it will also cease selling its facial recognition technology to the police. The Company will resume offering the technology only once a federal law regulating such issues and protecting human rights is enacted. To this end, the Company has supported legislation in California that would regulate police use of the technology, subject to certain restrictions.

See more here


ICO: Coronavirus recovery and Data Protection

As lockdown restrictions and containment measures begin to relax, companies will return to their everyday business reality, including their data processing and data sharing needs. To this end, the ICO has published six data protection steps for organisations, as guidance on the use of personal information.

In determining whether it is necessary to collect staff health data, organisations should consider, and be able to demonstrate, that their approach is reasonable, fair and in line with the proportionality principle. In addition, data collection should be limited to the data necessary, including COVID-19-related data such as test results, for the proper and effective implementation of safety measures.

Accordingly, the ICO requires organisations to be fully transparent vis-à-vis their employees as regards the purpose of the personal data collection, retention periods, data recipients, and any potential implications that employees could face as a result of the collection of their data. To this end, organisations should carefully consider possible detriments and ensure that they avoid discrimination.

Organisations should adopt proper data security measures and, where symptom checking or testing measures are introduced, should first determine the appropriate legal basis and conduct a data protection impact assessment whenever they intend to process health data on a large scale.

See more here