The European Union (EU) reached agreement on Friday on the new Digital Services Act, which will require online platforms to moderate content and make their algorithms more transparent, under threat of multimillion-euro fines.
The agreement on this wide-ranging legislation, which complements the digital-markets law already passed, was reached after a final round of negotiations lasting more than 16 hours. It comes almost a year and a half after Brussels presented its first proposal in December 2020 and imposes new obligations on internet platforms used by hundreds of millions of people in the EU.
According to Lusa, thousands of companies will now have to appoint a European representative to operate in EU territory, placing them under this new legislative package, which aims to set a global standard against the proliferation of illegal content, misinformation and the opacity of the algorithms that curate content on social networks.
The tech "giants" - some 30 companies used by more than 45 million monthly users in the European Union - will be under direct supervision by the European Commission and will have to pay an annual fee of 0.05% on their global revenues to fund this surveillance, for which Brussels will hire new industry experts.
These technology groups will have to analyze their systemic risks annually and act to reduce them, particularly risks involving illegal content, adverse effects on fundamental rights, democratic processes and public safety, gender-based violence, harm to minors, and serious consequences for users' physical or mental health.
The main tool to push the digital giants into compliance will be fines of up to 6% of the offending company's global turnover. For repeated serious violations of the requirements, the new rules also provide for a ban on operating in European territory.
Digital companies will be required to moderate the content published on their platforms with "adequate resources" and to remove illegal content, obligations that until now rested on a non-binding code of practice to which companies adhered voluntarily.
Under the agreement now reached, users will have a clearer procedure for reporting illegal content online, and platforms will have to act quickly to remove it and inform the person who reported it of the action taken.
New safeguards will ensure that such notices are processed in a non-arbitrary and non-discriminatory manner, and that consumers can buy products or services online with stricter checks on merchants' identities.
The new law will also prohibit the use of data on race, religion, sexual orientation or other sensitive categories to target advertising, as well as ads aimed at minors and deceptive interface designs, known as "dark patterns", that mislead users into allowing their data to be tracked.
Users will have the right to at least one option for how content is recommended to them that is not based on profiling, and that option must be presented as prominently as those that rely on user data.
Additionally, major platforms such as Facebook or Twitter will have to give the Commission and member-state authorities access to their algorithms. More broadly, digital services will have to be more transparent about how the information that reaches each user is determined, disclosing, for example, whether they use filters or automated content moderation.
The negotiations between the European institutions, whose final stages coincided with the Russian invasion of Ukraine, also introduced a concept absent from the initial proposal: a crisis-response mechanism that Brussels can trigger on the recommendation of member-state experts.
This will allow the Commission to analyze the impact of the major platforms' activities on the crisis in question and require them to take measures to limit urgent threats for three months.