On Friday, US President Joe Biden met at the White House in Washington with representatives of the seven leading technology companies in the artificial intelligence sector to ask for their commitment to higher standards of safety and user protection in the development of these new technologies. The seven companies in question – Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI – have undertaken, among other things, to place a “watermark”, that is, a marker signaling the content’s origin, on any text, image, video or audio generated by artificial intelligence systems.
It is the first attempt to regulate the artificial intelligence sector in the United States and is therefore considered an important development, but for now the agreement reached with Biden amounts only to a formal “commitment”. The European Union, by contrast, has been working on regulating the sector for longer: last month the European Parliament approved the bill known as the Artificial Intelligence Act, which aims to introduce a common regulatory framework on artificial intelligence software for member countries.
“We must be clear and vigilant about the threats that emerge from new technologies,” Biden said at a press conference after the meeting, arguing that “we will see more technological change in the next 10 years than we have seen in the last 50”. In a statement, the White House said that “innovation must not come at the expense of the rights and security of Americans”.
The decision to ask companies in the sector for a formal commitment, without imposing laws and regulations, has been described by US newspapers as a way to encourage greater cooperation between the companies and government institutions without hindering a rapidly developing sector. Among other things, the companies have undertaken to research the privacy risks posed by artificial intelligence systems and the possibility that they discriminate against certain categories of people. They have assured that they will thoroughly test their software before releasing it to the public, including by seeking the opinion of independent experts, and that they will share information on investing in cybersecurity to reduce risks to users. However, not everyone is convinced by the self-regulation approach, which is considered riskier than, for example, the European approach based on new rules imposed by institutions.