The EU's AI proposals, aimed at regulating artificial intelligence based on its capacity to cause harm, are facing internal disagreement and potential watering down of regulations.
The proposed legislation focuses on regulating "foundation" models: general-purpose AI (GPAI) systems trained on massive datasets, capable of performing a wide range of tasks, and expensive to build.
Only about 20 firms worldwide can afford to develop these GPAI systems, which will serve as the basis for numerous new applications.
The debate in Brussels centers on how to regulate foundation models, with some parties advocating a lighter-touch approach: self-regulation through company pledges and codes of conduct.
The French, German, and Italian governments have recently shifted their stance and now favor less intrusive regulation, citing the need to foster innovation and competition.
This shift is widely attributed to the power of corporate lobbying in Brussels and in European capitals.
Recent incidents have heightened concerns that tech companies' actions around foundation models are subject to too little scrutiny and regulation.
The outcome of the EU's AI proposals remains uncertain, with the potential for regulations to be watered down.