Now a Norwegian AI expert and Google's team of researchers are sounding the alarm. This is a deep worry for Waterhouse, the former director of the industry interest organization IKT-Norge. "There is an overwhelming amount of material one needs to get familiar with to make up one's mind on AI, and too many are left vulnerable to manipulation."
For instance, Google-owned DeepMind recently published a report on just how dangerous AI-generated content can become, warning that it carries the risk of inciting mass distrust of digital information. The report details four primary categories of abuse:
- Influence on opinions – 27% of cases
- Revenue generation – 21% of cases
- Fraud – 18% of cases
- Harassment – 6% of cases
The researchers' biggest fear is that these AI tools lack security mechanisms, making them easy for low-technical users to abuse. (For instance, convincing misleading images that use the faces of influential people like Donald Trump, or of any popular figure in wildlife discussion groups.)