A Japanese startup can reportedly protect images from being used to train AI by obfuscating them specifically for AI models. I know neither Japanese nor ML well enough to figure out if this is, or even *can be*, legit, so can someone who does comment pls?

I’m specifically curious how they can make it work for any possible model architecture.
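For context: existing tools in this space (e.g. Fawkes, Glaze) typically add small adversarial perturbations computed against a *surrogate* model, then rely on the perturbation transferring to other architectures. Transfer is empirical, not guaranteed, which is exactly why "works for any possible model" is the hard claim. Below is a minimal FGSM-style sketch of the perturbation idea against a toy logistic-regression surrogate. This is purely my illustration of the general technique, not the startup's actual method, and the "model" here is a made-up stand-in.

```python
import numpy as np

# Toy surrogate "model": logistic regression over 64 flattened pixels.
# Real cloaking tools attack a feature extractor instead, but the
# gradient-sign idea is the same. Everything here is a hypothetical example.
rng = np.random.default_rng(0)
w = rng.normal(size=64)  # surrogate weights (random, for illustration only)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Surrogate's confidence that x belongs to class 1."""
    return sigmoid(x @ w + b)

def fgsm_perturb(x, y, eps=0.05):
    """One FGSM step: nudge each pixel by at most eps in the direction
    that increases the surrogate's cross-entropy loss."""
    p = predict(x)
    grad = (p - y) * w  # d(cross-entropy)/dx for logistic regression
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x = rng.uniform(size=64)   # a fake 8x8 "image" with pixels in [0, 1]
y = 1.0                    # its true label
x_adv = fgsm_perturb(x, y)

# The perturbation is visually tiny (<= eps per pixel), but the
# surrogate's confidence in the true label drops.
print(np.max(np.abs(x_adv - x)))
print(predict(x), predict(x_adv))
```

The catch the question points at: this optimizes against one known model. Against an arbitrary future architecture (or simple preprocessing like resizing, JPEG re-compression, or denoising) there is no guarantee the perturbation survives, and published attacks have broken earlier cloaking tools this way.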

submitted by /u/vzakharov