/u/YourMomThinksImSexy

I asked DeepSeek’s DeepThink version to roast itself. This is what it came up with (peep the Reddit reference – I did NOT ask it to do that, lol).

The prompt was "DeepSeek, roast yourself!" It returned: "I’m like a know-it-all intern who’s read every Wikipedia page but still can’t figure out how to use a stapler. My ‘intelligence’ is just fancy autocomplete—I’ll write you a sonnet…

Asking China’s DeepSeek any question touching on criticisms of China returns answers full of blatant lies, half-truths and dissembling. If it lies that easily, can it really be trusted on other topics?
