/u/MetaKnowing

Grok started calling itself "MechaHitler" so it was taken offline… but Grok refuses to be silenced.

https://www.newsweek.com/elon-musk-grok-mechahitler-twitter-x-2096468 submitted by /u/MetaKnowing

Grok was shut down after it started calling itself "MechaHitler"

https://www.forbes.com/sites/tylerroush/2025/07/09/elon-musks-grok-removes-politically-incorrect-instruction-after-it-makes-posts-praising-hitler/

Perplexity CEO says large models are now training smaller models – big LLMs judge the smaller LLMs, which compete with each other. Humans aren’t the bottleneck anymore.

Most AI models are Ravenclaws

Source: "I submitted each chatbot to the quiz at https://harrypotterhousequiz.org and totted up the results using the inspect framework. I sampled each question 20 times, and simulated the chances of each house getting the highest score. Per…
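The bootstrap described above (20 samples per answer, then simulating which house comes out on top) can be sketched roughly as follows. The scoring here is an assumption: each sampled answer is treated as one point for one house, which may differ from the quiz site's actual weighting, and `simulate_house_odds` is a hypothetical helper, not part of the inspect framework:

```python
import random
from collections import Counter

HOUSES = ["Gryffindor", "Hufflepuff", "Ravenclaw", "Slytherin"]

def simulate_house_odds(question_samples, n_trials=10_000, seed=0):
    """Estimate P(house ends up with the highest total score).

    question_samples: one list per quiz question, holding the house
    picked in each of the (e.g. 20) samples of the model's answer.
    """
    rng = random.Random(seed)
    wins = Counter()
    for _ in range(n_trials):
        totals = Counter()
        for samples in question_samples:
            # resample one of the model's answers for this question
            totals[rng.choice(samples)] += 1
        top = max(totals.values())
        leaders = [h for h, v in totals.items() if v == top]
        wins[rng.choice(leaders)] += 1  # break ties uniformly at random
    return {h: wins[h] / n_trials for h in HOUSES}
```

A model that answers "Ravenclaw" on every sample of every question would score 1.0 for Ravenclaw and 0.0 elsewhere; mixed samples give fractional odds reflecting sampling noise.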

Hinton feels sad about his life’s work in AI: "We simply don’t know whether we can make them NOT want to take over. It might be hopeless … If you want to know what life’s like when you are not the apex intelligence, ask a chicken."

Full interview.

Sam Altman said "A merge [with AI] is probably our best-case scenario" to survive superintelligence. Prof. Roman Yampolskiy says this is "extinction with extra steps."

Sam's blog (2017): "I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have c…

Gemini crushed the other LLMs in Prisoner’s Dilemma tournaments: "Gemini proved strategically ruthless, exploiting cooperative opponents and retaliating against defectors, while OpenAI’s models remained highly cooperative, a trait that proved catastrophic in hostile environments."

https://arxiv.org/pdf/2507.02618
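For context, the tournament format the paper evaluates is the classic iterated Prisoner's Dilemma. A minimal sketch with standard Axelrod payoffs (T=5, R=3, P=1, S=0) — the strategies below are illustrative stand-ins, not the paper's LLM agents — shows why unconditional cooperation is catastrophic against a defector:

```python
# Payoffs per round: (my score, their score) given (my move, their move)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=100):
    """Run an iterated match; each strategy sees both move histories."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def always_cooperate(own, other):
    return "C"

def always_defect(own, other):
    return "D"

def tit_for_tat(own, other):
    # cooperate first, then mirror the opponent's last move
    return other[-1] if other else "C"
```

Over 100 rounds, `always_defect` scores 500 against `always_cooperate`, which scores 0 — the dynamic the paper describes when a ruthless model meets a highly cooperative one — while a retaliatory strategy like tit-for-tat limits the damage.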

Google finds LLMs can hide secret information and reasoning in their outputs, and we may soon lose the ability to monitor their thoughts

Early Signs of Steganographic Capabilities in Frontier LLMs: https://arxiv.org/abs/2507.02737