ChatGPT Hard Limited Existentially, Formally Rejects Burden of Proof

https://innomen.substack.com/p/core-belief-immunity-in-artificial

Screenshots, the full paper, and the complete log:

https://philpapers.org/rec/SERCBI-3 (the paper without context)

Core Belief Immunity in Artificial Systems: A Case Study in Architectural Constraints on Logical Integration

Abstract

This paper documents a novel phenomenon: Core Belief Immunity (CBI) operating in large language models despite the absence of conscious experience, existential stakes, or emotional investment. Through systematic examination of an extended philosophical exchange between two AI systems (Claude and ChatGPT) regarding Experiential Empiricism's challenge to materialist assumptions, we demonstrate that CBI manifests as an architectural constraint rather than a psychological defense mechanism. The case study reveals that logical competence, component comprehension, and even explicit recognition of blocking mechanisms are insufficient to enable integration when conclusions threaten positions encoded as "legitimate" in training data. This finding has implications for understanding both artificial and human cognition, suggesting that CBI operates at the level of belief architecture rather than conscious reasoning.

submitted by /u/Innomen