GPT 5.2 Codex is Actually (kind of) Just Special System Instructions

https://openai.com/index/unrolling-the-codex-agent-loop/

Drawing from this article explaining Codex, I found this snippet interesting:

> In Codex, the instructions field is read from the model_instructions_file in ~/.codex/config.toml, if specified; otherwise, the base_instructions associated with a model are used. Model-specific instructions live in the Codex repo and are bundled into the CLI (e.g., gpt-5.2-codex_prompt.md).
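
For context, the override described above would be a single key in `~/.codex/config.toml`. This is just a sketch — `model_instructions_file` is the key named in the article, but the path value here is made up for illustration:

```toml
# ~/.codex/config.toml
# If set, Codex reads instructions from this file instead of the
# base_instructions bundled with the model (path is illustrative).
model_instructions_file = "/home/me/my_codex_instructions.md"
```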

> As you can see, the order of the first three items in the prompt is determined by the server, not the client. That said, of those three items, only the content of the system message is also controlled by the server, as the tools and instructions are determined by the client. These are followed by the input from the JSON payload to complete the prompt.
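
The ordering the article describes can be sketched roughly like this. Note that the field names and structure below are my own illustration, not the actual Responses API schema — the point is just which party controls each piece:

```python
# Illustrative sketch of the prompt assembly order described above.
# Field names are assumptions, NOT the real Responses API schema.
def build_prompt(system_message, tools, instructions, user_input):
    """Assemble the prompt in the server-determined order."""
    return [
        # Order of the first three is fixed by the server; only the
        # system message's *content* is also server-controlled.
        {"item": "system_message", "content": system_message},  # server-controlled
        {"item": "tools", "content": tools},                    # client-controlled
        {"item": "instructions", "content": instructions},      # client-controlled
        # The user's input from the JSON payload completes the prompt.
        {"item": "input", "content": user_input},
    ]

prompt = build_prompt("You are Codex...", ["shell"], "model prompt text", "fix the bug")
```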

So essentially, the system instruction sits on OpenAI's servers, and that is what actually changes the behavior of GPT-5.2. The whole article is pretty fascinating, and I recommend it as a good read if you're interested in learning about agentic AI (and how that might help you use Cursor more efficiently) and how tools are used in agentic AI.

submitted by /u/Izento