So many discussions, expert panels, conferences, congressional hearings, judicial proposals, "guardrails"... but despite these public disavowals, no one has backed off a bit in terms of funding or development of AI projects.
On the contrary, they're getting more ambitious, expensive and resource-consuming.
So, if the AI enterprise is indeed irreversible or somehow "destined", why is that?
Is there a historical purpose that is being fulfilled (teleology), an ultimate consequence of the scientific revolution (determinism), or is it purely contingent?
If it's either of the former, then why should we worry about safety at all? Why not just let it rip through our conventions, values and beliefs?