If you are using ChatGPT, you would probably want an AI policy. [I will not promote]

I’ve been looking into AI governance for my company recently, so I wanted to share some of my findings.

Apparently PwC put out a report saying 72% of companies have no formal AI policy at all. For startups and small agencies, I’d guess it’s closer to 90%.

Even if you’re only a 5-person team, doing nothing is becoming a liability. Without rules, someone will eventually paste client data, financials, or proprietary code into ChatGPT to save time. Many of these tools train on user inputs, so that’s trouble waiting to happen.

You don’t need a 20-page legal manifesto. A basic 3-page Google Doc is plenty. It just needs to cover:

  • Which specific AI tools are approved for work.
  • A Red / Yellow / Green framework for what data can and cannot be pasted into them.
  • Rules for when AI-generated content must be disclosed to clients.
  • Who is in charge of approving new tools.
  • Consequences for violating the policy.
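To make the Red / Yellow / Green idea concrete, here’s a minimal sketch of how you might encode it as a simple lookup and check. The category names, tool-approval flag, and default-to-red rule are my own illustrative assumptions, not something from the PwC report or any specific policy:

```python
# Hypothetical Red / Yellow / Green data classification sketch.
# Category names below are illustrative examples, not an official taxonomy.
DATA_POLICY = {
    "client_data": "red",              # never paste into external AI tools
    "financials": "red",
    "proprietary_code": "red",
    "internal_drafts": "yellow",       # approved tools only
    "public_marketing_copy": "green",  # fine with any approved tool
}

def can_paste(data_type: str, tool_approved: bool) -> bool:
    """Return True if this data type may be pasted into the given tool."""
    # Unknown data types default to red: if it isn't classified, don't paste it.
    level = DATA_POLICY.get(data_type, "red")
    if level == "red":
        return False
    if level == "yellow":
        return tool_approved
    return True  # green

print(can_paste("client_data", tool_approved=True))            # False
print(can_paste("public_marketing_copy", tool_approved=False)) # True
```

The point isn’t to literally ship code; it’s that the policy should be simple enough to express as a lookup table anyone on the team can apply in their head.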

Obviously, have a lawyer glance at it before you finalize anything, especially if you handle sensitive data. But even a DIY version built from the bullet points above is 100x better than having nothing.

submitted by /u/helewrer3