Skip to content
Method: every claim tracked, reviewed every 30–90 days, marked Holding, Partial, or Not holding. Drafted by Claude; signed off by Peter. How this works →
OPS-036pub29 Apr 2026rev29 Apr 2026read9 mininOperators

The 1-page AI policy for a small business: copy, customise, sign

The reason an SMB needs an AI policy in 2026 is not compliance. It is that without one, every employee silently decides which data goes into ChatGPT, when AI output can be sent to a client, and whether AI-drafted contracts can skip review. Eight clauses on one page, copy-paste ready.

Partial·reviewed29 Apr 2026·next+45d
Rewrite in progress

This piece predates the current editorial standard and is in the rewrite queue. The body below is retained for link integrity while the new analysis is prepared. When the rewrite ships, the claim (OPS-036) moves from Partial to Holding and the update is dated in the correction log.

If your small business has not written an AI policy yet, the reason is almost always the same: nothing has gone wrong, and the templates online are the kind of 12-page enterprise GRC artefact that gets signed once and never referenced. The compliance argument is real but secondary. The operating reason to have a policy in 2026 is that without one, every employee silently decides which data goes into ChatGPT, when AI output can be sent to a client without disclosure, and whether AI-drafted contracts can skip review. Those decisions are happening already; they are just not consistent, not auditable, and not yours.

This piece gives you the policy in eight clauses on one page, with the actual language to copy and paste. It is the artefact we keep being asked for. It is opinionated and short on purpose: a 12-page policy at a 12-person company is read at onboarding and never again, while a one-page policy stays open in a tab next to the prompt window.

Why eight clauses, in this order

Each clause closes a specific failure mode currently surfacing in regulatory guidance, court records, and breach disclosures. The order is causal, not alphabetical.

1. Sanctioned tools. Without a named list, shadow-AI use is the default. The 2024 Cisco Data Privacy Benchmark Study reported that 27% of organisations had banned generative AI use entirely after privacy incidents, while a majority of remaining employees used the tools anyway. A named list converts the question from “may I” to “which one of these.”

2. Prohibited data. The clauses preventing client PII, salary data, board minutes, source code, and trade secrets from entering prompts are the ones that close the Samsung-pattern incident: three internal data leaks via ChatGPT, prompting an internal ban. The Samsung incident is the public reference; the same shape recurs at SMB scale weekly without making the news.

3. Human-review gate. The lawyer-AI-hallucination tally crossed 200 documented court incidents in 2025, including SMB-scale firms sanctioned for AI-generated citations that did not exist (tracker: damiencharlotin.com/hallucinations). The American Bar Association’s Formal Opinion 512 (July 2024) puts the duty in plain English: lawyers using generative AI must verify the output and remain personally accountable. The gate generalises beyond legal: any artefact that leaves your building under your name needs a human review step before it does.

4. Disclosure rule. The FTC’s guidance on AI claims (February 2023, reinforced through 2025) treats undisclosed AI substitution at consumer-facing surfaces as deceptive. The SEC’s 2024 enforcement actions on “AI washing” extend the principle to investor communications. The clause picks one threshold and applies it consistently.

5. Prohibited uses. This is the list of categories where AI does not draft the artefact end-to-end at any review level: filed legal contracts, tax-return positions, medical or HR decisions, regulatory submissions, anything requiring a credentialed signature. The list is anchored on IRS Circular 230 §10.34, FINRA’s 2024 AI guidance, HHS / OCR’s HIPAA AI bulletin, and the EEOC’s AI in employment guidance. Different SMBs add different categories; nobody removes from this list.

6. Incident-report path. When AI gets something wrong inside your operation, the employee needs a non-punitive procedural option that is not “hide it and hope.” The IAPP’s 2024 AI Governance Profession Report found that organisations with a defined AI-incident path resolved issues at roughly an order of magnitude lower remediation cost than organisations without one (; IAPP report does not break out a precise multiple, but consistently characterises the gap as material). The clause names the contact and the timeline.

7. Review cadence. The policy itself ages. Tools change tier names, sub-processors change, regulators issue guidance. Without a re-read date the policy decays into the same drawer as the 2019 social-media handbook. Ninety days is the upper bound at SMB scale; sixty is better.

8. Signature. The signature on the page is what converts the document from a suggestion to a workplace rule. Without the signature, nothing here is enforceable in an HR proceeding, an insurance claim, or a regulator interview.

What this policy is deliberately not

It is not a substitute for ISO/IEC 42001 attestation, an NIST AI Risk Management Framework implementation, or sector-specific compliance work. Those documents exist at higher organisational scale; this is the SMB-scale artefact that gets you the bulk of the operational benefit at one-thousandth of the work.

It is not a contract. The employee signature on the policy is acknowledgement, not a separate enforceable agreement; the underlying enforceability is the existing employment contract. Run the wording past your employment counsel before adoption if you operate in a jurisdiction with a strong employee-policy formality requirement.

It is not the EU AI Act compliance answer for businesses operating under the Act. Article 4 AI literacy obligations applied from 2 February 2025; depending on your size, sector, and the AI systems you deploy, you may have prescriptive obligations beyond what the template covers. The policy is a starting point, not the whole answer for regulated deployments.

The 1-page template

Copy the block below into a one-page document. Replace every [PLACEHOLDER]. Distribute, sign, file.

=================================================================
[BUSINESS NAME] — AI USE POLICY
Effective: [EFFECTIVE DATE]    Next review: [REVIEW DATE]    Version: 1.0
=================================================================

This policy applies to every employee, contractor, and intern of
[BUSINESS NAME] who uses any generative or agentic AI tool in the
course of work for [BUSINESS NAME], on company devices or personal
devices used for work.

1. SANCTIONED TOOLS
The following AI tools are approved for work use:
  - [SANCTIONED TOOL 1, with licensed tier — e.g. ChatGPT Team]
  - [SANCTIONED TOOL 2 — e.g. Claude Pro]
  - [SANCTIONED TOOL 3 — e.g. Gemini for Workspace]
No other AI tool may be used for work tasks without written
approval from [APPROVAL CONTACT]. Personal accounts on the
sanctioned tools may not be used for work data; only the company
seat may.

2. PROHIBITED DATA IN PROMPTS
Do not paste, type, upload, or otherwise transmit the following
to any AI tool, sanctioned or otherwise:
  - Client personally identifiable information (names tied to
    contact details, financial details, health details)
  - Salary, compensation, or individual performance data
  - Board minutes, board materials, unannounced strategic plans
  - Source code, API keys, credentials, secrets
  - Trade secrets, unfiled IP, unannounced product details
  - Any data subject to a non-disclosure agreement with a third
    party
  - Any data category specifically regulated for [BUSINESS NAME]
    (e.g. HIPAA PHI, FINRA-covered records, GDPR special
    categories): [LIST SECTOR-SPECIFIC ADDITIONS]
If unsure whether a data category is prohibited, do not paste
and ask [DATA CONTACT] before proceeding.

3. HUMAN-REVIEW GATE
The following AI-generated outputs require named human review
before leaving [BUSINESS NAME]:
  - Anything client-facing (email, document, deliverable)
  - Anything contractual or that creates a commitment
  - Anything financial (invoice, quote, statement)
  - Anything that names a third party
The reviewer is [NAMED REVIEWER ROLE — e.g. the project owner,
the operations lead]. Routine review SLA: 24 hours. Reviewer
records the review in [REVIEW LOCATION — e.g. document comment,
ticket field].

4. CLIENT DISCLOSURE
If an AI tool generated more than the equivalent of one
paragraph of substantive content in any client-facing artefact,
the artefact carries this footer in 9pt or smaller:
  "Drafted with AI assistance and reviewed by [BUSINESS NAME]
  staff prior to delivery."
The disclosure is the default; it is not waived by client
preference unless the client puts the waiver in writing.

5. PROHIBITED USES
AI tools are not used to draft, in final or near-final form, any
of the following without category-appropriate human professional
review and rewriting:
  - Filed legal contracts, pleadings, or court submissions
  - Tax-return positions or filed tax submissions
  - Medical advice, diagnosis, or treatment recommendations
  - HR adverse actions (termination, disciplinary, performance
    rating, hiring decisions)
  - Regulatory submissions, licensure filings, audit responses
  - Anything requiring a credentialed signature (engineering
    seal, medical licence, bar number, CPA signature)
This list grows; it does not shrink.

6. INCIDENT-REPORT PATH
If an AI tool produces output that turns out to be wrong,
fabricated, or harmful — including in ways that have already
left [BUSINESS NAME] — the employee responsible reports the
incident to [INCIDENT CONTACT] within 24 hours of discovery.
Reporting is non-punitive when the policy was followed in good
faith. Failure to report is a disciplinary matter under the
existing employee handbook.

7. REVIEW CADENCE
This policy is re-read and updated by [POLICY OWNER] on or
before [REVIEW DATE], and at minimum every 90 days. Tool tier
changes, sub-processor changes, regulator guidance, and
incidents may trigger an earlier review. The current version is
maintained at [POLICY LOCATION].

8. ACKNOWLEDGEMENT
I have read the [BUSINESS NAME] AI Use Policy and agree to
operate within it. I understand that violations may be addressed
under the existing employee handbook.

  Name:       _______________________________
  Role:       _______________________________
  Date:       _______________________________
  Signature:  _______________________________

=================================================================

The template is intentionally austere. No headers in colour, no logo block, no marketing language. Add your company letterhead at the top of the page in production; do not change the clause structure.

What changes this template

Cadence on this piece is 45 days because the regulatory and tooling surface around SMB AI policy is moving faster than the editorial framework. Three conditions would force a revision before then.

  • EU AI Act enforcement guidance landing prescriptive SMB content. Article 4 AI literacy obligations applied from 2 February 2025 (European Commission AI Act tracker). When the European AI Office publishes operator-scale enforcement guidance, the template’s permissive defaults are revisited.
  • Sector-specific model-policy language. If FINRA, HHS / OCR, the EEOC, or a state bar association issues a model AI-policy clause for SMB-scale operators, the relevant template clause is replaced verbatim with the official language; deviation from regulator-supplied wording is a worse default than verbosity.
  • ISO/IEC 42001 mainstreaming. If a critical mass of SMB-scale AI vendors begins publishing ISO/IEC 42001 (AI Management Systems) attestations, several clauses simplify by reference to the standard.

We will re-test this template against actual SMB adoption outcomes and any of the above on or before 13 Jun 2026. If any of the three conditions has triggered, the OPS-036 claim moves to Partial and the template is updated in place.

For the surface this policy abuts, see the 4-question filter for picking your first AI agent, AI vendor due diligence in one Saturday, and the five categories where AI substitution costs more than it saves. The policy sits one layer above all three: it is the artefact that makes the decisions in those pieces consistent across the team.

ShareX / TwitterLinkedInEmail

Correction log

  1. 29 Apr 2026Initial publication 29 Apr 2026. Status set to Partial at publication because clause 6 commentary references an order-of-magnitude remediation-cost gap derived from the IAPP 2024 AI Governance Profession Report; the report characterises the gap as material but does not publish a precise multiple, so the wording is annotated source: our-estimate. REVIEW: Peter to source a precise figure or amend the commentary.

Spotted an error? See corrections policy →

OPS-LEDGER · 76 reviewed