AI Governance for Australian SMEs | Headway Co
AI Governance

Your Team Is Already Using AI. Do You Know Where Your Data Is Going?

Most Australian businesses have no AI governance policy. Here's what that means, and what to do about it.

Information on this page reflects Australian law and AI tool policies as of April 2026. AI regulations and platform policies change frequently. Please verify current requirements before making compliance decisions.

The Problem in Plain English

Every time someone on your team pastes a client's name into ChatGPT, three questions apply:

Where does that data go? Is it stored on servers outside Australia? Does using it this way comply with the Privacy Act 1988?

Most employees don't ask these questions. They're just trying to get their work done faster. That's completely understandable, and it's exactly why governance needs to come from the top, not be left to individual judgement.

The good news: a one-page policy, a ten-minute team briefing, and a clear tool list can dramatically reduce your exposure. But first, let's see what you're actually working with.

Which AI tools is your team using? Hover or tap each tool to expand its data handling summary.

ChatGPT Free / Plus

What it does with your data

Data is stored on OpenAI servers (USA). By default, your inputs may be used to train future models.

Retention

30 days for safety review; training data may be retained longer.

Opt out: Settings → Data Controls → disable "Improve the model for everyone"

ChatGPT Team / Enterprise

What it does with your data

Data is NOT used for training. Covered by a Data Processing Agreement. US servers; Enterprise has EU data residency option.

Verdict

✅ Significantly safer for business use than the free tier.

Still review what data types you input; a DPA doesn't remove all risk.

Microsoft Copilot (free)

What it does with your data

NOT covered by a Microsoft enterprise data agreement. Treat it as a consumer tool: the same risk level as ChatGPT Free.

Verdict

⚠️ Do not use with client or confidential data.

Only the Microsoft 365 Copilot subscription version has enterprise protections.

Microsoft 365 Copilot

What it does with your data

Covered by Microsoft's Data Processing Agreement. Your data is NOT used to train foundation models. Stays within your Microsoft 365 tenant.

Verdict

✅ Preferred for business use where M365 is already deployed.

Requires Microsoft 365 Business Standard or higher + Copilot add-on.

Google Gemini (free)

What it does with your data

Default retention is 18 months. Human reviewers may read conversations for safety and improvement. Can be used to improve Google models.

Opt out

Google Account → Data & Privacy → My Activity → Gemini Apps Activity → Turn off.

⚠️ Do not use for client data even with activity disabled.

Google Workspace AI

What it does with your data

Covered by Google's Data Processing Addendum. Your data is NOT used to train AI models. Stronger protections than consumer Gemini.

Verdict

✅ Better for business, but still review what you input.

Requires a Google Workspace paid plan with AI features enabled.

Canva AI

What it does with your data

Processed on Canva's servers (Canva is an Australian company, headquartered in Sydney). AI features use third-party models. Review Canva's Data Processing Terms before use with client data.

Verdict

⚠️ Generally lower risk for design assets. Avoid inputting client-identifying text in AI prompts.

Check: canva.com/policies/data-processing-terms

Grammarly Business

What it does with your data

The Business plan includes enterprise data agreements. Data is not used to train Grammarly's models when training is disabled, which is the default on Business plans.

Verdict

✅ One of the safer options for business use, but read your DPA first.

Consumer/Free Grammarly does not have the same protections.

The Australian Regulatory Timeline

AI governance in Australia isn't new, but the pace is accelerating. Click or tap each milestone to see what it means for your business.

1988

Privacy Act

Privacy Act 1988
Australia's foundation privacy law. Establishes the Australian Privacy Principles (APPs): 13 rules governing how businesses collect, store, use and share personal information. If you handle personal data, this applies to you.

Oct 2024

Amendment Act Passed

Privacy and Other Legislation Amendment Act 2024
The first major Privacy Act reform in decades. Introduces new transparency requirements, a statutory tort for serious invasions of privacy, and stronger enforcement powers for the OAIC.

Jun 2025

Amendments In Force

Most 2024 Amendments In Force
The bulk of the 2024 Privacy Act changes took effect. Businesses needed to update privacy notices, review consent practices, and ensure breach notification procedures were current.

Dec 2026

AI Transparency Kicks In

Coming Up

Automated Decision-Making Transparency
From December 2026, businesses must disclose when automated processes (including AI) are used to make decisions that significantly affect individuals. This includes hiring, credit, and service eligibility decisions. Non-compliance risks OAIC investigation and reputational damage.

2027+

AI-Specific Legislation

Further AI Legislation Expected
The Australian Government's responsible AI work program signals that dedicated AI legislation is coming. The exact scope is still under consultation, but the EU AI Act is a useful guide to where Australia is likely headed.

When AI Goes Wrong: Real Cases

These aren't hypothetical warnings. These are real companies, real costs, and real lessons. Click or tap each card to see the full story.

Air Canada

AI chatbot invented a refund policy. Court ordered them to honour it.


Air Canada's AI chatbot incorrectly told a grieving passenger he could claim a bereavement discount after travel. He relied on this advice, paid full price, and claimed the refund. Air Canada argued they weren't responsible for what their bot said.

Cost: Court-ordered refund + reputational damage

⚡ You own what your AI says. Even when it's wrong.

Samsung

Engineers pasted confidential source code into ChatGPT. Three times.


Within weeks of allowing ChatGPT access, Samsung engineers pasted proprietary source code into the tool on three separate occasions. The data was processed on OpenAI's servers and may now be part of the training dataset. Samsung banned ChatGPT entirely after the breach made global news.

Cost: Internal ChatGPT ban + global headlines

⚡ One employee with no policy = a data breach you can't undo.

Arup

Deepfake CFO. $39 million AUD transferred by one employee.


A finance employee at Arup's Hong Kong office joined a video call with what appeared to be the CFO and other senior leaders. Every face on the call was an AI-generated deepfake. Believing the meeting was legitimate, the employee transferred $39 million AUD to the fraudsters.

Cost: AUD $39 million

⚡ AI risk isn't just about your tools. It's about how criminals use AI against you.

iTutorGroup

AI hiring tool auto-rejected women 55+ and men 60+. EEOC sued.


iTutorGroup's AI-powered recruitment software was set up to automatically reject job applicants based on age β€” women over 55 and men over 60. Over 200 qualified applicants were screened out before any human reviewed their applications. The EEOC (USA) filed the first-ever AI hiring discrimination lawsuit.

Cost: USD $365,000 settlement

⚡ AI hiring tools need human oversight. The algorithm's bias is your liability.

The One-Page Fix

One page. One owner. One conversation with your team.
That's all it takes to dramatically reduce your AI governance exposure.

ACCEPTABLE USE POLICY TEMPLATE

1. Approved AI Tools

List the tools your team is permitted to use, with their approved use cases.

2. What You Can and Can't Input

Never input: client names, financial data, personal information, IP, legal advice, passwords.

3. Review Before You Use

All AI outputs must be reviewed by a human before being sent to clients or used in decisions.

[Download the full template below →]

Download the Free AUP Template

Australia's Legal Obligations

Three frameworks you need to understand, in plain English, not legalese.

Operating in the UK or USA?

If your business has clients, staff, or data touching the UK or US, different frameworks apply. Detailed guides are coming soon.

🇬🇧

United Kingdom

UK GDPR and the Data Protection Act 2018 govern AI governance for UK-connected businesses. The ICO (Information Commissioner's Office) has published specific AI auditing guidance. Post-Brexit, UK rules diverge slightly from EU GDPR but remain strict.

Read more → (coming soon)
🇺🇸

United States

No single federal AI law yet, but state laws in California, Colorado, and Illinois are active. FTC enforcement under Section 5 (unfair or deceptive practices) applies. The EEOC has moved on AI hiring discrimination. The patchwork is complex and evolving fast.

Read more → (coming soon)

International governance pages coming soon.

Resources

Everything you need to get your AI governance in order. Most of it is free.

📄
FREE DOWNLOAD

Free Guide: AI Governance for Australian SMEs

7 pages. Plain English. Downloadable checklist. Everything you need to understand your obligations and take your first steps.

Download Free Guide
📗
COMING SOON

AI Governance for SMEs β€” The Book

A complete guide covering AU, UK, and US legal landscapes, real case studies, and a 90-day implementation plan. Available soon on Amazon Kindle and headwayco.com.au/shop.

Join the Waitlist
📝
FREE TEMPLATE

Acceptable Use Policy Template

One-page template. Customise for your business in 20 minutes. Start protecting your team and clients today.

Download Template
🎙️
FREE WEBINAR

AI Governance Webinar

60 minutes. Live. Practical. 6 May 2026, 12pm AEST. Walk away with a governance framework ready to implement the same week.

Register Free

Not sure where your business stands?

We'll audit your current AI tool usage and build you a governance framework in one session.

Book a Free Strategy Call