Press Release
September 3, 2025

New Report Reveals That Over 1/3 of Australian Professionals Expose Sensitive Company Data to AI Platforms


One in three Aussie workers expose sensitive company data to AI platforms, Josys warns

  • Over three-quarters (78%) of professionals now use AI tools. However, 63% of users aren’t confident in their ability to use them securely, and 70% of organisations have little to no visibility into which tools are actually being used.
  • Over a third (36%) of employees upload sensitive company information to AI tools, ranging from strategic plans (44%) and technical data (40%) to financials (34%) and internal comms (28%).
  • 1 in 4 users (24%) admit to uploading customer PII, while 18% share intellectual property and legal/compliance documents.
  • Sales and marketing teams lead the risk, with 37% uploading sensitive data, followed by finance and IT/telecoms (36%) and healthcare (31%).

SYDNEY, 3 September 2025 – More than 1 in 3 Australian professionals are regularly uploading sensitive company data, including strategy documents, financials, and customers’ personally identifiable information (PII), into AI platforms, often without any formal oversight. This is one of the key findings from the Shadow AI Report 2025 by Josys. It warns that a surge in “shadow AI” — employee use of unauthorised AI platforms that bypass security protocols — is exposing Australian companies to serious compliance and data risks.

Despite the growth in AI adoption, 70% of organisations have little to no visibility into what AI tools are being used, creating massive blind spots. Smaller businesses are particularly vulnerable, with only 30% of companies with fewer than 250 employees feeling fully equipped to assess AI risks, compared to 42% of larger organisations. As economic uncertainty and job pressures mount, employees eager to capitalise on AI’s productivity gains are unintentionally creating backdoors for data leaks and compliance violations.

“Shadow AI is no longer a fringe issue. It’s a looming full-scale governance failure unfolding in real time across Australian workplaces,” said Jun Yokote, COO and President of Josys International. “While the nation is racing to harness AI for increased productivity, without governance, that momentum quickly turns into risk. Productivity gains mean nothing if they come at the cost of trust, compliance, and control. What’s needed is a unified approach to AI governance which combines visibility, policy enforcement, and automation in a single, scalable framework.”

AI is moving faster than governance

The report surveyed 500 Australian technology decision makers and reveals a worrying gap between AI usage and organisational preparedness:

  • Just 1 in 3 organisations (33%) are fully prepared to assess AI risks, while nearly 20% are not prepared at all.
  • 63% of professionals lack confidence in using AI securely, exposing a major readiness gap.
  • Despite operating in highly regulated environments, only around half of finance (52%), IT/telecom (55%), and healthcare (62%) teams report full preparedness.

As AI adoption accelerates, critical sectors and smaller businesses in Australia are becoming overwhelmed. Without effective technology oversight for policy enforcement and training, many organisations risk falling into a cycle of reactive governance and compliance failures.


Compliance under pressure

Nearly half (47%) of respondents cite upcoming AI model transparency requirements and Privacy Act amendments as top compliance hurdles. Despite the growing complexity, 50% still rely on manual policy reviews, while a third (33%) have no formal AI governance processes in place. Even among those with some level of oversight, only 25% believe their current enforcement tools are highly effective. This highlights a widespread gap between regulatory compliance and organisational readiness.

With recent reforms to the Australian Privacy Act and growing pressure for AI model transparency, Josys is calling for Australian organisations to take immediate and coordinated action. This includes auditing AI usage across all teams to close visibility gaps, automating risk assessments based on data sensitivity and business function, enforcing real-time policies aligned to role-based access and risk tiers, and ensuring the organisation is audit-ready with AI-specific compliance reporting. Without these foundations, Yokote says businesses risk falling behind not just in compliance, but in the trust and resilience needed for long-term productivity.

About Josys

Josys is the SaaS Management Platform that simplifies how IT works. Our holistic approach equips IT teams with 360-degree control over their SaaS applications by making it easier to visualise user access, analyse utilisation trends, and automate provisioning processes that will make IT operations run more efficiently.

About the research

This research was conducted by Josys, in collaboration with independent research firm Censuswide, based on a survey of 500 Australian technology decision makers across a range of sectors and company sizes. The study explores how the growing use of unsanctioned AI tools, known as Shadow AI, is increasing risk for organisations, and underscores the urgent need for automated AI and policy governance frameworks.
