Practice Update October 2025

27 October 2025

AI scams in Australia: how to spot them and stay safe

AI now makes it cheap and easy to fake a person’s face and voice. Scammers are using these “deepfakes” in calls, Direct Messages, emails and ads to push investment schemes, steal logins, or socially engineer payments.

Why this matters

  • Australians reported $2.03 billion in scam losses in 2024, down from $2.74 billion in 2023.
  • On social media specifically, $43.4 million in losses were reported in just Jan–Aug 2024, with thousands of “celebrity-bait” deepfake pages and ads removed under Meta’s FIRE program, including those targeting Australian banks.
  • Globally, deepfakes accounted for approximately 40% of biometric fraud in 2024, highlighting just how convincing synthetic voice and video have become.

Real-world cases

  • Celebrity deepfake investment ads. Australia’s consumer watchdog has repeatedly warned about fake news pages and deepfake videos of public figures pushing trading platforms.
  • Deepfake video meetings. In a widely reported case, a finance employee in Hong Kong was tricked by a deepfaked CFO and colleagues on a video call into paying approximately US$25 million, illustrating the convincing and coordinated nature of these attacks.
  • Bank and exchange impersonation alerts. The AFP and the National Anti-Scam Centre (NASC) have issued warnings about bank impersonation scams and crypto-exchange impostors targeting Australians via text messages, emails, and phone calls.

What deepfakes look and sound like

  • Voice cloning: just a few seconds of audio can produce a near-perfect voice. Expect pressure, urgency and requests to move money quickly.
  • Video fakes: slick interviews, Zoom calls, or ads where lip-sync is almost perfect, backgrounds look subtly odd, or lighting on a face doesn’t match the room.
  • Image fakes: profile pics or proof screenshots with mismatched jewellery, blurred ears/hairlines, or warped text.

Tips

  1. Avoid the urgency: Scammers create a sense of panic. Hang up or leave the chat. Call back using a number you find independently (e.g., the number on the back of your bank card, or one from the official website).
  2. Verify out of band: If a boss or family member asks for money, call a known number or set a pre-agreed safe word for video calls.
  3. Challenge the media: Ask the caller to perform a simple action live, such as turning left or showing today’s date on paper. Watch for unusual lighting, frozen teeth/tongue, out-of-sync blinks, or jerky shadows.
  4. Never click payment links in texts: Go directly to your bank app; don’t follow links or numbers supplied in the message.
  5. Treat celebrity money ads as scams: Major platforms in Australia are moving to stricter verification of financial advertisers (including checks against ASIC licensing); if you don’t see clear provenance, assume it’s fake.

Practical prevention

  • Use passkeys or app-based 2FA (prefer authenticator apps over SMS where possible; see the sketch after this list).
  • Use a password manager and create unique passwords for every account.
  • Use PayID name-checking (it shows the recipient’s registered name before you confirm a payment); consider transfer limits and “cooling-off” delays for new payees.
  • Keep devices up to date; use built-in password and website warnings.
  • Hide your voice and video samples from public profiles where practical; lock down who can direct message or tag you.
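
On the first point, it helps to see why authenticator apps beat SMS: the app computes each six-digit code on your device from a shared secret and the current time, so there is no message for a SIM-swapper to intercept. Below is a minimal Python sketch of the standard algorithm (TOTP, RFC 6238) that these apps implement; the secret shown is made up for illustration.

```python
# Minimal sketch of TOTP (RFC 6238), the algorithm authenticator apps
# implement. The code is derived on-device from a shared secret and the
# clock, so there is no SMS for a scammer to intercept.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period              # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example with a made-up secret (the kind encoded in a setup QR code):
print(totp("JBSWY3DPEHPK3PXP"))   # e.g. "492039"; changes every 30 seconds
```

Because the code changes every 30 seconds and never leaves the device, a scammer who hijacks your phone number through a SIM swap gains nothing.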

Scenarios based on actual scam techniques

Scenario 1: The Urgent Bank Security Call

Characters:

  • John, a 52-year-old teacher in Sydney
  • Scammer posing as “Mary from his bank” using a cloned voice

What happened

John received a call late at night. The caller, sounding exactly like his bank’s security officer (based on a voice sample lifted from an old radio interview John once did), told him that his account was under cyberattack and he needed to transfer $25,000 into a safe-holding account urgently.

The caller used urgent, fearful language: “We can see criminals draining your account right now.” John, panicked, made the transfer through the link they texted him.

Implications

  • John lost $25,000, unrecoverable because he authorised the transaction.
  • He spent weeks dealing with ID theft risks after sharing personal details with the caller.
  • Emotional stress: loss of sleep, anxiety about financial security.

What could have stopped it

  • Stop & breathe: Urgent requests = red flag.
  • Verify out-of-band: Call back using the number on the back of the bank card, not the one given in the text.
  • Channel check: Banks never ask you to transfer money to a “safe account”, whether over the phone or via a text link.

Scenario 2: The Celebrity Investment Video

Characters:

  • Priya, a small business owner in Melbourne
  • Scammer running a fake crypto investment ad using a deepfake video of a famous Australian TV presenter

What happened

Priya saw a slick Facebook ad featuring a well-known TV presenter explaining how she “doubled her money” with a new crypto platform. The video was a deepfake, but the lip movements and voice were so nearly perfect that nothing about it looked obviously wrong.

She clicked through, spoke with support staff, and invested $10,000 via bank transfer, expecting guaranteed returns. The platform vanished after two weeks.

Implications

  • Total financial loss.
  • Ongoing spam calls targeting Priya for more investments — she was added to a victim list sold on the dark web.
  • No legal recourse: the ad originated offshore; complex jurisdictional issues.

What could have stopped it

  • Challenge the media: Verify the endorsement through the presenter’s official channels; no genuine investment opportunity relies on urgency or secrecy.
  • Treat celebrity money ads as scams: ASIC warns that promises of guaranteed returns are a hallmark of investment fraud.
  • Report immediately to Scamwatch and eSafety for ad takedown. 

Scenario 3: The Deepfake “Boss” on Video Call

Characters:

  • Li Wei, accounts officer in a Brisbane construction firm
  • Scammers impersonating her CEO and two other managers in a deepfaked Zoom call

What happened

Li Wei joined a Zoom call where she saw her CEO and two colleagues asking her to urgently pay $250,000 to a new overseas supplier. The faces blinked, nodded, and spoke naturally — but it was a fully AI-generated video based on real LinkedIn photos and YouTube speeches.

Trusting the “CEO,” she processed the payment. 

Implications

  • $250,000 company loss; internal investigation triggered.
  • Regulatory reporting obligations under anti-fraud and corporate governance rules.
  • Staff morale issues; fear of disciplinary action despite being a victim herself.

What could have stopped it

  • Challenge the media: Request a live “safe word” or a unique gesture during video calls.
  • Maker-checker control: Payments should require a second verification via a different channel (e.g., a call back to the CEO on a known number); a minimal sketch of this control follows this list.
  • Incident response drill: Staff need training for deepfake risks in payment authorisation.
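
The second point can be captured in a few lines of logic. The sketch below is hypothetical and illustrative only (the names and channel labels are assumptions, not any particular firm’s system), but it shows the two rules that would have stopped this scenario: the person who enters a payment cannot approve it, and approval cannot arrive through the same spoofable channel as the request.

```python
# Hypothetical maker-checker sketch: the maker cannot approve their own
# payment, and approval must come via an independently verified channel
# (not the video call the request arrived on).

from dataclasses import dataclass, field

TRUSTED_CHANNELS = {"callback_known_number", "in_person"}  # assumed labels

@dataclass
class Payment:
    amount: float
    payee: str
    maker: str                            # staff member who entered it
    approvals: set = field(default_factory=set)

    def approve(self, approver: str, channel: str) -> None:
        if approver == self.maker:
            raise PermissionError("maker cannot approve their own payment")
        if channel not in TRUSTED_CHANNELS:
            raise PermissionError(f"channel {channel!r} is not verified")
        self.approvals.add(approver)

    def can_release(self) -> bool:
        return bool(self.approvals)       # needs an independent approver

payment = Payment(250_000, "new overseas supplier", maker="li.wei")
# payment.approve("li.wei", "callback_known_number")  # -> PermissionError
# payment.approve("ceo", "zoom_video")                # -> PermissionError
payment.approve("ceo", "callback_known_number")       # accepted
assert payment.can_release()
```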


20 January 2026

A real-world case study on trust distributions

Mark and Lisa had what most people would describe as a “pretty standard” setup. They ran a successful family business through a discretionary trust. The trust had been in place for years, established when the business was small and cash was tight. Over time, the business grew, profits improved, and the trust started distributing decent amounts of income each year. The tax returns were lodged. Nobody had ever had a problem with the ATO. So naturally, they assumed everything was fine.

This is where the story starts to get interesting.

Year one: the harmless decision

In a good year, the business made about $280,000. It was suggested that some income be distributed to Mark and Lisa’s two adult children, Josh and Emily. Both were over 18, both were studying, and neither earned much income. On paper, it made sense. Josh received $40,000. Emily received $40,000. The rest was split between Mark, Lisa, and a company beneficiary. The tax bill went down (a rough sketch of the numbers appears at the end of this article). Everyone was happy.

But here’s the first quiet detail that mattered later. Josh and Emily never actually received the money. No bank transfer. No separate accounts. No conversations about what they wanted to do with it. The trust kept the funds in its main business account and used them to pay suppliers and reduce debt.

At the time, nobody thought twice. “It’s still family money.” “They can access it if they need it.” “We’ll square it up later.” These are very common thoughts. And this is exactly where risk quietly begins.

Year two: things get a little more complicated

The next year was even better. They used a bucket company to cap tax at the company rate. Again, a common and legitimate strategy when used properly. So the trust distributed $200,000 to the company. No cash moved. It was recorded as an unpaid present entitlement. The idea was that the company would get paid later, when cash flow allowed.

Meanwhile, the trust needed funds to buy new equipment and cover a short-term cash squeeze. The trust borrowed money from the company. There was a loan agreement. Interest was charged. Everything looked tidy on paper.

From the outside, it all seemed sensible. But economically, nothing really changed. The trust made money. The trust kept using the money. The same people controlled everything. The bucket company never actually used the funds for its own business or investments. This detail becomes important later.

Year three: circular money without anyone realising

By year three, things had become routine. Distributions were made to the kids again. The bucket company received another entitlement. Loans were adjusted at year-end through journal entries.

What was really happening was a circular flow. Money was being allocated to beneficiaries, then effectively coming back to the trust, either because it was never paid out or because it was loaned back almost immediately.

No one was trying to hide anything. No one thought they were doing the wrong thing. They were just following what they’d always done. This is how section 100A issues usually arise. Slowly, quietly, and without any single dramatic mistake.
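
To make the appeal of those distributions concrete, here is a rough sketch of the arithmetic. The brackets below are the 2024–25 resident rates and a 25% base-rate-entity company rate, used as assumptions for illustration only; the sketch ignores the Medicare levy, offsets, and the Division 7A and section 100A consequences that are the point of the story.

```python
# Rough, illustrative arithmetic for the Year-one and Year-two splits.
# Assumes 2024-25 resident brackets and a 25% base-rate company rate;
# ignores Medicare levy, offsets, Div 7A and s 100A.

BRACKETS = [  # (lower threshold, marginal rate above that threshold)
    (0, 0.00),
    (18_200, 0.16),
    (45_000, 0.30),
    (135_000, 0.37),
    (190_000, 0.45),
]

def individual_tax(income: float) -> float:
    """Progressive tax on a resident individual's taxable income."""
    owed = 0.0
    for (lo, rate), (hi, _) in zip(BRACKETS, BRACKETS[1:] + [(income, 0.0)]):
        if income > lo:
            owed += (min(income, hi) - lo) * rate
    return owed

# Year one: $40,000 in a low-income adult child's hands...
print(individual_tax(40_000))   # 3488.0
# ...versus the same $40,000 at a parent's assumed 45% marginal rate:
print(40_000 * 0.45)            # 18000.0

# Year two: $200,000 to the bucket company at 25%...
print(200_000 * 0.25)           # 50000.0
# ...versus at an assumed 45% marginal rate:
print(200_000 * 0.45)           # 90000.0
```

On paper the family saved tens of thousands of dollars a year. The catch, as the story above shows, is that those savings only stand up if the entitlements are real, and that is exactly what section 100A tests.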