Seeing Is No Longer Believing with AI Scams
April 24, 2026
A Bay Area family searching for their lost dog recently came close to losing thousands of dollars. Scammers had monitored a public post about the missing pet, then contacted the owners claiming to have the dog. To prove it, they sent what appeared to be a photo of the dog on a veterinary operating table, bleeding and in need of emergency surgery. The image looked real. It was not. It was generated by artificial intelligence.
The family nearly paid.
That story, confirmed across multiple U.S. cases in April 2026, is not just a cautionary tale for pet owners. It's a preview of where business fraud is heading, and in many cases, where it has already arrived.
Psychological, Not Technical
What made the pet scam effective was a fabricated image that created instant emotional urgency. The target was already distressed. The "proof" bypassed skepticism. The payment request followed.
That is how vendor impersonation scams work. An attacker monitors a company's LinkedIn page, website, or press releases to identify vendors, executives, or service providers. They then impersonate one of those parties, often with fabricated supporting documents, and make a financial request with manufactured urgency.
The Better Business Bureau has documented that scammers specifically monitor public lost pet posts to identify emotionally vulnerable targets. Businesses face the same surveillance. The emotional trigger differs, but the operational structure is identical: build credibility, create urgency, generate convincing proof, and get paid before anyone asks questions.
Images Are Now Part of the Attack Surface
Most security awareness training focuses on links, attachments, and email headers. Little of it addresses images.
That gap is now being exploited. AI tools can generate photorealistic images on demand, at low cost, with no special skills required. Synthetic invoices, fabricated shipping confirmations, and manipulated screenshots of bank portals and identity documents are appearing in business fraud cases.
The Red Flags Are Behavioral
Detecting AI-generated images is getting harder, not easier. Tools can help: reverse image search services like Google Images and TinEye can reveal reused or stock photos, dedicated detectors such as Netarx, Hive, Sightengine, and Reality Defender analyze images for synthesis artifacts, and EXIF metadata viewers can reveal inconsistencies. Visual inspection sometimes catches problems: inconsistent textures, unnatural lighting, blurry or warped backgrounds, and edges that look cut or blended. But no single detection method is reliable, and the technology improves constantly.
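As a minimal illustration of the metadata check, the sketch below uses the Pillow imaging library to pull EXIF fields from an image and flag gaps. The function name `exif_red_flags` and the specific fields checked are illustrative choices, not a standard; and missing EXIF is only a weak signal, since screenshots and social-media re-uploads also strip metadata.

```python
from PIL import Image
from PIL.ExifTags import TAGS


def exif_red_flags(img: Image.Image) -> list[str]:
    """Return metadata warnings for an image.

    Sparse or absent EXIF is common in AI-generated images,
    but is NOT proof of fabrication on its own.
    """
    exif = img.getexif()
    flags = []
    if not exif:
        # Freshly generated images typically carry no camera metadata.
        flags.append("no EXIF metadata at all")
        return flags
    # Map numeric EXIF tag IDs to human-readable names.
    named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    for field in ("Make", "Model", "DateTime"):
        if field not in named:
            flags.append(f"missing camera field: {field}")
    software = str(named.get("Software", ""))
    if software:
        # An editing or generation tool may identify itself here.
        flags.append(f"software tag present: {software!r} (worth checking)")
    return flags


# Demo: an image created in memory has no camera metadata.
synthetic = Image.new("RGB", (64, 64))
print(exif_red_flags(synthetic))  # -> ['no EXIF metadata at all']
```

Treat the output as a prompt for human verification, not a verdict: a clean EXIF block can be forged, and a missing one can be innocent.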
What remains more consistent is how these scams behave. The real warning signs are urgency, pressure to act immediately, requests for payment by Zelle, Venmo, wire transfer, or gift card, refusal to allow verification through a different channel, and avoidance of any real-time contact.
In the pet scam version, callers or texters insist payment must be received before the owner can see the dog in person. In the business version, vendors "need the invoice paid today" or an executive "can't get on a call right now" but needs the transfer approved immediately.
Credential Harvesting
One of the more technically sophisticated variations of the pet scam involves a text message. The scammer claims to have found the missing pet but asks the owner to send a six-digit Google verification code to "prove ownership." No photos are offered. No real details are provided. Just a request for that code.
What the target doesn't realize is that the attacker is using the code to register the victim's phone number on Google Voice, effectively stealing it for future fraud.
That technique has a direct business parallel. Credential harvesting through false pretexts, asking employees to confirm a code, approve a login, or verify an account for reasons that sound plausible in the moment, is a standard tactic in multi-factor authentication (MFA) bypass attacks. The stakes in a business context can be considerably higher.
Exploiting Helpful Parties
PawBoost, which operates as a national alert network for missing pets, also warns about a less obvious variation. Someone who has found a lost animal can become a target as well. A scammer contacts the good Samaritan pretending to be the pet's owner. The goal is not money directly. It is to take possession of the animal to resell it.
In business terms, this maps to third-party impersonation in the supply chain. An attacker doesn't always go after the primary target. Sometimes they insert themselves between two legitimate parties, posing as one to manipulate the other. Supply chain attacks, vendor email compromise, and fake contractor relationships all follow this model. The party doing the right thing ends up being the one exploited.
What Businesses Should Be Doing
The controls aren't complicated. But they require discipline.
Any payment request, regardless of how convincing the documentation looks, should require verification through a second, independent channel. That means calling a vendor at a known phone number, not the one provided in the email. It means confirming wire changes with a live person before processing. It means building dual-approval requirements for financial transactions above a set threshold. And establishing a shared passcode or phrase, at home and at work, adds a simple extra layer of verification against impersonation scams like this one.
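The dual-approval control above can be sketched in a few lines. This is a toy model, not a payments system: the `PaymentRequest` class, the $10,000 threshold, and the two-approver rule are all illustrative assumptions to make the logic concrete.

```python
from dataclasses import dataclass, field

# Assumption: amounts in dollars; tune the threshold to your risk appetite.
APPROVAL_THRESHOLD = 10_000


@dataclass
class PaymentRequest:
    amount: float
    payee: str
    approvers: set[str] = field(default_factory=set)

    def approve(self, employee: str) -> None:
        # A set ensures the same person cannot approve twice.
        self.approvers.add(employee)

    def can_release(self) -> bool:
        # Large payments need two distinct approvers; small ones need one.
        needed = 2 if self.amount >= APPROVAL_THRESHOLD else 1
        return len(self.approvers) >= needed


req = PaymentRequest(amount=25_000, payee="Acme Supplies")
req.approve("alice")
print(req.can_release())  # False: a second approver is required
req.approve("alice")      # duplicate approval does not count twice
print(req.can_release())  # False
req.approve("bob")
print(req.can_release())  # True
```

The point of the design is that no single employee, however convinced by a fabricated invoice, can release a large payment alone.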
Staff also need to understand that AI has changed what "proof" means. A document that looks official, a photo that looks authentic, a screenshot that appears to be a real bank confirmation: none of these can be treated as verified on their own. Digital evidence requires corroboration through other channels.
Employees should also know that any unexpected request for a verification code, even one that arrives with a plausible explanation, is a significant red flag. Legitimate systems don't ask you to read a code aloud to a third party or forward it via text.
Bigger Picture
The missing dog story resonates because it's visceral and human. But the tactic it illustrates is not new, and it's not going away. AI has simply made it cheaper, faster, and more scalable.
Scams used to leave visible seams: poor grammar, obvious inconsistencies, generic stock images. Those signals are disappearing. What replaced them is a more polished version of the same manipulation, one that exploits trust in visual information at a moment when that trust is most automatic.
For businesses, the right posture is to treat all inbound financial "proof" as unverified until confirmed through an independent channel. Not because every document is fake, but because the cost of verifying is low and the cost of being wrong is not.
The family looking for their dog had no reason to question a photo of their missing pet. That is what made the scam effective. Businesses do have reason to question documentation, payment requests, and verification codes that arrive through unexpected channels. The only question is whether that habit gets built before a loss or after one.
Need Help Detecting AI Scams?
STACK Cybersecurity offers cybersecurity awareness training that includes deepfake and AI image detection skills.
Email: info@stackcyber.com
Phone: (734) 744-5300