Ela Mezhiborsky, co-founder and president of guest screening platform Autohost, reveals how AI advancements are driving a shift in the nature and scale of hospitality fraud.
While hospitality has been racing to integrate AI into its operations – chatbots for customer service, dynamic pricing algorithms, personalised recommendations – a parallel and far more threatening AI revolution has been unfolding in the shadows.
Deloitte’s latest forecast delivers a sobering reality check: generative AI will enable $40 billion in fraud losses by 2027, up from $12.3 billion in 2023. That’s not just growth; that’s an explosion – and the hospitality industry is sitting directly in the blast radius.
The uncomfortable truth that not everyone has fully grasped is that every AI advancement that helps legitimate businesses also creates new opportunities for bad actors. The same generative AI that helps you create compelling property descriptions can now produce perfect fake government IDs in under five minutes. The voice synthesis powering your customer service bots can clone any person’s voice with just seconds of audio.
While we’ve been focused on using AI to improve operations and guest experience, criminals have been quietly building an entire fraud ecosystem. They’re not just using AI – they’re industrialising fraud. And if your defences are still calibrated for the threats of three years ago, you’re not just vulnerable – you’re defenceless.
The question isn’t whether you’ll be targeted by AI-powered fraud; the question is whether you’ll recognise it when it happens.
The new criminal toolkit: How AI weaponises fraud
To understand the magnitude of this threat, we need to examine how AI has fundamentally transformed the criminal playbook. This isn’t about making existing fraud “a little bit better”; it’s about creating entirely new categories of crime that were impossible just two years ago.
Identity fraud 2.0: The death of (basic) document verification
Remember when creating a fake ID required expertise? Those days are over.
The old world of document forgery – requiring specialised equipment, printing skills, and significant investment – has been replaced by something far more dangerous. Criminals used to leave traces: wrong fonts, poor image quality, inconsistent formatting. These flaws were detectable.
Today, that entire playbook is obsolete. Services like “OnlyFake” now sell AI-generated government-issued IDs from 26 countries for just $15. Modern AI-generated documents contain valid checksums, consistent barcode data, proper metadata, and correct security feature placement based on analysis of thousands of real documents.
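Why can a forged document carry a “valid” checksum? Because document check digits are computed with public, deterministic algorithms. The sketch below implements the real check-digit calculation from ICAO Doc 9303 (the standard behind passport machine-readable zones): character values weighted 7, 3, 1 and summed modulo 10. Anything that can run this arithmetic – including a fraud service – can emit a document that passes the check, which is why checksum validation proves internal consistency, not authenticity.

```python
def mrz_check_digit(field: str) -> int:
    """Compute the ICAO 9303 check digit for a passport MRZ field."""
    def value(ch: str) -> int:
        if ch.isdigit():
            return int(ch)
        if ch.isalpha():
            return ord(ch.upper()) - ord("A") + 10  # A=10 ... Z=35
        return 0  # the filler character '<' counts as zero

    weights = (7, 3, 1)  # repeating weight pattern defined by the standard
    return sum(value(c) * weights[i % 3] for i, c in enumerate(field)) % 10

# The sample document number used in the ICAO 9303 specification itself:
print(mrz_check_digit("L898902C3"))  # → 6
```

The point is not that this code is dangerous – it is published in an international standard – but that any defence resting on “the numbers add up” is trivially satisfiable by software.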
The reality is stark: any verification process relying solely on document photos essentially amounts to checking whether criminals bothered to spell names correctly.
Synthetic identity fraud: Guests who never existed
Synthetic identity fraud doesn’t impersonate real people; it fabricates humans who never existed. Criminals steal fragments of real data, use AI to generate fake personas with realistic faces and backstories, build those personas’ credibility over time, and then execute at scale. A single criminal organisation can manage hundreds of synthetic identities simultaneously.
Synthetic identity fraud now accounts for 80-85 per cent of all identity fraud cases and costs the financial industry over $30 billion annually. Unlike traditional identity theft, there’s no victim to notice or report it. These are ghost guests – fabricated people with fabricated histories that appear entirely legitimate in every database.
Properties could be hosting guests who literally don’t exist. The fraud only becomes visible after the damage is done – after property is damaged, after chargebacks are filed, after incidents are reported.
Social engineering on steroids: The end of “gut feeling”
For decades, hospitality professionals relied on instinct. Broken English? Red flag. Poor grammar? Suspicious. Vague requests? Warning sign. AI has eliminated every single one of these tells.
Advanced Language Models now craft flawless, human-sounding messages that build instant trust. They write with perfect grammar in multiple languages, maintain natural conversational flow, demonstrate cultural awareness, and simulate emotional intelligence. Modern AI-powered social engineering includes personalised manipulation that analyses booking data to craft targeted messages and learns from successful patterns.
Research from Cornell University demonstrates that AI can craft manipulative messages more persuasively than the average human. Every trust signal that staff have learned to recognise – polite communication, reasonable requests, coherent stories – can now be perfectly replicated by AI.
Voice cloning and deepfakes: When seeing and hearing isn’t believing
Affordable voice cloning services starting under $15/month now offer voice replication requiring only seconds of source audio. A promotional video, a customer testimonial, a recorded call – that’s all a criminal needs to clone a manager’s voice with perfect accuracy.
In Hong Kong, fraudsters used deepfake video and audio to impersonate company executives on Zoom calls, convincing employees to transfer $25.6 million. The technology for real-time deepfake video generation is now commercially available and increasingly affordable.
Any verification process that relies on human judgment – phone calls, video chats, “trust your instincts” – is now compromised. The people staff think they’re talking to might not exist. The voices on the other end of the line might be perfect simulations.
Why hospitality is uniquely vulnerable
Hospitality is built on trust. Guests trust operators with their safety and comfort. Operators trust guests with their properties. Platforms facilitate this trust through verification, communication tools, and dispute resolution. AI fraud exploits the very foundation of this ecosystem.
Traditional trust signals are now compromised. Communication quality used to indicate legitimacy – poor grammar was a red flag – but AI eliminates those errors. Consistency used to matter – real people contradict themselves – but AI-generated backstories are perfectly coherent. Emotional connection used to signal authenticity, but AI can simulate empathy and urgency more convincingly than many humans. Visual verification used to be dependable, but deepfakes have made even video calls unreliable.
The consequences are already visible. Mass booking manipulation is now fully automated. Money laundering operations use AI-generated synthetic identities to create shell companies and obscure illicit fund ownership. Human trafficking organisations use AI-generated identities and deepfaked verification selfies to book accommodations anonymously. Payment fraud operates at scale with AI-enhanced chargebacks and automated card testing.
Most operators have a false sense of security because their verification processes were designed for yesterday’s threats. They were built to catch human criminals. They were not built to catch AI.
Closing the gap: From reactive to proactive
The fundamental problem is simple: most hospitality operations are defending against yesterday’s threats with yesterday’s tools. Manual screening is inconsistent, basic ID checks can’t recognise synthetic identities and AI-generated documents, and relying on “gut feeling” is obsolete when AI can replicate communication perfectly.
The good news? This isn’t inevitable. Solutions exist, and the hospitality industry isn’t defenceless. Modern screening operates differently, moving beyond reactive detection to proactive threat prevention through multi-layered verification, AI-powered threat detection, real-time risk scoring, and data correlation.

But it all starts with awareness. Taking the first step means understanding your specific vulnerabilities, then implementing processes that fit your business – your size, your challenges, your growth plans. There’s no one-size-fits-all solution, but there is a path forward.
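To make “multi-layered verification with real-time risk scoring” concrete, here is a minimal sketch of the idea: several independent signals, each weak on its own, combined into a single score. The signal names, weights, and values below are illustrative assumptions for this article, not any vendor’s actual model – production systems combine far more signals with learned rather than hand-picked weights.

```python
from dataclasses import dataclass


@dataclass
class BookingSignals:
    document_verified: bool      # ID passed liveness and database checks
    identity_age_days: int       # how long this identity has existed online
    payment_name_matches: bool   # cardholder name matches the guest name
    device_reputation: float     # 0.0 (known-bad device) to 1.0 (clean)


def risk_score(s: BookingSignals) -> float:
    """Combine independent signals into a 0-100 risk score (higher = riskier)."""
    score = 0.0
    if not s.document_verified:
        score += 40
    if s.identity_age_days < 90:      # synthetic identities are often "young"
        score += 25
    if not s.payment_name_matches:
        score += 20
    score += (1.0 - s.device_reputation) * 15
    return min(score, 100.0)


# A booking with a valid ID can still score high on the other layers:
booking = BookingSignals(document_verified=True, identity_age_days=30,
                         payment_name_matches=False, device_reputation=0.8)
print(risk_score(booking))  # → 48.0
```

The design point is the layering itself: a forged-but-plausible document defeats one check, but an identity with no history, a mismatched payment name, and a dubious device still raises the combined score.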
Start with honest introspection: Are you relying too heavily on manual review and “gut feeling”? What happens when a sophisticated fraudster targets your property?
Then ask the harder questions: What’s your actual fraud exposure right now? Is your team equipped to handle AI-driven threats? If a major incident happened tomorrow, could you prove you did everything in your power to prevent it?
These conversations matter. They need to happen with your management teams, your platforms, your technology partners.
The fraud landscape has fundamentally shifted. AI hasn’t just made fraud easier – it’s made it invisible, scalable, and nearly impossible to detect with traditional methods. This isn’t a future threat. It’s happening now. The least any of us can do is take the first step and refuse to turn a blind eye to this dark side of the AI revolution.