AI-driven fraud (voice cloning, video deepfakes, synthetic-identity schemes, and prompt-injection attacks against your own AI tools) has shifted from theoretical to operationally common: documented losses from deepfake-enabled CEO fraud, fake job-applicant schemes, and synthetic vendors now run to multiple millions of dollars per incident in many sectors. This report sets out the AI-fraud framework for your chosen jurisdiction and industry: the recent incident pattern, the legal and insurance framework for recovery, the protocols for engaging regulators and law enforcement, and the verification-design expectations that separate defensible practice from negligent practice. It documents the scenarios that have produced concentrated losses, the warning indicators, the impact ranges, and the controls framework (verification protocols, training, and technical detection), with explicit triggers for engaging cyber-fraud or AI-incident counsel.
This is reference material for informed readers, not advice.