Southeast Asia has become the epicenter of global cyber scams, where high-tech fraud meets human trafficking. In countries such as Cambodia and Myanmar, criminal syndicates run industrial-scale “pig butchering” operations: scam compounds staffed by trafficked workers who are forced to con victims in wealthier markets such as Singapore and Hong Kong.
The scale is staggering: one United Nations estimate puts global losses from these schemes at $37 billion. And it could soon get worse.
The region's cybercrime boom is already shaping politics and policy. Thailand reported a drop in Chinese visitors this year after a Chinese actor was kidnapped and forced to work in a Myanmar-based scam compound; Bangkok is now struggling to convince tourists that it is safe to come. Singapore just passed an anti-scam law that lets law enforcement freeze the bank accounts of scam victims.
But why has Asia become so notorious for cybercrime? Ben Goodman, Okta's general manager for Asia-Pacific, notes that the region has some unique dynamics that make cyber scams easier to pull off. For example, the region is a “mobile-first market”: popular mobile messaging platforms such as WhatsApp, Line, and WeChat facilitate direct connections between scammer and victim.
AI is also helping scammers overcome Asia's linguistic diversity. Goodman notes that machine translation, while “an amazing use case for AI,” also makes it “easier for people to be lured into clicking the wrong link or approving something.”
Nation-states are getting involved, too. Goodman points to allegations that North Korea uses fake employees to gather intelligence and funnel much-needed cash into the isolated country.
New Risk: “Shadow” AI
Goodman is also concerned about a newer workplace risk: “shadow” AI, or employees using personal accounts to access AI models without company oversight. “That could be someone preparing a presentation for a business review, going into ChatGPT on their own personal account, and generating an image,” he explained.
Employees may unknowingly upload confidential information to a public AI platform this way, creating “a lot of risk in terms of information leakage.”
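One common mitigation, sketched below purely as an illustration, is to route employee prompts through a gateway that redacts likely-sensitive content before it reaches a public AI service. The pattern names and regexes here are assumptions made for the example, not any vendor's actual rules.

```python
import re

# Hypothetical patterns a company might flag before a prompt leaves its network.
# Real deployments use dedicated DLP tooling; these regexes are illustrative only.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace likely-sensitive spans with placeholders before the prompt is
    forwarded to an external AI service. Returns the cleaned prompt plus the
    categories that were redacted, for audit logging."""
    hits = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt, count = pattern.subn(f"[REDACTED:{label}]", prompt)
        if count:
            hits.append(label)
    return prompt, hits

if __name__ == "__main__":
    cleaned, flagged = redact(
        "Summarize Q3 results; card 4111 1111 1111 1111, contact a.lee@example.com"
    )
    print(cleaned)  # sensitive spans replaced with placeholders
    print(flagged)  # ['credit_card', 'email'] -> log for security review
```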

Agentic AI can also blur the line between personal and professional identities: for example, an agent tied to your personal email rather than your corporate one. “As a corporate user, my company provides me with the apps to use, and it wants to manage how I use them,” he explained.
But “I never use my personal profile for a corporate service, and I never use my corporate profile for a personal service,” he added. “The ability to delineate who you are, whether at work using work services or in your personal life using your own personal services, is how we think about customer identity versus corporate identity.”
For Goodman, this is where things get complicated. AI agents are empowered to make decisions on behalf of users, which makes it important to define whether a user is acting in a personal or a corporate capacity.
“If your human identity is stolen, the blast radius in terms of what can be done quickly to steal money from you or damage your reputation is much bigger,” Goodman warned.
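To see why that scoping matters, consider a toy sketch of delegated authority for an AI agent; this is a hypothetical model, not Okta's product or API. The agent may act only under the single identity it was granted, within enumerated permissions and a hard spending cap, so a stolen or misbehaving agent in one capacity cannot reach into the other.

```python
from dataclasses import dataclass

# Illustrative only: a toy model separating a human's personal identity from
# their corporate one when delegating authority to an AI agent.

@dataclass(frozen=True)
class Identity:
    subject: str   # who the human is
    domain: str    # "personal" or "corporate"

@dataclass(frozen=True)
class AgentGrant:
    acting_for: Identity        # the one identity the agent represents
    scopes: frozenset[str]      # narrowly enumerated permissions
    max_spend_usd: float = 0.0  # hard cap limits the "blast radius"

def authorize(grant: AgentGrant, action: str, identity: Identity,
              cost_usd: float = 0.0) -> bool:
    """Allow an agent's request only if it acts under the identity it was
    delegated, within its enumerated scopes and spending cap."""
    return (
        identity == grant.acting_for   # no crossing personal/corporate lines
        and action in grant.scopes
        and cost_usd <= grant.max_spend_usd
    )

corp = Identity("a.lee", "corporate")
personal = Identity("a.lee", "personal")
grant = AgentGrant(acting_for=corp,
                   scopes=frozenset({"calendar:write", "purchase"}),
                   max_spend_usd=50.0)

assert authorize(grant, "calendar:write", corp)               # permitted
assert not authorize(grant, "purchase", personal)             # wrong capacity
assert not authorize(grant, "purchase", corp, cost_usd=500)   # cap exceeded
```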