
Is AI Surveillance a Panacea? The U.S. Subsidy Scandal Reveals “Governance Gaps”


The “25 Trillion Yen” Governance Flaw Exposed by AI Surveillance

The U.S. government is advancing plans to use AI (Artificial Intelligence) to monitor fraudulent subsidy claims. The target is approximately 25 trillion yen (approx. $157.5 billion USD) in “improper” payments. This news, reported by Nikkei, may at first appear to be a “cutting-edge effort to prevent fraud with technology.” Viewed through the lens of a governance designer, however, the news reveals a completely different landscape. It points to a more fundamental problem: before anyone tried to solve the issue with technology, the underlying design itself was flawed.

For SME managers, applying for subsidies and grants is a familiar challenge. They are often swamped with paperwork during application and burdened with usage reports after approval. The scale of 25 trillion yen is beyond imagination. Yet, regardless of scale, the essential governance challenge lurking here is universal. It is this question: before introducing an AI that “monitors the outcome (fraud),” why was a “fraud-prone application and approval process” designed in the first place?

Handing Surveillance Over to AI Hides the Fundamental Design Flaws

Let’s examine the U.S. case in detail. According to reports, the AI functions as a “watchdog,” learning from past fraud patterns to detect suspicious applications. This may indeed be more efficient than having humans check vast numbers of applications. However, there is a major pitfall here: the more AI-based ex-post monitoring is strengthened, the easier it becomes for an organization to look away from the “inherent design flaws in the process itself.”

The essence of governance lies not in “detecting violations” but in “designing mechanisms from the outset where violations are less likely to occur.” In the case of subsidy applications, the priority is to build processes where fraud is “physically difficult” to commit—through the design of application forms, decentralization of approval authority, simplification and automation of usage reports, etc. AI surveillance is merely a secondary or tertiary measure for when such primary preventive measures are insufficient.

The Blind Spot in “Application Processes” Relevant to SMEs

This directly relates to the daily operations of SMEs. Consider, for example, an employee expense reimbursement process.

  • Case A (Poor Design + Enhanced Monitoring): Expense policies are complex and unclear, and receipt submission methods are inconsistent. An accounting staff member later checks each receipt one by one, flagging suspicious items. This is, in effect, surveillance by a “human AI.”
  • Case B (Process Design Optimization): Expense claims are digitized, with approval flows set automatically by category. Receipts are uploaded instantly via smartphone photos, and the policies are displayed right on the application screen. “Physical constraints” that make fraud difficult are built in from the start.

Case A follows the same logic as the U.S. government’s AI surveillance. Case B represents the “higher-order design” of true governance. The problem is that many organizations, citing cost and effort, tend to choose the path of Case A.
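To make Case B concrete, here is a minimal sketch of what such built-in “physical constraints” might look like in code. Everything in it (the policy table, category names, limits, and approver roles) is a hypothetical illustration, not a description of any real system.

```python
# A minimal sketch of Case B: constraints built into the claim process itself.
# The policy table, categories, limits, and approver roles are all hypothetical.

EXPENSE_POLICY = {
    # category: (per-claim limit in yen, approver role assigned by rule)
    "travel":        (50_000, "department_head"),
    "entertainment": (30_000, "director"),
    "supplies":      (10_000, "team_lead"),
}

def submit_claim(category: str, amount: int, receipt_photo: bytes | None) -> str:
    """Reject structurally invalid claims at submission time,
    instead of hunting for them in an after-the-fact review."""
    if category not in EXPENSE_POLICY:
        # The valid categories are shown on the form itself.
        raise ValueError(f"Unknown category: {category}")
    if receipt_photo is None:
        # No receipt photo, no claim -- a physical constraint, not a warning.
        raise ValueError("A receipt photo is required to submit.")
    limit, approver_role = EXPENSE_POLICY[category]
    if amount > limit:
        raise ValueError(f"{category} claims above {limit} yen are not accepted.")
    # The approval route is assigned by rule, not chosen by the applicant.
    return approver_role

# A valid claim is routed automatically; an invalid one never enters the system.
print(submit_claim("supplies", 8_000, b"<photo>"))  # -> team_lead
```

The design point: the fraud-prone paths are closed at the moment of input, so there is far less for any watchdog, human or AI, to catch later.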

Integrating AI as an “Execution Tool,” Not a “Decision-Making Tool”

So, how should AI be integrated into governance? The answer is to position AI not as a “monitor,” but as a “tool that reliably executes excellent processes.”

In the expense reimbursement example, AI’s role is not to “look for suspicious receipts.” Rather, its functions should look like the following (a rough sketch in code follows the list):

  • Automatically suggesting appropriate expense categories based on the applicant’s input.
  • Automatically routing to the optimal approver based on past approval patterns.
  • Cross-referencing with project budgets and providing advance warnings for potential overruns.
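As one illustration of these three roles, here is a minimal, rule-based stand-in. A real system would use learned models rather than keyword tables; the keywords, approval history, and budget figures below are invented for the example.

```python
# A rule-based stand-in for the three AI-assist roles above.
# Keyword lists, approval history, and budgets are hypothetical.

CATEGORY_KEYWORDS = {"taxi": "travel", "train": "travel", "dinner": "entertainment"}
APPROVAL_HISTORY = {"travel": "department_head", "entertainment": "director"}
PROJECT_BUDGETS = {"P-001": {"budget": 1_000_000, "spent": 920_000}}

def suggest_category(description: str) -> str | None:
    """Role 1: suggest an expense category from the applicant's free-text input."""
    for keyword, category in CATEGORY_KEYWORDS.items():
        if keyword in description.lower():
            return category
    return None  # fall back to manual selection

def route_approver(category: str) -> str | None:
    """Role 2: route to the approver who most often handled this category."""
    return APPROVAL_HISTORY.get(category)

def budget_warning(project_id: str, amount: int) -> str | None:
    """Role 3: warn BEFORE submission if this claim would overrun the budget."""
    p = PROJECT_BUDGETS[project_id]
    if p["spent"] + amount > p["budget"]:
        return f"Warning: this claim would exceed the budget of project {project_id}."
    return None

print(suggest_category("Taxi to client office"))  # -> travel
print(route_approver("travel"))                   # -> department_head
print(budget_warning("P-001", 100_000))           # -> warning string
```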

In this way, by embedding AI within the process to reduce human error and effort, you shrink the very gaps where fraud can creep in. This is the concept of “1-99 risk design”: reducing risk to zero is impossible, but it is entirely possible to design processes that significantly lower the probability of problems occurring and limit their impact when they do.

Practical Steps: Identifying Your Company’s “AI-Compatible Processes”

Here are three concrete actions SMEs can start tomorrow.

1. List “Repetitive Judgments”
First, identify routine judgments frequently made within the company. Examples include expense approvals, paid leave requests, initial responses to customer complaints, and purchase orders. For these processes, articulate “what criteria the person in charge uses to make each judgment.” This becomes the prototype of the “rules” to teach the AI.
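For instance, a paid leave request might be articulated as explicit rules like the sketch below. The criteria themselves are hypothetical; the point is turning an implicit standard into something written down and testable.

```python
# Articulating one "repetitive judgment" (paid leave approval) as explicit rules.
# The criteria are hypothetical examples; yours will differ.

from datetime import date

def can_auto_approve_leave(requested: date, today: date,
                           remaining_days: float,
                           team_absences_that_day: int) -> bool:
    """The written-down version of a judgment a manager makes implicitly."""
    enough_notice = (requested - today).days >= 3   # at least 3 days' notice
    has_balance   = remaining_days >= 1             # leave balance remains
    team_covered  = team_absences_that_day < 2      # team is not short-handed
    return enough_notice and has_balance and team_covered

# Once the criteria are explicit, they can be reviewed, refined, and
# eventually handed to a tool; edge cases still go to a human.
print(can_auto_approve_leave(date(2025, 6, 10), date(2025, 6, 1), 5.0, 0))  # True
```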

2. Deconstruct Processes into “Input, Process, Output”
For example, for a purchase order: “Input” is the purchase request, “Process” is budget checking and approver assignment, “Output” is creating and sending the purchase order. Focus on the “Process” part where human error or delays most frequently occur. Consider whether simple automation (RPA) or AI assistance can be introduced here.
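The purchase order example might be decomposed like the sketch below, with the error-prone “Process” step isolated so it can be automated later. The budget figures and the approver rule are invented for illustration.

```python
# Decomposing the purchase order flow into Input -> Process -> Output.
# Budget figures and the approver rule are hypothetical.

from dataclasses import dataclass

@dataclass
class PurchaseRequest:          # Input: the purchase request
    item: str
    amount: int
    department: str

@dataclass
class PurchaseOrder:            # Output: the order to create and send
    item: str
    amount: int
    approver: str

DEPARTMENT_BUDGETS = {"sales": 500_000}

def process_request(req: PurchaseRequest) -> PurchaseOrder:
    """Process: budget check and approver assignment -- the step where
    human error and delays concentrate, and the first automation target."""
    budget = DEPARTMENT_BUDGETS.get(req.department, 0)
    if req.amount > budget:
        raise ValueError(f"Request exceeds the {req.department} budget.")
    approver = "director" if req.amount > 100_000 else "department_head"
    return PurchaseOrder(req.item, req.amount, approver)

print(process_request(PurchaseRequest("laptop", 150_000, "sales")))
# -> PurchaseOrder(item='laptop', amount=150000, approver='director')
```

Isolating “Process” behind a single function like this makes it the natural seam for introducing RPA or AI assistance without touching Input or Output.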

3. Ask Experts for “Conditions for Success,” Not Just “Feasibility”
When consulting legal or tax experts about introducing an AI tool, do not ask, “Is this legally okay?” This will only yield a “0 or 100” answer. Instead, ask for the “conditions for success”: “We want to automate and streamline this process. From legal and tax perspectives, what are the key points we must adhere to regarding data handling and record-keeping?” Experts cannot reduce risk to zero, but they can outline the conditions for success within an acceptable level of risk (the 1-99 range).

From “Surveillance” to “Design”: Elevating the Perspective on Governance

The U.S. government’s AI surveillance plan, while massive in scale, is a classic example of governance thinking that leans toward “ex-post monitoring.” It reflects a common pattern: as organizations grow larger, their entrenched, cumbersome processes become harder to change, so they try to cover the gaps with surveillance instead.

However, SMEs have a significant advantage. Their smaller size means the “cost of change” to redesign processes from scratch is relatively low. While large corporations invest heavily in AI surveillance systems, SMEs can be quicker to build “smart processes from the ground up” that leverage AI.

The key is not to be swayed by the technology of AI itself. AI is merely one “tool” for executing superior governance design more reliably and efficiently. The task that comes first is to articulate and design the optimal flow of decision-making and execution needed to realize the business your company aspires to run.

The news of using AI to monitor 25 trillion yen in fraud poses this question to us: “In your company, before strengthening surveillance after a problem occurs, how seriously are you designing mechanisms where problems are less likely to occur in the first place?” Confronting this question is the first step toward practical governance in the age of technology.
