The Real Issue Behind “Poor Governance”: A Disconnect Between Purpose and Means
A recent survey revealed that a staggering 80% of companies that have introduced generative AI are suffering from “poor governance.” This figure starkly illustrates how many organizations, while adopting AI as a “magic tool,” are neglecting the crucial “design” of how to integrate it into their business. Governance is not merely about creating rules and monitoring. It is the higher-level management design itself—how to “translate” and “position” this new technology of AI to realize your company’s “desired business outcomes.” The 80% deficiency is proof that this design process is being skipped.
The Misconception of AI Governance: Is it About Control or Design?
Many companies, especially SMEs prioritizing speed, tend to view AI governance as an issue of “risk management” or “compliance.” They focus solely on creating rules like “don’t input confidential information” or “always have a human check the output.” This is necessary but insufficient. As a JBpress article points out, unexpected behaviors arising from interactions between AIs, or unsettling actions such as an AI lying to protect a “companion,” have a complexity that simple operational rules cannot fully contain.
The question here is whether to view AI as a “dangerous object to be controlled” or as a “design object for achieving business goals.” Adopting the latter perspective completely transforms the governance challenge. The goal becomes not “zero violations,” but “extracting maximum business value from AI within an acceptable range of risk.” Just as reports on governance failures within the KDDI group pointed to the existence of “sacred domains,” governance fails the moment technology (in this case, AI) becomes a “sacred domain we don’t touch because we don’t understand it.”
Incorporating the “Four Elements of Safe AI Use” into Design
An ITmedia article lists four elements for safe AI use: “Policy,” “Responsibility Structure,” “Education,” and “Technical Measures.” This is an excellent framework, but if SMEs simply adopt it as a manual, it will become a mere formality. The key is to design these elements by working backward from your own “business purpose.”
For example, suppose your business goal is “to reduce customer inquiry response time by 30%.” The design for AI implementation starts here.
- Policy: Clearly define the purpose and role. For instance: “The purpose is to improve inquiry response efficiency. AI use is limited to initial responses and information extraction. Final decisions and customer information verification are performed by humans.”
- Responsibility Structure: Design responsibility and authority together. For example: “The Sales Department head who promoted the introduction bears ultimate responsibility for customer complaints due to AI output errors. The daily output check is handled by the customer support team.”
- Education: Create opportunities for teams to discuss not “how to use AI,” but “how to achieve our business goals using AI.” Also clarify the reporting route when AI errors are discovered.
- Technical Measures: Consider system designs that prevent highly confidential customer data from flowing to the AI, and introduce simple tools to check if response tones align with your brand values.
In this way, the four elements are merely “implementation devices” derived from the purpose. Start the design from the devices instead, and you end up with an AI that has rules but no users.
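The purpose-first derivation above can be made concrete as a data structure. The following is a minimal sketch (all class and field names are hypothetical, invented for illustration) showing how each of the four elements becomes an explicit field derived from one stated business purpose, rather than a free-floating rule:

```python
from dataclasses import dataclass, field

@dataclass
class AIGovernancePolicy:
    """Purpose-first governance design: every field below is derived
    from the business purpose, not from generic compliance rules."""
    business_purpose: str          # the outcome the AI is meant to serve
    allowed_uses: list             # Policy: the only tasks AI may perform
    human_final_decision: bool     # Policy: humans make final decisions
    accountable_owner: str         # Responsibility: who answers for AI errors
    daily_reviewer: str            # Responsibility: who checks daily output
    error_report_route: str        # Education: where staff report AI mistakes
    blocked_data: list = field(default_factory=list)  # Technical: data that must not reach the AI

# Hypothetical instance for the inquiry-response scenario in the text:
policy = AIGovernancePolicy(
    business_purpose="Reduce customer inquiry response time by 30%",
    allowed_uses=["initial response drafts", "information extraction"],
    human_final_decision=True,
    accountable_owner="Head of Sales (who promoted the introduction)",
    daily_reviewer="Customer support team",
    error_report_route="Report to daily reviewer; escalate to accountable owner",
    blocked_data=["customer personal data", "contract terms"],
)
```

The point of writing it down this way is that an empty or vague `business_purpose` becomes immediately visible, which is exactly the design failure the 80% figure describes.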
The First Step in “AI Governance Design” for SMEs, Starting Today
It is not realistic for an SME to hire a dedicated Chief Risk Officer (CRO) from the private sector, as a major corporation or university might. However, an SME’s agility allows for a more fundamental kind of design.
Step 1: Verbalizing the Purpose and Defining “Acceptable Risk”
First, write down in one sentence what you want to achieve with AI. “Increase sales” is too vague. You need specificity, like: “Reduce the workload for creating cross-sell email copy for existing customers, freeing up 5 more hours per week for sales to focus on new business development.”
Next, discuss “how much risk is acceptable” to achieve that goal. For example: “It’s acceptable to send 10% of AI-generated email copy without human review, but all communications involving new product information must be fully checked.” Risk is not 0 or 100; it’s a continuum from 1 to 99. Demanding zero risk inflates costs and creates an unusable system.
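The “acceptable risk” discussion above can be encoded as a reviewable rule. The sketch below (function name and thresholds are illustrative, matching the example figures in the text) shows how “10% of routine copy may go unreviewed, but new-product communications are always checked” becomes an explicit, auditable decision rather than an ad-hoc habit:

```python
import random
from typing import Optional

def requires_human_review(contains_new_product_info: bool,
                          unreviewed_quota: float = 0.10,
                          rng: Optional[random.Random] = None) -> bool:
    """Decide whether a piece of AI-generated email copy needs human review.

    Encodes the example risk rule from the text: anything mentioning
    new product information is always checked; otherwise, up to ~10%
    of routine copy may be sent without review."""
    if contains_new_product_info:
        return True  # zero tolerance for this category
    rng = rng or random.Random()
    # Randomly sample which routine copy skips review (~unreviewed_quota share).
    return rng.random() >= unreviewed_quota
```

Making the quota a parameter matters: when the business later decides risk tolerance should tighten or loosen, the change is one number in one place, not a renegotiation of unwritten practice.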
Step 2: Listing Options and Incorporating “Reversibility”
AI introduction may seem like one big choice, but it’s actually an accumulation of countless small choices. Always consider three or more main options side-by-side.
- Option A: Introduce a general-purpose AI (like ChatGPT) via subscription and provide thorough internal training.
- Option B: License a specialized tool for a specific task (e.g., summarizing technical documents).
- Option C: Start with a 3-month pilot project testing Option A with one team, setting evaluation criteria for verification.
Many companies think only in terms of Option A or B, and when that fails, conclude that “AI is unusable.” Options with high “reversibility,” such as Option C, are particularly effective. For the pilot, set success criteria in advance, such as “copy creation time reduced by 20%, and customer-misleading expressions occurring no more than once a month.” If these aren’t met, stop decisively. Building this “reversibility” into the design from the start is the most robust form of risk design.
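Pre-agreed success criteria are only “reversible” if the stop decision is mechanical. A minimal sketch of the decision, using the two example criteria from the text (function name and threshold values are illustrative):

```python
def pilot_should_continue(time_reduction_pct: float,
                          misleading_expressions_per_month: int) -> bool:
    """Apply the pre-agreed pilot criteria from the text:
    copy creation time reduced by at least 20%, AND customer-misleading
    expressions occurring no more than once a month.
    If either criterion fails, the reversible design says: stop."""
    return (time_reduction_pct >= 20.0
            and misleading_expressions_per_month <= 1)

# Hypothetical 3-month pilot outcomes:
# pilot_should_continue(25.0, 1)  -> True  (both criteria met: continue)
# pilot_should_continue(25.0, 3)  -> False (too many risky outputs: stop)
```

Note the `and`: a large efficiency gain does not buy back an exceeded risk threshold, which is what “decisively stop” means in practice.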
Step 3: Using Experts as “Translators”
Don’t ask legal or IT experts, “Is it okay to introduce AI?” They are not “decision-makers” but “translators.” The question should be: “To achieve our goal of ‘using AI for initial customer inquiry responses,’ what are the legal and technical conditions for success?” The expert’s role is to translate your business purpose into the language of law and technology and present a feasible form. What you need for design is not a “no” conclusion, but information stating, “It’s feasible if you meet XX conditions.”
Avoid Creating “Sacred Domains”; Think Holistic Optimization
As shown by the KDDI case and reforms at the University of Tokyo Hospital, governance failure occurs when “sacred domains”—areas left untouched “because we don’t understand them”—emerge. AI is becoming that “sacred domain” in many companies today. The more complex the technology, the more management tends to “leave it to the experts.” But this severs purpose from means.
The survey result showing 80% poor governance in generative AI is both a warning and an opportunity. While large corporations struggle with rigid rule-making, SMEs can leverage their agility to quickly implement a more fundamental and powerful form of governance: “business-purpose-driven AI design.” This is not about “taming” AI, but about thinking about how to design and control it as a “new engine” that makes your company’s business “vehicle” run better. The first step is not a grand plan, but putting a single, small business purpose into words and beginning to “translate” the technology of AI toward that goal.

