Across Asia-Pacific and Japan (APJ), technology leaders are racing to embed AI into every stage of the software development lifecycle (SDLC). From Singapore’s financial institutions to India’s fast-moving e-commerce startups, coding assistants promise faster delivery cycles and new innovation capacity.
Yet many teams are discovering the Engineering Productivity Paradox: even as AI generates dramatically more code, overall engineering velocity and quality are not improving proportionately. The reason is simple: AI-generated code, just like human-written code, must still be reviewed, verified, and secured, and that verification step has become the new bottleneck.
Observed LLM coding personalities
Engineering teams are increasingly treating Large Language Models (LLMs) as key contributors, not just accelerators. And just like human developers, these LLMs have their own "coding personalities" and resulting quality profiles. Understanding these distinct personalities is crucial to managing them and deciding how each should be supervised.
Sonar’s research, which analysed 4,442 Java assignments from six leading LLMs, highlights core archetypes:
- The Baseline Performer (e.g., GPT-5 minimal): Delivers strong performance, but introduces a new class of risk.
- The Senior Architect (e.g., Claude Sonnet 4): Ambitious and sophisticated, prone to complex bugs.
- The Rapid Prototyper (e.g., OpenCoder-8B): Fast, concise, but tends to leave behind technical debt.
- The Efficient Generalist (e.g., GPT-4o): A balanced performer, but occasionally introduces subtle logical flaws.
This data demonstrates that AI is not a panacea; its output is complex, inconsistent, and often inefficient, increasing technical debt and introducing security concerns.
From Japan to India: not all AI coders think alike
The APJ region's software landscape is diverse and fast-moving, with rapid release cycles and varying regulatory expectations. These conditions amplify the consequences of mismatched AI coding personalities—whether a rapid prototyper leaves behind critical technical debt or a senior architect produces complex, fragile code.
Across the region, early adopters are seeing the difference. South Korea’s consumer-tech firms use fast, concise models to accelerate iteration, while in India, e-commerce and fintech teams leverage AI to speed up payments and checkout workflows, paired with strong verification to safeguard transactions.
In sectors like enterprise tech in Singapore, where trust, compliance, and risk management are paramount, AI-assisted coding must go beyond merely accelerating development. Even high-velocity prototyping is only valuable when accompanied by structured verification, ensuring code is secure, maintainable, and production-ready.
Vibe, then verify
The solution to achieving productive AI adoption lies in a new operational philosophy: vibe, then verify.
- Vibe: Let AI assist with ideation and speed, using its strengths to unblock creativity and accelerate early-stage coding.
- Verify: Enforce rigorous, automated checks to ensure the generated code is secure, maintainable, and production-ready.
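The "verify" half can be made concrete as an automated pre-merge gate. The sketch below is purely illustrative (the rule names and regex patterns are assumptions for this example, not Sonar's actual checks, and a real pipeline would use a full static-analysis engine rather than regexes): it scans a generated snippet for obvious red flags before it ever reaches human review.

```python
import re

# Illustrative red-flag rules; a production gate would rely on a proper
# static-analysis engine, not pattern matching.
RED_FLAGS = {
    # Credentials assigned as string literals
    "hardcoded_secret": re.compile(
        r"(password|api_key)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE
    ),
    # Bare "except:" that silently swallows every error
    "broad_exception": re.compile(r"except\s*:"),
    # SQL built by string concatenation inside execute(...)
    "sql_concat": re.compile(r"execute\([^)]*\+\s*\w+"),
}

def verify(source: str) -> list[str]:
    """Return the names of the red-flag rules this snippet trips."""
    return [name for name, pattern in RED_FLAGS.items() if pattern.search(source)]

# An AI-drafted snippet that "vibes" but should not ship as-is:
snippet = 'api_key = "sk-12345"\ntry:\n    run()\nexcept: pass\n'
findings = verify(snippet)
# findings flags both the hardcoded credential and the bare except
```

In practice the gate would fail the merge whenever `verify` returns a non-empty list, forcing the flagged lines back through human review rather than letting generation speed set the pace alone.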
In contexts like Singapore’s banking sector, developers "vibe" with AI tools to draft regulatory logic faster, but they must verify rigorously for compliance and security, since even small flaws can create financial risk. For example, when OCBC built its internal generative AI platform, OCBC GPT, strong governance was required to manage internal and regulatory risks.
AI can increase creativity and speed, but true productivity depends on the confidence that every line of code, human or machine, can be trusted.
Solving the engineering productivity paradox
Generative AI gives software teams extraordinary acceleration, but without structured verification, that acceleration can produce more rework, more downstream bugs, and greater long-term risk.
The Engineering Productivity Paradox dissolves when organisations recognise two truths:
- AI coding personalities differ and must be understood and managed like human capabilities.
- Verification is what determines whether AI-generated code accelerates or burdens engineering teams.
Organisations that get this right aren't just using AI to write more code—they’re using it to write better, trusted code.
Building AI-enabled confidence
LLMs are powerful collaborators, but only when their distinct qualities are understood and their outputs verified. "Vibe, then verify" isn't just a philosophy; it’s the operating principle for sustainable, AI-enabled development in fast-moving APJ markets. The next generation of engineering productivity will come not from writing more code, but from building better, trusted code, together with AI.
Marcus Low is the vice president and managing director for Asia-Pacific and Japan at Sonar
