AI trends, updates and resources to power your practice.

AI in Focus | Your monthly insights into AI's impact in accounting and finance.


The operational reality of using AI

AI is moving quickly from experimentation into daily use – and the gap between ambition and reality is becoming harder to ignore. While predictions of fully autonomous “digital employees” continue to grab headlines, firms are discovering that today's AI delivers value only when it's tightly scoped, well-governed and paired with human judgment.

For accounting and finance, this moment matters. AI can accelerate work, surface insight and reduce friction – but it cannot own outcomes. When deployed without proper controls and accountability, it introduces as much risk as efficiency.

The firms making progress aren't waiting for autonomous AI. They're using current tools intentionally – augmenting professionals, strengthening governance and keeping responsibility firmly in human hands.

We explored these realities at DCPA. You can hear more in CPA.com's hot seat AI series, featuring firm leaders and technologists from CPA.com's AI Working Group who are putting AI into practice today.

Looking for more? Check out other practical insights and resources on CPA.com/AI.

What's in focus this month

  • The AI “employee” was a no-show
  • The “build vs. buy” inversion
  • The fragmentation of compliance
  • The McKinsey reality check
 

The AI “employee” was a no-show

Read more →

What's new

Despite confident venture capital forecasts that 2025 would mark the arrival of the “digital employee” – autonomous agents capable of executing complex workflows without supervision – the desk next to you remains empty. Cal Newport's latest analysis exposes a critical divergence: While generative AI has mastered syntax and code synthesis, it has largely failed to develop the planning capabilities required to function as an independent worker. The “exponential curve” of agency has proven to be a plateau.

How it works

The disconnect lies in the architectural limitation of large language models (LLMs). These systems are fundamentally probabilistic token predictors, designed to guess the next plausible word in a sequence, not to maintain a coherent state over a long time horizon. True “agency” – the ability to break a vague objective (e.g., “reconcile these Q4 variances”) into a reliable, multi-step plan, execute it and self-correct errors – requires a reasoning framework that current transformer models do not natively possess. We are trying to build reliable logic engines on top of stochastic text generators.
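To make the distinction concrete, here is a deliberately tiny sketch (plain Python, not a real LLM): a bigram model that, like an LLM at its core, only predicts a statistically plausible next token. The corpus and tokens are invented for illustration – the point is that nothing in the mechanism plans ahead or tracks a goal.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration).
corpus = "reconcile the ledger then reconcile the variances then post the entries".split()

# Count which token follows which -- pure local statistics.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(token):
    # Return the most frequent follower: a plausible guess,
    # with no plan, goal state or lookahead behind it.
    followers = next_counts.get(token)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("reconcile"))  # "the" -- locally plausible, nothing more
```

Scale this idea up by billions of parameters and you get fluent text, but the objective is still "next plausible token," which is why multi-step agency has to be bolted on from outside.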

Behind the news

The profession is currently pivoting from the “scaling hypothesis” – the belief that simply making models bigger would essentially birth reasoning – to specialized agentic frameworks. We are seeing a bifurcation in the market: Massive generalist models are stalling on reasoning benchmarks, leading to the rise of specific “environment” tools like Anthropic's just-launched Cowork feature, which can “see” your file system and execute tasks. This signals a tacit admission that raw intelligence isn't enough; models need structured, constrained environments to mimic the reliability of an employee.

Why it matters

For finance leaders and firm partners, this puts a dent in the “replacement theory” of near-term headcount reduction. Though progress is being made, the dream of fully automated audits or autonomous tax return preparation is effectively paused. The liability risk of treating current AI as an agent rather than a tool is massive; these systems hallucinate facts but cannot hallucinate successful workflows. Your strategy must shift from “automating roles” to “augmenting judgment” – the human in the loop is not just necessary, it's the only quality control mechanism that works.

Our thinking

The failure of the 2025 digital employee prediction suggests we have been solving for the wrong variable. We don't need artificial employees; we need artificial analysts. The firms that will dominate this year won't be those waiting for an AI that can run the practice, but those who aggressively deploy AI to handle the cognitive grunt work – data extraction, preliminary coding and memo drafting – while doubling down on the human supervision required to validate it. The bottleneck isn't computing power; it's trust.

 

The “build vs. buy” inversion

Read more →

What's new

While the “digital employee” we discussed previously remains elusive, a different and perhaps more disruptive force has arrived: the digital developer. Martin Alderson's latest analysis suggests we are witnessing the early stages of a “SaaS deflation” event. Finance teams and technical leads are beginning to reject the annual software renewal rituals – price hikes and tiered feature-gating – in favor of building bespoke, internal tools using coding agents. The friction to build custom software has dropped so precipitously that the “buy” default is no longer a given.

How it works

The mechanism is the commoditization of code. Where building a custom dashboard or an API wrapper once required a dedicated engineering team and weeks of dev time, agentic coding tools like Anthropic's Claude Code or OpenAI Codex allow a single operator to spin up functional, secure utilities in minutes. We are moving from a world where you buy a massive, bloated platform to solve a 5% problem to a world where you deploy a disposable, purpose-built “micro-app” that solves that specific problem perfectly at near-zero marginal cost.
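For a sense of scale, here is the kind of throwaway “micro-app” a coding agent might generate in seconds – a self-contained utility that totals spend by vendor from a CSV export. The column names and figures are hypothetical examples, not any particular firm's data.

```python
import csv
import io
from collections import defaultdict

def spend_by_vendor(csv_text):
    """Total the 'amount' column per 'vendor' in a CSV export."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["vendor"]] += float(row["amount"])
    return dict(totals)

# Hypothetical export -- in practice this would be read from a file.
sample = "vendor,amount\nAcme,100.50\nGlobex,40.00\nAcme,9.50\n"
print(spend_by_vendor(sample))  # {'Acme': 110.0, 'Globex': 40.0}
```

The code itself is trivial; the disruption is that producing, adapting and discarding dozens of such utilities no longer requires an engineering backlog.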

Behind the news

This offers a sharp counter-narrative to the “agency gap” we just analyzed. While LLMs struggle with the long-horizon reasoning required to be an autonomous employee (Newport's thesis), they excel at the constrained logic required to write software (Alderson's thesis). The revolution isn't coming from AI doing the accounting; it's coming from AI building the tools that accountants use. The launch of desktop-integrated agents like Anthropic's Claude Code and, even newer, Cowork – which can “see” your file system and execute tasks – signals that this capability is moving from the command line to the CFO's laptop.

Why it matters

For the CFO, this is a double-edged sword. On the P&L, it promises a massive potential reduction in software OpEx – why pay for a seat-based license when an agent can build you a permanent, owned solution? At the same time, it introduces a serious “shadow IT” risk. When every analyst can generate their own software to manipulate financial data, the governance challenge shifts from managing vendors to managing a sprawling fleet of home-brewed applications. The challenge for 2026 isn't negotiating with Salesforce; it's ensuring your internal tools don't break when the creator leaves.

Our thinking

The “stall” in autonomous agents and the “boom” in coding agents point to a hybrid future. We won't have AI workers that replace us; we will have “super-empowered” professionals who act as their own CIOs. The competitive advantage for firms will shift from those with the biggest tech budget to those with the strongest “DevOps for non-coders” culture. The firm of the future doesn't just use software – it manufactures it on demand.


The fragmentation of compliance

Read more →

What's new

Three regulatory macrotrends converging in 2026 are forcing enterprises to abandon centralized compliance architectures. The EU AI Act's technical requirements take full effect in August, data localization mandates are accelerating across APAC and Latin America, and operational resilience regulations like the Digital Operational Resilience Act (DORA) are moving from principle to enforcement. The unified compliance playbook is dead. Organizations now face fragmented regulatory landscapes where U.S. states advance their own AI bills, China matures enforcement of its Personal Information Protection Law (PIPL), and India accelerates implementation of its Digital Personal Data Protection (DPDP) Act. This isn't regulatory evolution – it's jurisdictional fragmentation that converts compliance from a centralized function into a distributed operational burden.

How it works

The EU AI Act requires organizations to classify systems as prohibited, high-risk or limited-risk, with high-risk systems undergoing conformity assessments covering data quality, logging, documentation, lifecycle management, and continuous oversight. Meanwhile, data sovereignty mandates demand that citizen data remain within national borders, international transfers be controlled, and cloud providers undergo local compliance reviews to ensure access is regulated by domestic law. DORA in the EU sets standards for financial organizations to maintain resilience against information and communication technology (ICT) third-party provider disruptions through closer monitoring, consistent incident reporting and mandatory resilience testing. In the U.S., SEC cybersecurity disclosure rules impose stricter incident reporting and tie third-party security failures directly to regulatory exposure. The technical reality: enterprises must shift from centralized data processing to regionalized architectures with jurisdiction-specific vendor management.

Behind the news

This represents a fundamental inversion of the compliance value proposition. For two decades, global standards like SOX and GDPR created economies of scale – build one control framework, deploy it everywhere, audit once. That model is collapsing. Instead of federal legislation, U.S. states are advancing their own AI bills, with Colorado enacting its AI law while California and New York move forward with similar initiatives. The fragmentation is deliberate; governments are treating data infrastructure as strategic national assets, not commercial utilities. The second-order effect is vendor lock-in by jurisdiction – your cloud provider in Frankfurt can't serve Mumbai without triggering compliance reviews in both locations. The third-order effect is audit multiplication; every regional architecture requires separate evidence trails, control testing and incident reporting protocols that don't aggregate cleanly for enterprise-wide risk assessment.

Why it matters

For CFOs and audit committees, this demolishes three critical planning assumptions:

  1. Multinational companies now face multiple layers of regulation across different jurisdictions rather than following a single rule set, converting compliance from fixed cost center to variable operational expense that scales with geographic footprint.
  2. The "centralized controls with localized execution" model that justified offshoring and shared services centers doesn't work when each jurisdiction demands separate data architecture, vendor relationships and monitoring protocols. Your enterprise control framework becomes a coordination tax rather than efficiency gain.
  3. Regulators increasingly demand proof that contractual security commitments are actually being enforced in practice rather than simply existing on paper, which means compliance teams must provide technical, data-driven evidence of monitoring and controls over data flows, access and architecture for every vendor in every jurisdiction.

The liability exposure: Boards approving centralized technology strategies in 2026 are effectively betting that regulators won't enforce jurisdiction-specific requirements – a bet the insurance industry has already refused to underwrite.

Our thinking

For CPAs, this forces an uncomfortable question: When your audit scope expands from one enterprise control framework to multiple jurisdiction-specific architectures that can't be tested using consolidated sampling approaches, how do you maintain audit efficiency without compromising coverage? The profession's response to this question will determine whether compliance becomes a strategic function that guides enterprise architecture or a cost center that documents why centralized systems failed regional requirements.


The McKinsey reality check

Read more →

What's new

McKinsey's CFO survey reveals that AI adoption in finance has exploded: the share of finance functions using gen AI for more than five use cases jumped from 7% in 2024 to 44% in 2025, and 65% plan increased investment. Yet nearly two-thirds of organizations haven't begun scaling AI enterprise-wide because pilots collapse under real-world conditions, fail to adapt to new data, and remain poorly integrated into core processes. The gap between investment enthusiasm and operational reality is widening.

How it works

The finance teams actually seeing value from AI aren't running disconnected pilots – they're embedding AI into a small number of core finance workflows where data already exists and decisions recur frequently. In strategic planning, generative AI tools synthesize external signals, financials and operational data to accelerate scenario modeling and support faster decision cycles. In cost optimization, LLMs classify and normalize high-volume spend data, giving finance leaders a more granular and consistent view of where money is actually going. The common thread is not experimentation, but sustained integration into repeatable processes that influence margins, cash flow, and capacity.
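To ground the spend-normalization idea, here is a deliberately simplified stand-in: a rule table mapping free-text spend descriptions to categories. In a real deployment an LLM call would replace the hand-written rules, handling the long tail of messy descriptions; the keywords and categories below are purely illustrative.

```python
# Hypothetical rule table -- an LLM would replace this in practice,
# but the workflow (free text in, normalized category out) is the same.
RULES = {
    "aws": "Cloud infrastructure",
    "uber": "Travel",
    "zoom": "Software subscriptions",
}

def classify(description):
    """Map a free-text spend description to a normalized category."""
    d = description.lower()
    for keyword, category in RULES.items():
        if keyword in d:
            return category
    return "Unclassified"

print(classify("AWS EC2 monthly invoice"))  # Cloud infrastructure
```

The value comes from running this over every transaction line, every close, so the categorized view of spend stays consistent instead of depending on whoever coded the entry that month.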

Behind the news

What McKinsey's examples point to is a shift away from narrow task automation toward AI systems that support end-to-end finance workflows. The case studies highlight organizations that improved contract compliance, reduced manual reconciliation work, and increased visibility into supplier and budget data by applying AI to clearly defined problem areas. The advantage didn't come from sophisticated technology alone – it came from aligning AI deployment with specific business outcomes and embedding it into how finance teams already operate. The result is incremental but compounding gains in efficiency, insight and decision quality, rather than one-off productivity spikes.

Why it matters

For CPAs and controllers, this data demolishes five critical assumptions about AI readiness:

  1. Waiting for "perfect data" is a stall tactic – the companies winning are building use cases with today's messy data while strengthening foundations.
  2. The "transform everything at once" approach guarantees failure; domain-by-domain execution builds sustainable momentum.
  3. Piloting without a roadmap tied to business priorities means your experiments never scale.
  4. The biggest barrier isn't technology – it's adoption and change management that finance leaders consistently underinvest in.
  5. Automating fragmented processes just amplifies complexity; you must simplify and standardize workflows first.

The real liability exposure: Firms increasing AI investment without addressing these structural barriers are burning capital on tools that won't integrate into production environments.

Our thinking

The McKinsey data exposes an uncomfortable bifurcation in the profession. The firms stuck in pilot purgatory are treating AI as a software upgrade rather than a fundamental reimagining of how finance operates. For CFOs, the question isn't whether to invest in AI – it's whether your organization has the process discipline and change management muscle to convert tools into sustainable business value. The gap between pilots and production isn't technical; it's organizational – and it's about to become painfully visible in the next audit cycle when boards start asking why AI investment isn't showing up in productivity metrics.

 
CPA.com
1345 Avenue of the Americas, 27th Floor
New York, NY 10105
888.777.7077