Are you confident you know where your employees are using AI? If you answered yes, you’re likely wrong – and that overconfidence could expose your firm to unprecedented audit risk. 

AI adoption in accounting, financial reporting, operations, HR, and legal is happening faster than most management teams and auditors realize. Consider: Agentic AI use in finance is projected to increase 6X in the next year; AI in recruitment is up nearly 500% since 2023; and 96% of legal professionals are using AI daily. 

While leadership focuses on traditional risk areas, staff have been deploying AI solutions across critical processes, from accounts payable (AP) and contracts to revenue recognition and technical accounting. The problem isn’t just that they’re using AI – it’s that they’re often doing so without proper governance and audit awareness, rapidly increasing risk across numerous areas of the business. 

The Hidden AI Adoption Crisis 

Companies are increasingly implementing AI solutions in areas that directly impact financial statements, yet auditors remain largely unaware of these implementations when they occur. This knowledge gap creates a perfect storm of audit risk that threatens the accuracy of financial reporting and the credibility of audit reports while simultaneously introducing cyber and data risks. 

Consider this scenario: In Q4, a manufacturing company implements an AI-powered contract analysis system to automate revenue recognition decisions at a subsidiary. The AI processes thousands of contracts, making materiality judgments and revenue timing determinations. HQ was never made aware of this implementation, and it never came up in quarterly review inquiries. Auditors discover this during fieldwork – not through disclosure, but through observation. How do you assess the completeness and accuracy of a quarter’s worth of AI-driven revenue decisions when you have no understanding of the system’s parameters, training data, or error rates? 

Too often, AI deployments are carried out by individuals who aren’t authorized to use specific tools or aren’t aware of the policies and risks associated with them. Because some AI tools don’t follow typical software deployment processes, it’s virtually impossible for management to maintain a complete picture of the company’s AI inventory. Even when AI tools are implemented in compliance with firm policies, they may be the first of their kind, creating unavoidable risk, and management’s process for evaluating and addressing the relevant IT general control (ITGC) risks may not be mature. 

Accounting AI Deployment Without Auditor Awareness 

AI adoption is most prevalent in these financial reporting areas: 

Document Processing and OCR Technology 

The lowest-hanging fruit for AI implementation includes automated processing of invoices, contracts, and supporting documentation using optical character recognition (OCR) software and intelligent document processing (IDP) tools. While seemingly routine, these systems directly impact: 

  • AP accuracy and completeness. 
  • Contract liability recognition. 
  • Revenue timing and measurement. 

Common platforms include UiPath, Klarity, and Rossum. 

Revenue Recognition Automation 

Companies are leveraging AI to interpret contract terms, determine performance obligations, and calculate revenue allocation. This is particularly complex for AI solution providers themselves, who face intricate revenue recognition challenges under ASC 606 due to varied monetization models, including flat fees, token-based pricing, and usage-based billing. 

Platforms such as Trullion, HighRadius, and FloQast are commonly leveraged for enhanced revenue recognition capabilities. 

Predictive Analytics for Estimates 

AI systems are making increasingly sophisticated estimates for allowances, reserves, and fair value measurements – areas traditionally requiring significant auditor judgment and testing. 

The Governance Gap: Why AI Controls Are Failing 

The fundamental issue isn’t AI adoption itself – it’s the absence of appropriate AI governance frameworks. Most organizations implementing AI solutions lack: 

  • Explainability requirements: Finance teams often cannot articulate how their AI systems reach conclusions, making audit trail reconstruction impossible. 
  • Traceability standards: Without clear documentation of data inputs, processing parameters, and output validation, auditors cannot assess the reliability of AI-generated financial information. 
  • Quality control measures: The distinction between Large Language Models (LLMs) with high hallucination rates and Small Language Models (SLMs) with better accuracy is lost on most teams, leading to inappropriate tool selection for critical processes. 
  • Centralized AI inventory: Many organizations fail to maintain a comprehensive catalog of all AI systems in use, leading to fragmented oversight, redundant tools, and blind spots in risk management. 
  • Model risk assessment and monitoring: Continuous evaluation of AI models for performance degradation, bias, and unintended consequences is often overlooked, leaving organizations vulnerable to risk. 
  • Ethical and legal compliance: Teams frequently deploy AI without ensuring adherence to ethical guidelines or regulatory requirements, exposing the organization to potential lawsuits, fines, and public backlash. 
  • Third party/vendor AI controls: Companies often rely on external AI solutions without thoroughly vetting vendor practices, data security measures, or compliance with industry standards, creating hidden vulnerabilities in their operations. 

Modern AI governance frameworks such as the NIST AI RMF and ISO/IEC 42001 call for these types of controls as part of comprehensive risk management. 

The Materiality Assessment Challenge 

When AI is deployed without auditor knowledge, stakeholders are forced into reactive assessment mode. The key questions become: 

  • Where has AI been used in processes affecting financial statements? 
  • What is the materiality of AI-generated or AI-influenced transactions? 
  • What additional audit procedures are necessary to validate AI outputs? 

This reactive approach is inherently risky and inefficient. With the support of management, auditors should be positioned ahead of their clients’ AI adoption, not behind it. 

Building an AI-Aware Audit Readiness Approach 

Organizations should implement comprehensive AI assessment protocols that include: 

Current-State Analysis 

Inventory all existing AI tools across the organization, including: 

  • System parameters and functionality. 
  • Documentation facilitating the audit of the AI tool or function being used. 
  • Data sources and quality controls. 
  • Output validation procedures. 
  • Human oversight mechanisms. 
  • Controls to ensure completeness and accuracy of input data. 
  • Expanded attack surface. 
  • Model poisoning/data integrity. 
  • Role-based access and privilege controls. 
  • Audit trail/forensics. 
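As a minimal sketch of how a centralized inventory might capture these attributes, consider the record below. The field names, class name, and flagging rules are illustrative assumptions for demonstration, not a prescribed standard:

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in a centralized AI inventory (illustrative fields only)."""
    name: str                           # tool or system name
    business_process: str               # e.g., "revenue recognition"
    owner: str                          # accountable individual or team
    data_sources: list[str]             # inputs feeding the model
    affects_financial_reporting: bool   # does output reach the financials?
    human_review_required: bool         # is there a human oversight mechanism?
    validation_procedure: str = "none documented"

    def audit_flags(self) -> list[str]:
        """Return gaps an auditor would likely ask about."""
        flags = []
        if self.affects_financial_reporting and not self.human_review_required:
            flags.append("no human oversight on a financial-reporting process")
        if self.validation_procedure == "none documented":
            flags.append("no documented output validation")
        return flags

# Example: an undocumented revenue-recognition tool surfaces both flags
tool = AIToolRecord(
    name="ContractReader",
    business_process="revenue recognition",
    owner="Subsidiary finance team",
    data_sources=["customer contracts"],
    affects_financial_reporting=True,
    human_review_required=False,
)
print(tool.audit_flags())
```

Even a lightweight structure like this makes gaps queryable across the organization instead of leaving them to surface during fieldwork.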

Future Roadmap Review 

Management must rationalize and align future AI implementation plans to proactively understand potential audit implications. This strategic approach allows for proper control design and testing methodology development before systems go live, which auditors will increasingly expect. Recommendation: Plan for data privacy impact assessments and cyber testing aligned with ISO/IEC 42001 and the NIST AI RMF. 

Enhanced Risk Assessment 

Develop AI-specific risk assessment procedures internally that evaluate: 

  • Completeness and accuracy of AI training data. Which corporate function or group of stakeholders is best positioned to lead this effort? 
  • Appropriateness of AI model selection for financial processes. How are AI-enhanced operational processes subsequently feeding into and impacting financial reporting? 
  • Adequacy of human oversight and exception handling.  
  • Effectiveness of AI output validation controls. 
  • Vulnerability scanning. 
  • Incident response for AI-driven processes. 

Specialized Skill Development 

Accounting and IT teams need enhanced technical capabilities to evaluate AI systems effectively. It’s not enough for staff to deploy AI in line with corporate policy; they must be capable of ongoing maintenance, monitoring, and testing of AI system performance. This includes understanding different AI model types, their appropriate applications, and their inherent limitations, as well as building cybersecurity literacy across audit and AI teams. 

As audit scrutiny of AI increases, third-party audit support will likely also be needed to supplement internal capabilities in this space. 

Summary Action Plan 

Management should codify and communicate a high-level summary AI action plan that steers the governance and implementation of AI systems. This plan should address: 

  • Adoption of formal AI governance frameworks. 
  • Integration of cybersecurity risk management. 
  • Expanded documentation to include both AI and security controls. 
  • Cross-functional collaboration recommendations. 

This approach can also support audit efficacy and employee satisfaction. 

A Call to Action: Embrace AI Advisory 

AI transformation of financial statement reporting requires organizations to collaborate with strategic partners who understand both the opportunities and risks of AI adoption in financial processes.  
 
Ready to transform your firm’s AI audit capabilities? CrossCountry Consulting helps management teams and their auditors understand, assess, and validate AI implementations in financial reporting. For the strategic insights and technical expertise needed to maintain audit quality while enabling innovation, contact CrossCountry Consulting today. 


Olivier Bouwer

Accounting Advisory


Contributing authors

Cameron Over