AI Is Going to Complicate SOX
The rapid deployment of artificial intelligence in corporate environments is recreating conditions that existed before the reforms of the Sarbanes-Oxley Act. Before SOX, financial reporting often relied on assumptions about how spreadsheets, reconciliations, and internal processes worked. Controls existed, but many were informal, undocumented, or inconsistently tested. SOX forced organizations to abandon that mindset. Nothing could be assumed. Controls had to be documented, tested, and repeatedly validated to demonstrate that financial reporting was reliable.
Artificial intelligence is now reintroducing the same category of hidden risk, but in a far more complex environment.
In traditional financial processes, human reviewers apply judgment. When someone reviews a spreadsheet, they tend to focus on visible data, key totals, and the formulas that appear relevant. Humans also filter by context: a blank column, a hidden worksheet, or a note in a comment field is often ignored because it seems irrelevant to the calculation under review.
AI systems do not operate that way. They ingest and interpret every piece of accessible data. Hidden columns, concealed formulas, metadata, or intermediary calculations that a human reviewer might overlook can influence the model's interpretation of the dataset. In a spreadsheet-driven financial process, a hidden formula originally built for testing, scenario modeling, or debugging could be incorporated into the AI's analysis, altering calculations, producing inaccurate outputs, or introducing distortions that remain invisible to users who trust the AI-generated result.
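To see the difference concretely, consider how a typical ingestion pipeline reads a workbook. The sketch below assumes a Python pipeline built on the openpyxl library and a hypothetical file named quarterly_close.xlsx; it surfaces exactly the content a human reviewer never sees on screen: hidden and "very hidden" sheets, hidden columns, and the raw formulas behind displayed values.

```python
# Sketch: enumerate hidden structures in a workbook that an AI ingestion
# pipeline would consume but a human reviewer would likely never see.
# Requires openpyxl; "quarterly_close.xlsx" is a hypothetical file.
from openpyxl import load_workbook

# data_only=False returns formula strings rather than cached results,
# exposing the logic behind each displayed value.
wb = load_workbook("quarterly_close.xlsx", data_only=False)

for ws in wb.worksheets:
    # Sheets can be "hidden" or even "veryHidden" (invisible in the UI
    # unless unhidden programmatically) yet remain fully readable here.
    if ws.sheet_state != "visible":
        print(f"Hidden sheet: {ws.title} (state={ws.sheet_state})")

    # Columns flagged as hidden still carry live values and formulas.
    hidden_cols = [c for c, dim in ws.column_dimensions.items() if dim.hidden]
    if hidden_cols:
        print(f"{ws.title}: hidden columns {hidden_cols}")

    # Any cell whose value is a formula string is logic a human sees only
    # by clicking into the cell; an ingestion pipeline reads all of it.
    for row in ws.iter_rows():
        for cell in row:
            if isinstance(cell.value, str) and cell.value.startswith("="):
                print(f"{ws.title}!{cell.coordinate}: {cell.value}")
```

Nothing in this traversal depends on what is visible in the application window; the machine-readable file is the ground truth, and the machine reads all of it.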
The risk is not limited to accidental errors. It also creates a new pathway for manipulation.
Spreadsheet manipulation has long been a known fraud technique. Hidden rows, concealed worksheets, circular references, and off-sheet calculations have been used to disguise transactions or alter reported totals. When AI becomes the mechanism that reviews or summarizes financial data, those same techniques gain additional power. If the AI system reads hidden or intermediary calculations and treats them as valid inputs, a manipulator could deliberately embed misleading formulas designed specifically to influence the AI’s interpretation.
In other words, the attack surface expands. A fraudster no longer needs to deceive a human reviewer directly. They only need to influence the data environment that the AI model consumes.
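A hedged sketch makes the point concrete. Many AI integrations flatten a workbook to text before handing it to a model, and a naive flattening step of the kind below (the function name and file are hypothetical) draws no distinction between visible and hidden content. A formula planted on a concealed worksheet enters the model's context on the same footing as the legitimate figures.

```python
# Sketch: a naive "flatten the workbook to text" step, of the kind often
# used to build an AI model's input context. It makes no distinction
# between visible and hidden content: a formula planted on a hidden sheet
# enters the model's input with the same standing as the real data.
from openpyxl import load_workbook

def flatten_for_model(path: str) -> str:
    wb = load_workbook(path, data_only=False)
    lines = []
    for ws in wb.worksheets:  # hidden and veryHidden sheets included
        for row in ws.iter_rows():
            for cell in row:
                if cell.value is not None:
                    lines.append(f"{ws.title}!{cell.coordinate}: {cell.value}")
    return "\n".join(lines)

# Everything below would be handed to the model as trusted context.
context = flatten_for_model("quarterly_close.xlsx")
```

Unless the pipeline explicitly filters or labels hidden content, the model has no way to know which inputs a human would have treated as authoritative.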
The problem is compounded by the speed of adoption. Many financial software vendors are rapidly embedding AI capabilities into accounting platforms, ERP systems, and financial reporting tools. In many cases, organizations are not actively choosing to deploy AI; the technology is simply being introduced through software updates. Features such as automated reconciliation, anomaly detection, financial forecasting, and natural language reporting now rely on AI models operating behind the scenes.
Yet the governance structures surrounding those models often lag far behind the technology itself.
Under SOX, organizations developed structured control frameworks around financial reporting systems. These included change management controls, access restrictions, segregation of duties, documentation standards, and periodic control testing. Auditors could trace a transaction from initiation through posting to financial statements while verifying that each control step operated effectively.
AI systems complicate that traceability.
When a model interprets financial data, summarizes results, or generates analytical conclusions, the decision logic may not be fully transparent. Even when organizations understand the training data and configuration parameters, the internal reasoning of many AI systems remains opaque. That opacity makes it difficult to determine whether the output was influenced by a hidden spreadsheet formula, a corrupted dataset, or a flawed model assumption.
From an internal control perspective, the risk becomes structural. Financial reporting processes may begin to rely on outputs generated by systems that cannot be easily validated.
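Traceability can be partially restored around an opaque model by logging the provenance of every AI-assisted output. The record structure below is one possible sketch, not an established schema; every field name is an assumption. The idea is simply that an auditor should be able to tie a reported figure back to the exact inputs, model version, and configuration that produced it.

```python
# Sketch: a minimal provenance record for an AI-assisted output, so that
# any figure relied on for reporting can be traced back to the exact
# inputs and model configuration that produced it. All field names are
# illustrative assumptions, not a standard schema.
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

@dataclass
class AIOutputRecord:
    input_hash: str     # hash of the exact file/dataset the model read
    model_version: str  # pinned model identifier, updated via change mgmt
    config_hash: str    # hash of prompts/parameters in force at run time
    output_hash: str    # hash of what the model produced
    timestamp: str      # when the output was generated

def record_run(input_bytes: bytes, model_version: str,
               config: dict, output_text: str) -> AIOutputRecord:
    return AIOutputRecord(
        input_hash=sha256_of(input_bytes),
        model_version=model_version,
        config_hash=sha256_of(json.dumps(config, sort_keys=True).encode()),
        output_hash=sha256_of(output_text.encode()),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
```

Appended to a tamper-evident log, records like these do not explain the model's reasoning, but they do establish which inputs and configuration stood behind a reported number, which is the precondition for any meaningful review.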
History suggests that technological transitions often introduce systemic weaknesses before governance frameworks catch up. Prior to SOX, accounting manipulation at companies such as Enron and WorldCom exposed how easily complex financial systems could be manipulated when controls were weak or poorly enforced. The response was not merely to punish wrongdoing, but to redesign the control environment surrounding financial reporting.
AI now presents a similar inflection point.
Organizations are increasingly relying on automated systems to analyze data, detect anomalies, and even draft financial explanations. If those systems are influenced by hidden spreadsheet logic, unverified datasets, or poorly governed model configurations, the resulting errors may propagate through reporting processes without immediate detection.
The lesson from SOX remains relevant. Technology cannot be trusted simply because it appears efficient or sophisticated. It must operate within a disciplined control framework.
Controls around AI-assisted financial processes will likely need to mirror the rigor introduced by SOX. Data sources must be validated. Hidden formulas and spreadsheet structures must be controlled. AI-generated outputs must be independently verified before being relied upon for financial reporting. Change management must extend to model updates, training datasets, and automated decision rules.
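Independent verification, in particular, can be made mechanical. The sketch below assumes an AI tool has reported a revenue total drawn from a known column of the hypothetical workbook used earlier; a deterministic recomputation runs alongside it, and any material difference blocks reliance on the AI figure. The sheet name, column, tolerance, and reported amount are all assumptions for illustration.

```python
# Sketch: an independent-verification control for an AI-generated total.
# The recomputation uses data_only=True, which reads the values Excel
# last cached for each cell, ignoring formula logic entirely, so a
# hidden formula cannot silently shift the verified figure.
from openpyxl import load_workbook

def recompute_total(path: str, sheet: str, column: str) -> float:
    wb = load_workbook(path, data_only=True)
    ws = wb[sheet]
    # ws[column] yields every cell in the column; skip headers/blanks.
    return sum(c.value for c in ws[column]
               if isinstance(c.value, (int, float)))

def verify(ai_reported_total: float, path: str,
           sheet: str = "Revenue", column: str = "C",
           tolerance: float = 0.01) -> None:
    expected = recompute_total(path, sheet, column)
    if abs(expected - ai_reported_total) > tolerance:
        raise ValueError(
            f"AI-reported total {ai_reported_total} disagrees with "
            f"independent recomputation {expected}; hold for review."
        )

# Illustrative usage: the AI output is relied upon only if the control passes.
verify(ai_reported_total=1_254_300.00, path="quarterly_close.xlsx")
```

The specific check matters less than the pattern: the AI's output is treated as an assertion to be tested, not a result to be accepted.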
Without that discipline, organizations risk repeating the same mistake that led to the financial reporting crises of the early 2000s: trusting systems that have not yet earned that trust.