Jul 24, 2025

In March 2024, Turnkey Trading Partners was one of the first to alert CFTC registrants to a developing regulatory focus: the use of artificial intelligence within futures firms. At the time, NFA had only begun raising general questions during routine exams—often exploratory and informal in nature—about whether firms were using AI in any part of their business.

Since then, the regulatory tone has shifted. What began as casual inquiry has evolved into structured follow-up questioning when firms indicate they are using or even exploring AI-based systems. Over the past 18 months, we’ve seen firsthand how a simple “yes” to using AI can trigger deeper reviews of supervisory procedures, third-party oversight, and documentation of model governance.

Turnkey’s early warning anticipated what’s now clear: regulators no longer view AI as experimental. Whether used in trading, marketing, or infrastructure, it falls squarely within the scope of supervisory oversight.

The broader regulatory ecosystem now reflects this shift. The CFTC’s December 2024 advisory made clear that AI tools must be supervised like any other trading system. The NFA followed with a proposed Interpretive Notice addressing controls around automated systems. Meanwhile, global and domestic bodies like IOSCO, FINRA, and the U.S. Senate have called for greater transparency, explainability, and oversight. The message is consistent: firms cannot outsource accountability.

Whether you’re a CTA optimizing signals with machine learning, an IB using AI for routing or client analytics, or a CPO exploring model-driven allocations, regulators now expect robust governance frameworks. As we said in March 2024, it’s not a question of if oversight will come, but when. That time has arrived. Firms should review how AI touches their business today, revisit their supervisory structures, and ensure that appropriate documentation, controls, and personnel oversight are in place. If you wait until the rules are finalized, it may already be too late.

CFTC and NFA Compliance Rules Every Firm Should Know

Many CTAs and IBs are using machine learning for strategy development or trade execution. But as adoption grows, so does regulatory attention. While the CFTC has not yet passed AI-specific rules, recent advisories, statements, and parallel moves by agencies like FINRA, IOSCO, and the U.S. Senate point to a clear direction: AI use in trading and compliance must be supervised, documented, and controlled like any other critical system.

This article outlines that regulatory direction, highlighting:

- The CFTC’s December 2024 advisory and its implications under existing rules
- Global expectations set by IOSCO on AI governance
- FINRA’s practical guidance on explainability, bias, and model risk
- Congressional scrutiny of hedge fund AI use and its call for new guardrails
- NFA’s proposed Interpretive Notice addressing automated systems and associated risk controls

If you are an NFA member using or evaluating AI tools, these developments signal where policy is headed and what expectations already exist under today’s rules.

The CFTC’s Position: AI Must Be Supervised Like Any Other Trading System

In December 2024, the Commodity Futures Trading Commission issued a formal advisory warning that AI systems used in trading, compliance, or risk management could create serious regulatory vulnerabilities if not properly governed. Key concerns included:

- Lack of explainability: Black-box models that cannot be audited violate the transparency expectations embedded in CFTC rules.
- Failure to supervise: Under CFTC Regulation 166.3, firms must diligently supervise all activities of their associated persons. That includes algorithmic systems, even if they are autonomous.
- Third-party risk: Using an outside vendor does not transfer liability. Firms are still responsible for oversight and validation of any third-party AI solutions. This should also be considered within the context of your firm’s third-party service provider policies.

Before retiring, CFTC Chair Rostin Behnam publicly cautioned that unchecked use of AI could heighten systemic risk and destabilize markets. Commissioner Kristin Johnson took this a step further, calling for a coordinated regulatory response to AI’s growing role in financial markets. She championed the creation of a CFTC AI Fraud and Market Manipulation Task Force, proposed an inter-agency AI policy initiative, and warned that firms deploying latency-sensitive AI systems without clear supervision risked exacerbating market fragility. In her May 2025 retirement statement, Johnson stressed the need for “well-informed, research-based, data-driven regulatory solutions” to ensure market resilience in the face of increasingly complex technologies.

Example: If a CTA deploys a model that dynamically adjusts positions based on real-time sentiment data, and that model causes erratic swings or breaches position limits, the firm can’t point to the software vendor. Supervision failures are still enforceable under 166.3.

Takeaway: Document everything. Test before deployment. Assign a clear owner for oversight. And prepare for questions from regulators.
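To make the supervision point concrete, here is a minimal sketch of a pre-trade gate and audit trail sitting between a model and the market. The limit values, the Order fields, the model name, and the send_order call are hypothetical placeholders, not a definitive implementation; the point is that a human-owned control reviews every model-generated order and that each decision leaves a record.

```python
# Minimal sketch (hypothetical limits and interfaces): a supervisory gate
# in front of AI-generated orders, with an audit trail for the named owner.
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(filename="ai_supervision_audit.log", level=logging.INFO)
log = logging.getLogger("ai_supervision")

POSITION_LIMITS = {"ES": 500, "CL": 300}   # max absolute position per symbol; illustrative only
MAX_ORDER_SIZE = 100                       # single-order sanity cap (contracts)

@dataclass
class Order:
    symbol: str
    qty: int          # signed: positive = buy, negative = sell
    source: str       # e.g. "sentiment_model_v3" (hypothetical model name)

def approve(order: Order, current_position: int) -> bool:
    """Return True only if a model-generated order passes hard limits.
    Every decision is written to the audit trail for the supervisory owner."""
    ts = datetime.now(timezone.utc).isoformat()
    if abs(order.qty) > MAX_ORDER_SIZE:
        log.warning("%s REJECT oversized order %s", ts, order)
        return False
    projected = current_position + order.qty
    limit = POSITION_LIMITS.get(order.symbol, 0)
    if abs(projected) > limit:
        log.warning("%s REJECT limit breach %s projected=%d limit=%d",
                    ts, order, projected, limit)
        return False
    log.info("%s APPROVE %s projected=%d", ts, order, projected)
    return True

# Usage: gate every model-generated order before it reaches the broker, e.g.
#   if approve(Order("ES", 25, "sentiment_model_v3"), current_position=480):
#       send_order(...)   # hypothetical order-routing call
```

A control of this kind also supports the documentation expectation: the log itself becomes evidence that supervision occurred and who was responsible for it.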
IOSCO’s Global View: AI Governance Is a Core Obligation

The International Organization of Securities Commissions (IOSCO) issued Final Report FR06/2021 identifying six core measures regulators expect:

- Senior management should sign off on AI deployment and updates
- Firms must test and monitor models continuously
- Compliance staff must be able to understand and challenge model behavior
- Vendors must be vetted and monitored
- Firms should disclose meaningful information about AI use to clients
- Data inputs must be clean, relevant, and free from systemic bias

This guidance is not binding, but it reflects the direction of global supervisory thinking—and likely the CFTC’s next steps.

Example: An IB uses an algo-wheel to route orders through various brokers. If the model consistently favors one venue based on flawed data, that’s not just a performance issue; it could be viewed as deceptive execution or soft-dollar abuse.

Takeaway: Treat AI the same way you would any core business function: with policies, controls, and audit trails.

FINRA’s Focus: Explainability, Bias, and Model Risk

While FINRA rules do not apply directly to CFTC registrants, their detailed AI guidance (FINRA, “Artificial Intelligence (AI) in the Securities Industry,” 2020; updated 2023) offers critical insights:

- Black-box models may violate supervisory Rule 3110 if outputs can’t be understood
- Firms must monitor for data bias, both demographic and operational
- Privacy, cybersecurity, and data sourcing must be addressed as part of AI governance

Example: A CTA pulls market data from social platforms to detect trend shifts. If the data is manipulated (e.g., deepfakes or coordinated bots), relying on it without verification could expose the firm to both compliance and trading risk.
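One practical mitigant is to verify externally sourced data before a model is allowed to act on it. The sketch below is illustrative only: the approved-source list, staleness window, and outlier threshold are hypothetical and would need to be calibrated to the firm’s own data and strategy.

```python
# Minimal sketch (hypothetical sources and thresholds): basic integrity checks
# on externally sourced sentiment data before it reaches a production model.
from statistics import mean, stdev

APPROVED_SOURCES = {"vendor_a", "vendor_b"}   # illustrative whitelist of vetted data sources
MAX_AGE_SECONDS = 300                         # treat anything older than 5 minutes as stale
MAX_Z_SCORE = 4.0                             # flag extreme jumps for human review

def verify_sentiment(point: dict, history: list[float]) -> bool:
    """Return True only if a sentiment reading passes basic integrity checks."""
    if point.get("source") not in APPROVED_SOURCES:
        return False                          # unknown or unvetted source
    if point.get("age_seconds", float("inf")) > MAX_AGE_SECONDS:
        return False                          # stale data
    if len(history) >= 30:                    # need enough history for a stable baseline
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(point["score"] - mu) / sigma > MAX_Z_SCORE:
            return False                      # extreme outlier, e.g. a coordinated bot spike
    return True

# Readings that fail these checks are quarantined for human review rather than traded on.
```

Quarantining failed readings rather than discarding them silently also gives compliance a record of what was excluded and why.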
Takeaway: You need explainability, not just for internal review but to justify decisions if a regulator or client asks.

Senate Report: AI Is a Systemic Risk Accelerator

On June 11, 2024, the U.S. Senate Homeland Security Committee issued a report titled “Hedge Fund Use of AI.” It flagged:

- Inconsistent disclosure and oversight practices across major funds
- Lack of baseline rules for AI in trading
- Regulatory gaps between the SEC, CFTC, and FSOC
- The potential for AI-driven herding behavior or flash crash amplification

The Committee explicitly recommended that the CFTC and SEC create standardized definitions, mandatory testing, risk tiers, and disclosure obligations for AI models used in trading.

Example: During the May 2023 AI-generated fake news incident (Pentagon explosion hoax), market-moving trades were executed based on false data. Regulators view this as a preview of how generative AI could destabilize markets.

Takeaway: Expect the CFTC to move toward mandatory testing, disclosures, and, eventually, AI system audits.

NFA’s Proposed Interpretive Notice: Preparing for Broader Oversight

In a draft Interpretive Notice issued in mid-2024, the National Futures Association proposed updated guidance on automated trading systems. While not AI-specific, it outlines a structured approach to risk controls and governance for any algorithmic tools used in execution, allocation, or messaging. The proposal emphasizes:

- Periodic review of automated systems and associated safeguards
- Real-time alerts for operational or execution anomalies
- A designated supervisory contact responsible for the systems

Takeaway: NFA members using AI-based or automated trading tools should begin aligning their supervisory procedures with this proposed framework; a minimal sketch of the kind of real-time alert the proposal contemplates appears at the end of this article.

Conclusion: If You Use AI, Prepare for Scrutiny

The rules may still be forming, but the expectations are already here. If you use AI, even for simple model-assisted decisions, you need governance, documentation, testing, and human oversight. From CFTC enforcement to Senate pressure—and now NFA policy—the signal is clear: AI isn’t a loophole—it’s a risk area. Get ahead of it now.
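As a closing illustration, here is a minimal sketch of the kind of real-time execution-anomaly alert contemplated by the NFA proposal discussed above. The thresholds and the notify hook are hypothetical; in practice, alerts would route to the designated supervisory contact through the firm’s own monitoring stack.

```python
# Minimal sketch (hypothetical thresholds and escalation hook): rolling
# real-time checks for execution anomalies in automated order flow.
import time
from collections import deque

REJECT_RATE_LIMIT = 0.10      # alert if more than 10% of recent orders are rejected
MESSAGE_RATE_LIMIT = 50       # alert if more than 50 order events in a rolling second

class ExecutionMonitor:
    """Rolling check over recent order events; escalates anomalies immediately."""

    def __init__(self, notify=print):      # notify stands in for the firm's escalation path
        self.notify = notify
        self.recent = deque(maxlen=200)    # rolling window of (timestamp, rejected)

    def record(self, rejected: bool) -> None:
        now = time.time()
        self.recent.append((now, rejected))
        last_second = [r for t, r in self.recent if now - t <= 1.0]
        reject_rate = sum(r for _, r in self.recent) / len(self.recent)
        if len(last_second) > MESSAGE_RATE_LIMIT:
            self.notify(f"ALERT: {len(last_second)} order events in the last second")
        if reject_rate > REJECT_RATE_LIMIT:
            self.notify(f"ALERT: reject rate {reject_rate:.0%} over recent orders")

# Usage: call monitor.record(rejected=...) on every order event; alerts should
# escalate to the firm's designated supervisory contact.
```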