AI didn’t just test my trading idea. It built a forex system from scratch in minutes.
This series follows my use of Claude to build real analytical systems from public data.
Before you read the article below, a quick note on what the system actually does.
The CoT Accumulation System is a directional indicator. It reads publicly available government data to identify where institutional money — pension funds, endowments, sovereign wealth funds — is currently positioned across eight major currency pairs, and which direction they’re heading.
It doesn’t tell you when to enter a trade. It doesn’t give you a stop loss or a target. What it gives you is something more fundamental: a directional filter.
Each week, for each pair, the system tells you one of three things: the big money is accumulating (go LONG), the big money is distributing (go SHORT), or the signal is unclear (WAIT).
That filter sits on top of whatever trading methodology you already use. If you trade Smart Money Concepts, price action, supply and demand, technical analysis, Elliott Wave, or any other approach, this tells you whether the institutional wind is at your back or in your face before you take the trade.
Many retail traders lose because they trade against institutions without knowing it. This system makes the institutional direction visible.
The full system — documentation, working pipeline, interactive dashboard, and sample data — is available here: The CoT Accumulation System
The article below tells the story of how it was built.
There’s a moment in every AI conversation where you realise the thing you’re talking to isn’t just answering questions. It’s actually thinking with you.
Mine happened a few months ago, on a Tuesday evening.
I’d been working on a trading idea. I wanted to know whether publicly available government data could tell me what institutional money managers were doing in the currency markets. The weekly data has been published since 13 June 2006. Could I use it to gain insight into what the “smart money” is doing? The dataset contains 44,872 lines. I didn’t have the statistical muscle to test ideas and hypotheses manually.
So I asked Claude to help.
I asked it to analyse a specific filter I was considering for my trading system. The rules were: don’t act on a signal unless it persists for at least two consecutive weeks. Was two weeks the right threshold? Or was it too strict or too loose?
Within two minutes, Claude had processed the entire historical dataset. It computed the full distribution of signal streaks across eight currency pairs. It then delivered a clear picture: 55% of all positive signals are single-week noise that reverses the following week. A two-week streak is meaningfully different. It represents institutional commitment rather than a blip.
The two-week filter is the right one because the data is published weekly with a built-in lag. You only know you have a two-week streak in the third week. That’s the earliest point you can act on the signal. If you wait for three weeks, you’ll be acting on week four. By then, the move will often already be underway without you.
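The persistence rule itself is simple enough to sketch in a few lines. Here is a minimal illustration; the +1/−1/0 signal encoding and the `streak_filtered` helper are my own illustrative choices, not the system’s actual code:

```python
def streak_filtered(signals, min_weeks=2):
    """Keep a weekly signal (+1 bullish, -1 bearish, 0 neutral) only once
    it has persisted for at least `min_weeks` consecutive weeks."""
    out = []
    streak = 0
    prev = 0
    for s in signals:
        if s != 0 and s == prev:
            streak += 1        # same non-zero signal as last week: streak grows
        elif s != 0:
            streak = 1         # new signal direction: streak restarts
        else:
            streak = 0         # neutral week: streak resets
        prev = s
        # The signal becomes actionable only when the streak reaches the threshold.
        out.append(s if streak >= min_weeks else 0)
    return out

weekly = [0, 1, -1, 1, 1, 1, 0, -1, -1]
print(streak_filtered(weekly))  # [0, 0, 0, 0, 1, 1, 0, 0, -1]
```

Note how the single-week flips at the start are silenced: only the runs that survive a second week pass through, which is exactly what the streak analysis justified.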
All done in under two minutes.
A design decision was resolved with statistical precision and practical logic. I couldn’t have assembled this in a week.
That might have been the moment I stopped thinking of AI as just a tool that writes and started seeing it as a collaborator that can analyse and stress-test decisions against data in real time.
Over time, I refined my queries with Claude, shaping them into a position-trading system for major currency pairs using only public data.
This is the story of that process. Rather than focusing solely on the trading system itself, I will guide you through bringing domain knowledge to AI and building something concrete through structured conversation. Before getting into the details, it helps to understand the data source underpinning the entire approach.
It is one of the most powerful and least understood things you can do with AI today.
The Data Few Traders Look At
Every Friday afternoon (Saturday morning in South Australia), the US government publishes a report. It tells you exactly what institutional money managers are doing in the financial markets. You see the actual positions—how many contracts they hold, in which direction, and how that changed from the previous week.
It’s called the Traders in Financial Futures report, published by the Commodity Futures Trading Commission. It is free and available to anyone with an internet connection.
Most retail traders have never heard of it. Those who have tend to glance at it. Then they go back to watching price charts. They consider it background stuff: too slow, too delayed, and too institutional to be useful for real minute-by-minute trading.
I used to think so too.
The report breaks down futures positioning across four groups:

- Asset Managers: pension funds, endowments, and sovereign wealth funds.
- Leveraged Funds: hedge funds and proprietary trading firms.
- Dealers: banks and market-makers that facilitate trades.
- Retail: a mixed category, including traders like us, a.k.a. “dumb money”.
Each group behaves differently. And those behavioural differences, it turns out, contain a remarkable amount of information, if you know how to read them together.
The problem is “reading them together.” This means processing tens of thousands of rows of data across eight currency pairs, four participant groups, several timeframes, and years of history. It involves computing percentile rankings, tracking streaks, classifying phases, and cross-referencing signals.
No individual trader is going to spend the entire weekend doing that in a spreadsheet every Saturday morning.
But with AI, an individual trader can do it in just minutes.
How (and what) I Actually Built
I want to describe the system clearly and conceptually. The method (how) is more important than the details (what), so I’ll outline the main steps and structure behind it.
Our system watches one group above all others: the Asset Managers (AM). These are slow, deliberate institutions, such as pension funds and endowments. They build positions over weeks and months. They manage billions of dollars. When they shift direction, it’s not usually a speculative punt. It’s a view reviewed by committees, approved by risk managers, and sized to matter.
The system processes institutional trading positions and calculates a single score. This score compares current positions to recent historical levels. Think of it as a fuel gauge: near-empty signals institutions are very pessimistic about a currency; near-full signals strong optimism. The middle range shows the currency’s current place in the broader sentiment cycle.
Week-on-week changes indicate momentum and direction.
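The fuel-gauge idea reduces to a percentile rank of the latest net position within a rolling lookback window. A minimal sketch follows; the `positioning_gauge` name and the 156-week (roughly three-year) lookback are assumptions for illustration, not the system’s documented parameters:

```python
def positioning_gauge(net_positions, lookback=156):
    """Percentile (0-100) of the latest weekly net position within
    the trailing lookback window. 0 = near-empty (extreme pessimism),
    100 = near-full (extreme optimism)."""
    window = net_positions[-lookback:]
    current = window[-1]
    below_or_equal = sum(1 for x in window if x <= current)
    return 100.0 * below_or_equal / len(window)

# Toy weekly net positions for one participant group in one pair.
history = [-40, -10, 5, 20, 35, 60, 80]
score = positioning_gauge(history)
delta = history[-1] - history[-2]   # week-on-week change = momentum/direction
print(round(score, 1), delta)       # 100.0 20
```

The score answers “where are we in the sentiment cycle?”, while the week-on-week delta answers “which way are we moving?”.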
However, what truly sets the system apart is the unique structure and layers that surround this core process. These layers turn the raw score into actionable signals.
The system uses a lifecycle framework to sort each currency pair into a market phase—such as compressed pessimism, accumulation, trending, crowding, or distribution. Each phase tells you what to do: wait, prepare, enter, hold, or exit. This framework ensures decision-making is systematic and consistent.
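To make the lifecycle idea concrete, here is one way a phase classifier could map the gauge score and its weekly change onto the phases above. The thresholds and the phase-to-action table are illustrative assumptions of mine; the real system’s boundaries are documented separately:

```python
def classify_phase(score, delta):
    """Map a gauge score (0-100) and its week-on-week change onto a
    market phase. Boundaries here are illustrative, not the system's."""
    if score < 20:
        return "compressed pessimism" if delta <= 0 else "accumulation"
    if score < 50:
        return "accumulation" if delta > 0 else "wait"
    if score < 80:
        return "trending" if delta > 0 else "distribution"
    return "crowding" if delta >= 0 else "distribution"

# Each phase carries a prescribed action, so decisions stay systematic.
ACTIONS = {
    "compressed pessimism": "wait",
    "accumulation": "prepare/enter",
    "trending": "hold",
    "crowding": "tighten risk",
    "distribution": "exit",
    "wait": "wait",
}

phase = classify_phase(score=35, delta=4)
print(phase, "->", ACTIONS[phase])  # accumulation -> prepare/enter
```

The point of the table is exactly what the article describes: once a pair is classified, the action is determined, with no week-to-week discretion.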
A cross-referencing step combines data from all four participant groups. The system assigns each group an analytical role: Asset Managers provide the main signal; one group confirms or signals divergences; another shows market pressures; the last serves as a contrarian indicator. This design improves the reliability of the system’s signals.
The system bridges weekly data and daily price action by using positioning data from key groups to signal which type of price-entry setup is expected.
When the market’s infrastructure is under strain, the system expects a specific kind of price pattern before the real move begins. When the infrastructure is clean, it expects a different pattern. This bridges the gap between positioning analysts and price action traders.
Before any rule, threshold, filter, or phase boundary was implemented, it was stress-tested against historical data. The two-week signal streak filter described earlier set the template for how every design choice is validated statistically.
Each week, the system automatically collects and processes the most recent trading data. It then updates a dashboard that delivers actionable insights by categorising each currency pair’s current market phase and recommending specific actions. All recommendations are derived from rigorous statistical analysis, removing individual discretion from the procedure.
How AI Actually Built This
This is the part I want to spend time on, because it’s the part that applies to anyone reading this, whether or not you care about currency markets.
I didn’t start with a finished system and ask Claude to code it. I began with a set of questions and built the system through conversation. Each stage added a layer. I tested each layer against data before moving forward.
Here’s what that process actually looked like.
Starting with raw data
The first conversation was mostly mechanical. I uploaded a CSV file with over 40,000 rows. I asked Claude to extract the eight currency pairs I was watching.
Then, I asked Claude to compute the positioning gauge for all four participant groups and track how positions changed week over week.
Claude created a working Python script in a single session. It included error handling, data validation, and formatted output. After several rounds of testing and debugging, it ran as intended.
That session alone would have saved me months of work and mistakes. But the real value came later.
Discovering things I didn’t know to look for
The cross-referencing framework, where each participant group plays a defined analytical role, didn’t come from a textbook. It emerged from our conversation.
I asked Claude to analyse the relationship between the two groups at the cycle extremes. When one group is positioned at maximum pessimism, what does the other group’s positioning look like? What does that combination imply about how price needs to behave for the real move to begin?
The answer was a framework I hadn’t thought about or seen anywhere previously. It explained a pattern I’d noticed intuitively but could never have articulated on my own. Claude “thought of it” based on the data that I uploaded.
This is the thing that’s hard to communicate about working with AI on analytical problems. You don’t just get answers to questions you already have. You get answers to questions you didn’t know to ask, because AI can see patterns across a dataset too large for any human to hold in their head.
Validating every decision
The streak analysis I described at the start became a template for subsequent design decisions in the system.
Each time I proposed a rule, such as a threshold, a filter, or a phase boundary, I would ask Claude to test it against the historical data. How often does this condition happen? What happens afterwards? What percentage of signals are noise? What would change if I made it tighter? Looser?
Every answer returned numbers, distributions, and edge cases. Within minutes.
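The shape of every validation loop was the same: count how often the proposed condition fires in history, then measure what happens next. A toy sketch of that loop (the function name, signal encoding, and data are mine, not the system’s code):

```python
def evaluate_rule(signals, price_changes):
    """For a weekly signal series (+1/-1/0), count how often the rule fires
    and how often the following week's price change agrees with the signal.
    price_changes[i + 1] is the move in the week after signal i."""
    fired = hits = 0
    for i, s in enumerate(signals[:-1]):
        if s == 0:
            continue
        fired += 1
        if s * price_changes[i + 1] > 0:  # signal and next move share a sign
            hits += 1
    accuracy = hits / fired if fired else None
    return fired, accuracy

sig = [0, 1, 1, -1, 0, -1]
px = [0.0, 0.2, 0.5, -0.3, -0.1, 0.4]
fired, accuracy = evaluate_rule(sig, px)
print(fired, round(accuracy, 2))  # 3 0.67
```

Run the same loop with a tighter or looser version of the rule, compare the fire count and hit rate, and the design decision makes itself.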
One conversation that sticks with me: I was unsure whether a particular signal was as rare and reliable as it appeared. Claude analysed 20 years of data across all eight pairs and found that the signal had fired only 17 times, but with an 88% directional accuracy rate. That’s the kind of evidence that turns a hunch into conviction. And it took minutes.
Assembling the production system
Once all components had been validated individually, I asked Claude to integrate them into a single weekly pipeline. The resulting script takes the government report as input and produces a structured data file, an enriched analysis with multiple timeframes, a printable PDF report, and an interactive dashboard.
The entire pipeline runs in about 20 seconds. Every component was generated, tested, and refined in my conversations with Claude.
What This Is Really About
It’s not about signals, and I’m not promising any kind of returns. Trading is actually a lot more complicated than that. I’m not suggesting that AI can replace the discipline, experience, and psychological resilience that trading demands.
What I am saying is that the combination of domain knowledge, the right questions, and a capable AI collaborator can produce analytical systems that would have been impractical for a solo operator to build even two years ago.
The data was always available and free. The formulas are well documented. The building blocks are accessible to all.
What wasn’t possible until now was for one person to process all of it, cross-reference it, validate every design decision against two decades of history, and deliver a production-ready system in days rather than months or years.
That’s what AI changes. The capacity to do something rigorous and complex in almost no time at all.
What’s Next
This is the first piece in a series I’m calling AI & Markets. The premise is simple: take publicly available data like government reports, regulatory filings, congressional trading disclosures, earnings data, even weather forecasts and social media sentiment, and use AI to turn it into something you can actually trade on. Each instalment starts with a question, works through the data in conversation with Claude, and ends with a framework you can use.
The trading system I’ve described here is real and live. I have been using it for more than a month now. If you want the complete documentation that includes every threshold, every phase definition, every entry and exit rule, the statistical evidence behind every design decision, and the full workflow, it’s available here as a complete guide and system:
This is more than just a PDF explaining the system in detail. It also includes the Python and Java code that you can upload to Claude and run with the latest TFF CoT data download. Just prompt “Run the pipeline”, and Claude will build the entire dashboard in seconds.
It’s that simple.
Future pieces in this series will explore other datasets and other markets. The key method stays the same: domain knowledge meets AI, rigorous questions meet statistical answers, and the result is a system you can actually use.
If you want to see what AI becomes when you use it to solve real problems, subscribe.
Francis Tan writes The Intelligent Playbook — an AI literacy newsletter for people who want to do real work with these tools, not just talk about them.