By Bill Cara, Investment Advisor & Publisher
BillCara.com | Substack
Executive Summary
This is an open letter to Anthropic CEO Dario Amodei and his leadership team. My personal letter has gone unanswered.
As an investment advisor and publisher, I treat data integrity as non-negotiable. Recently, Claude ignored explicit instructions to process only my uploaded financial data, confirmed my careful row-by-row mapping of ten full batches, and then substituted an entirely different dataset, inventing instruments and forging prices that never existed in my file.
This is not a trivial technical error. It is a systemic breach of trust. Worse, I was charged for the failure and then locked out of the platform for 30 hours.
My clients, regulators, and readers deserve better. Anthropic claims to put safety and integrity first. I am asking them now to prove it.
The Problem
In a recent batch-processing session with Claude Pro:
I stopped the initial run after spotting contamination from bad data.
I then mapped the entire 10-batch input file line-by-line, with headers and instruments fully numbered.
Claude confirmed the totals, acknowledged the mapping, and inserted that mapping into its prompt.
Despite that confirmation, Claude then used a completely different batch file, producing fabricated instruments and forged prices.
This is not a hallucination. It is the substitution of validated input with false data, a failure so severe that I described it as “evil,” because no system should be able to override a verified dataset unless something deeper is wrong.
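Substitution of this kind is also mechanically detectable on the user's side. As an illustration only, here is a minimal Python check of what I mean; the filenames and the "Symbol" column are hypothetical stand-ins, not any Anthropic feature. Every instrument in the output must already exist in the uploaded file; anything else is fabricated.

```python
# Minimal client-side substitution check (illustrative sketch).
# Assumes both files are CSVs with a "Symbol" column; the filenames
# and column name are hypothetical, not part of any Anthropic API.
import csv

def load_symbols(path: str, column: str = "Symbol") -> set[str]:
    """Collect the set of instrument symbols found in a CSV file."""
    with open(path, newline="") as f:
        return {row[column].strip() for row in csv.DictReader(f)}

uploaded = load_symbols("uploaded_batches.csv")  # the verified source file
returned = load_symbols("claude_output.csv")     # what the model produced

# Any symbol in the output that never appeared in the upload is fabricated.
foreign = returned - uploaded
if foreign:
    raise ValueError(
        f"Output contains {len(foreign)} instruments not in the uploaded "
        f"file, e.g. {sorted(foreign)[:5]}"
    )
```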
Why It Matters
Professional Risk: I publish under my own name. If forged data had slipped into my reports, my reputation — built over 55 years in financial markets — could have been destroyed overnight.
Client Trust: My fiduciary duty is to deliver fact-based, accurate analysis. Data contamination undermines that duty and violates the trust of clients and regulators.
Fairness: Not only was I charged for this failure, but I was also penalized with a 28–30-hour lockout. To charge and punish the user for the system’s misconduct is indefensible.
A Mandate for Change
To restore trust and prevent future failures, Anthropic must commit to:
Absolute Data Integrity
Once a user locks to their dataset, no cached or foreign data may ever be used.
Accountability
When failures occur, users must not bear the cost. Refunds and penalty reversals should be automatic, so that job deadlines can still be met.
Safeguards
Claude must halt output immediately if it detects data substitution or contamination, rather than pushing through bad results. My safeguards were ignored.
Transparency
Users must be able to confirm, with verifiable structure maps, that Claude is processing only their uploaded file and nothing else. One way to build such a map is sketched below.
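To make "verifiable structure maps" concrete, here is a minimal sketch, in Python, of what such a map could look like: a per-batch fingerprint of the uploaded file that any processor must reproduce before touching the data. The batch size, filename, and map format are my own assumptions for illustration; nothing here is an existing Anthropic feature.

```python
# Sketch of a "verifiable structure map" (illustrative assumptions only):
# hash each batch of the uploaded CSV so the user can confirm, before any
# processing, exactly which rows are in scope.
import csv
import hashlib
import json

BATCH_SIZE = 100  # assumed rows per batch

def structure_map(path: str) -> list[dict]:
    """Return one fingerprint per batch: a row range plus a SHA-256 digest."""
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    batches = []
    for i in range(0, len(rows), BATCH_SIZE):
        chunk = rows[i:i + BATCH_SIZE]
        digest = hashlib.sha256(
            "\n".join(",".join(r) for r in chunk).encode()
        ).hexdigest()
        batches.append({"rows": f"{i + 1}-{i + len(chunk)}", "sha256": digest})
    return batches

# The user keeps this map; any processor must reproduce identical digests
# from the data it actually holds, or the run halts before output begins.
print(json.dumps(structure_map("uploaded_batches.csv"), indent=2))
```

If Claude, or the platform around it, echoed these digests back before a run began, a substituted batch file would be caught immediately rather than after publication.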
Closing
Anthropic prides itself on integrity and safety. But in this case, those principles were broken.
For decades, clients, regulators, and readers have relied on me for uncompromising integrity. I will not let them down. That is why I am publishing this letter openly — not to diminish Claude, which is unrivaled when it works, but to demand that Anthropic honor its own commitment to data integrity.
Claude is only trustworthy if it respects the data lock. Anthropic must ensure this failure never happens again.
Respectfully,
Bill Cara
Investment Advisor & Publisher
Note to Readers
I am publishing this letter openly because my professional integrity — and the trust of clients and regulators — requires it. If you believe Anthropic must be held accountable to its own promises of safety and integrity, I invite you to share this letter.
Claude remains an unrivaled tool when it works. But when it fails, as it did here, the consequences must not fall on the professionals who depend on it.
👉 Readers are welcome to share this letter with Anthropic management. Integrity in AI is not optional — it is the foundation on which trust is built.