A Hacker News thread last week titled “Why I finally switched from ChatGPT to Claude” pulled 400+ comments, and roughly half were people describing their own migrations in granular detail. Which custom instructions they rebuilt. Which prompt patterns broke. How Claude handles code refactoring differently, and whether that’s actually better or just unfamiliar. One commenter described spending an entire Saturday rebuilding their ChatGPT workflow in Claude, prompt by prompt, like moving apartments and having to figure out where everything goes in the new kitchen.

And that same week, Sensor Tower reported ChatGPT uninstalls surged 295% following the Pentagon-Anthropic fallout. Because nothing sells a product quite like the US government going after the company that makes its rival. Claude hit #2 on the App Store almost immediately.

So a lot of people are switching. And I get it. Claude feels more deliberate with long documents, better at structured reasoning, less eager to please. I moved most of my own work to Claude months ago. But watching this mass migration play out, the thing that keeps nagging at me is that nobody seems to notice they’re constructing the exact same dependency with different branding.

The loyalty loop

What happens after you switch is predictable. You rebuild custom instructions. You learn the new model's quirks, figure out which prompts produce clean output and which ones misfire in ways ChatGPT never did. You explore Claude's strengths: the long-form analysis, the careful step-by-step reasoning, that particular tone that feels like talking to a thoughtful colleague instead of an eager intern. You discover its blind spots too: the weird refusals, the occasional overcautious hedging on questions GPT would have just answered. You build habits around Artifacts, Projects, Claude's way of structuring responses. And your muscle memory rewires itself around a new set of interface patterns.

After a few weeks of this settling-in process, switching again would mean throwing away all that accumulated adaptation. That is soft lock-in, the kind platform companies have relied on since someone at Microsoft first realized that making Word documents slightly incompatible with everything else was a business strategy, not a technical limitation. OpenAI knows this. Anthropic does too. Custom GPTs, memory features, Claude Projects, ecosystem integrations that only work inside their walls. Every quarter the product gets slightly stickier. Nobody advertises this as lock-in, but that is what it is.

About that 295%

The uninstall surge was reactive. A valid protest, sure. But the people downloading Claude that same week were building the same single-provider dependency they had just escaped.

You can just use both

Nobody in those HN threads raised the most obvious point: you don't actually have to choose one model. BYOK (bring-your-own-key) architecture means the AI is a swappable component underneath whatever tool you use, not the tool itself.

dassi supports Claude, GPT, Gemini, DeepSeek, and others through API keys. But the part that keeps surprising people is simpler: you can log in with your existing ChatGPT Plus or Pro subscription directly, no API key required. If you’re already paying OpenAI $20 a month, that same subscription now powers a browser agent that can see your screen, navigate web apps, and act on what you’re looking at without any extra cost or developer account setup.

Want Claude for deep research and GPT for quick email drafts? Use both. The browser agent doesn't care which model runs underneath, so your workflow stays identical.
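To make the "swappable component" idea concrete, here is a minimal sketch of what a BYOK-style provider abstraction can look like. The names (`ModelProvider`, `draftEmail`) and the stub providers are hypothetical, not dassi's actual API; the point is only that the workflow code talks to an interface, and the concrete model behind it is picked by configuration:

```typescript
// A common interface the agent's workflow depends on. Real implementations
// would wrap the Anthropic, OpenAI, Google, etc. API clients with your keys.
interface ModelProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Stub providers standing in for real API clients.
const claude: ModelProvider = {
  name: "claude",
  complete: async (prompt) => `[claude] ${prompt}`,
};

const gpt: ModelProvider = {
  name: "gpt",
  complete: async (prompt) => `[gpt] ${prompt}`,
};

// The "dropdown": a registry keyed by provider name.
const providers: Record<string, ModelProvider> = { claude, gpt };

// The workflow never names a specific model, so swapping providers
// is a config change, not a migration.
async function draftEmail(
  provider: ModelProvider,
  topic: string
): Promise<string> {
  return provider.complete(`Draft a short email about: ${topic}`);
}

async function main() {
  // Different models for different tasks, same tool, same habits.
  console.log(await draftEmail(providers["claude"], "quarterly results"));
  console.log(await draftEmail(providers["gpt"], "meeting reschedule"));
}

main();
```

The design choice this illustrates: the dependency points at the interface, not at any vendor's SDK, which is exactly what lets the interface layer (the browser agent) outlive whichever model is fashionable this quarter.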

The model isn’t the moat

Ben Thompson has written for years about how the most defensible tech products are platforms, not the underlying technology they run on. The model companies want you to believe the AI is the product. But for anyone doing actual work in a browser, the model is closer to a database engine: essential infrastructure, something you should be able to swap without remodeling the house.

Browser agents that separate the interface from the intelligence give you something no standalone chatbot offers: the ability to switch models without switching tools, without rebuilding habits, without losing a damn thing.

Those HN migration guides are genuine efforts by people trying to improve their setup, and I respect that. But next time a model provider does something that makes you uncomfortable, the right response should not involve a weekend-long migration project. It should be changing one dropdown.