Swapping large language models (LLMs) is supposed to be easy, isn’t it? After all, if they all speak “natural language,” switching from GPT-4o to Claude or Gemini should be as simple as changing an API key… right?
In reality, each model interprets and responds to prompts differently, making the transition anything but seamless. Enterprise teams who treat model switching as a “plug-and-play” operation often grapple with unexpected regressions: broken outputs, ballooning token costs or shifts in reasoning quality.
This story explores the hidden complexities of cross-model migration, from tokenizer quirks and formatting preferences to response structures and context window performance. Based on hands-on comparisons and real-world tests, this guide unpacks what happens when you switch from OpenAI to Anthropic or Google’s Gemini and what your team needs to watch for.
Understanding Model Differences
Each AI model family has its own strengths and limitations. Some key aspects to consider include:
Tokenization variations – Different models use different tokenization strategies, which affects how many tokens the same prompt consumes and, in turn, what it costs.
Context window differences – Most flagship models allow a context window of 128K tokens; however, Gemini extends this to 1M and 2M tokens.
Instruction following – Reasoning models prefer simpler instructions, while chat-style models require clean and explicit instructions.
Formatting preferences – Some models prefer markdown while others prefer XML tags for formatting.
Model response structure – Each model has its own style of generating responses, which affects verbosity and factual accuracy. Some models perform better when allowed to “speak freely,” i.e., without adhering to an output structure, while others perform more reliably when constrained to JSON-like output structures. Research also shows an interplay between structured response generation and overall model performance.
Migrating from OpenAI to Anthropic
Imagine a real-world scenario where you’ve just benchmarked GPT-4o, and now your CTO wants to try Claude 3.5 Sonnet. Work through the pointers below before making any decision:
Tokenization variations
All model providers pitch extremely competitive per-token costs. For example, this post shows how tokenization costs for GPT-4 plummeted in just one year between 2023 and 2024. However, from a machine learning (ML) practitioner’s viewpoint, making model decisions based on advertised per-token costs alone can be misleading.
A practical case study comparing GPT-4o and Claude 3.5 Sonnet exposes the verbosity of Anthropic’s tokenizer: it tends to break the same text input into more tokens than OpenAI’s tokenizer does.
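To make this concrete, you can count how many tokens the same prompt consumes under each provider’s tooling before comparing per-token prices. Below is a minimal sketch, assuming the tiktoken and anthropic Python packages are installed and an Anthropic API key is configured; the model names are illustrative. Anthropic does not ship a local tokenizer, so its counts come from the token-counting endpoint.

```python
# Sketch: compare token counts for the same prompt across providers.
import tiktoken
import anthropic

prompt = "Summarize the quarterly earnings report in three bullet points."

# OpenAI: tokenization is local and deterministic via tiktoken.
enc = tiktoken.encoding_for_model("gpt-4o")
openai_tokens = len(enc.encode(prompt))

# Anthropic: token counts are returned by the count_tokens endpoint.
client = anthropic.Anthropic()
anthropic_tokens = client.messages.count_tokens(
    model="claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": prompt}],
).input_tokens

print(f"OpenAI: {openai_tokens} tokens, Anthropic: {anthropic_tokens} tokens")
```

Running this kind of check over a sample of your production prompts gives a far better cost estimate than comparing list prices alone.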
Context window differences
Each model provider is pushing the boundaries to allow longer and longer input prompts. However, different models handle long prompts differently. For example, Claude 3.5 Sonnet offers a larger context window of up to 200K tokens, compared with GPT-4’s 128K. Despite this, OpenAI’s GPT-4 has been observed to be the most performant at handling contexts of up to 32K tokens, whereas Claude 3.5 Sonnet’s performance declines once prompts grow beyond roughly 8K-16K tokens.
Moreover, there is evidence that even models within the same family handle context lengths differently, performing better at short contexts and worse at longer ones on the same task. This means that replacing one model with another (whether from the same or a different family) can result in unexpected performance deviations.
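One practical mitigation during migration is to treat the advertised context window and the usable context window as two different numbers and to cap prompt length per model. The sketch below assumes hypothetical per-model budgets (loosely based on the figures above) and uses tiktoken only as a rough proxy for budgeting; tune both against your own evaluations.

```python
# Sketch: trim retrieved context to a per-model "effective" token budget,
# since usable performance often degrades well below the advertised window.
import tiktoken

# Illustrative assumptions, not vendor guarantees.
EFFECTIVE_BUDGET = {
    "gpt-4o": 32_000,
    "claude-3-5-sonnet-20241022": 16_000,
}

def trim_context(chunks: list[str], model: str) -> list[str]:
    """Keep the highest-priority chunks that fit within the assumed budget."""
    enc = tiktoken.get_encoding("o200k_base")  # proxy tokenizer for budgeting only
    budget = EFFECTIVE_BUDGET[model]
    kept, used = [], 0
    for chunk in chunks:  # chunks are assumed to be ordered by priority
        n = len(enc.encode(chunk))
        if used + n > budget:
            break
        kept.append(chunk)
        used += n
    return kept
```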
Formatting preferences
Unfortunately, even current state-of-the-art LLMs are highly sensitive to minor prompt formatting. The presence or absence of formatting in the form of markdown and XML tags can significantly change a model’s performance on a given task.
Empirical results across multiple studies suggest that OpenAI models prefer markdown-formatted prompts with section delimiters, emphasis, lists and so on. In contrast, Anthropic models prefer XML tags for delineating the different parts of the input prompt. This nuance is well known among data scientists, and there is ample discussion of it in public forums (Has anyone found that using markdown in the prompt makes a difference?, Formatting plain text to markdown, Use XML tags to structure your prompts).
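A lightweight way to handle this during migration is to keep prompt content separate from its rendering, then generate markdown for one provider and XML tags for the other. The helper below is a sketch; the section names and the provider-to-style mapping are assumptions to validate against your own evaluations.

```python
# Sketch: render the same prompt sections as markdown or XML tags.
def render_prompt(sections: dict[str, str], style: str) -> str:
    if style == "markdown":  # markdown headings, often preferred by OpenAI models
        return "\n\n".join(f"## {name}\n{body}" for name, body in sections.items())
    if style == "xml":  # XML tags, often preferred by Anthropic models
        return "\n".join(f"<{name}>\n{body}\n</{name}>" for name, body in sections.items())
    raise ValueError(f"Unknown style: {style}")

sections = {
    "instructions": "Summarize the document for an executive audience.",
    "document": "The quarterly report shows revenue grew 12% year over year...",
    "output_format": "Three bullet points, under 50 words in total.",
}

openai_prompt = render_prompt(sections, "markdown")
anthropic_prompt = render_prompt(sections, "xml")
```

Keeping the content in one place means a migration only changes the rendering step, not every prompt template in the codebase.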
For more insights, check out the official prompt engineering best practices published by OpenAI and Anthropic.
Model response structure
OpenAI’s GPT-4o models are generally biased toward generating JSON-structured outputs, whereas Anthropic models tend to adhere equally well to a JSON or XML schema requested in the user prompt. However, imposing or relaxing structure on a model’s outputs is a model-dependent, empirically driven decision based on the underlying task. During a model migration, changing the expected output structure also entails adjustments to the post-processing of the generated responses.
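As an illustration, the sketch below requests JSON from both providers: OpenAI through its JSON response format flag, and Anthropic through a schema description in the prompt plus a prefilled assistant turn, a commonly used pattern since there is no equivalent native JSON mode. The model names and the schema are illustrative.

```python
# Sketch: requesting JSON-structured output from OpenAI and Anthropic.
from openai import OpenAI
import anthropic

schema_hint = 'Respond only with JSON: {"title": str, "summary": str}'
task = "Summarize: Acme's Q3 revenue grew 12% year over year."

# OpenAI: enable JSON mode via response_format.
oai = OpenAI()
oai_resp = oai.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[{"role": "user", "content": f"{schema_hint}\n\n{task}"}],
)
print(oai_resp.choices[0].message.content)

# Anthropic: describe the schema in the prompt and prefill the opening brace.
ant = anthropic.Anthropic()
ant_resp = ant.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=512,
    messages=[
        {"role": "user", "content": f"{schema_hint}\n\n{task}"},
        {"role": "assistant", "content": "{"},  # prefill nudges JSON output
    ],
)
print("{" + ant_resp.content[0].text)
```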
Cross-model platforms and ecosystems
LLM switching is more complicated than it looks. Recognizing the challenge, major cloud providers are increasingly investing in solutions to tackle it: Google (Vertex AI), Microsoft (Azure AI Studio) and AWS (Bedrock) all offer tools to support flexible model orchestration and robust prompt management.
For example, at Google Cloud Next 2025, Google announced that Vertex AI gives users access to more than 130 models through an expanded Model Garden, unified API access and the new AutoSxS feature, which enables head-to-head comparisons of different model outputs and provides detailed insights into why one model’s output is better than the other.
Standardizing model and prompt methodologies
Migrating prompts across AI model families requires careful planning, testing and iteration. By understanding the nuances of each model and refining prompts accordingly, developers can ensure a smooth transition while maintaining output quality and efficiency.
ML practitioners must invest in robust evaluation frameworks, maintain documentation of model behaviors and collaborate closely with product teams to ensure the model outputs align with end-user expectations. Ultimately, standardizing and formalizing the model and prompt migration methodologies will equip teams to future-proof their applications, leverage best-in-class models as they emerge, and deliver users more reliable, context-aware, and cost-efficient AI experiences.
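One way to formalize such an evaluation framework is a small regression harness that runs a fixed prompt suite against both the incumbent and candidate models and scores the outputs side by side. The sketch below keeps the provider calls and the scoring function as hypothetical placeholders for your own clients and task-specific metric.

```python
# Sketch: a minimal side-by-side harness for prompt-migration regression tests.
from typing import Callable

def run_side_by_side(
    prompts: list[str],
    call_incumbent: Callable[[str], str],  # e.g., wraps the current GPT-4o setup
    call_candidate: Callable[[str], str],  # e.g., wraps the Claude 3.5 Sonnet setup
    score: Callable[[str], float],         # your task-specific evaluation metric
) -> list[dict]:
    results = []
    for prompt in prompts:
        results.append({
            "prompt": prompt,
            "incumbent_score": score(call_incumbent(prompt)),
            "candidate_score": score(call_candidate(prompt)),
        })
    return results
```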