Building Resilient Systems
Session 13.5 · ~5 min read
The Abstraction Principle
Your pipeline has a fragility problem. If you wrote scripts that call Claude's API directly, with the model name hardcoded, the endpoint URL embedded, and the prompt text written inside the script, then changing anything requires editing every script. You have 12 scripts? That is 12 files to update when you switch models. 12 opportunities for error. 12 reasons your pipeline breaks on a Tuesday morning.
Abstraction solves this. Instead of 12 scripts calling Claude directly, 12 scripts call a function called generate_text(). That function, defined in one file, calls Claude. When you need to switch to Gemini, you change one function in one file. The 12 scripts never know the difference.
Abstraction: putting a layer between your business logic (what your scripts do) and your tool implementation (which API they call). When the tool changes, you update the abstraction layer; your business logic stays untouched. This is not over-engineering. It is insurance you will need.
Three Abstraction Layers
A resilient content pipeline has three abstraction layers. Each isolates a different type of change.
[Diagram: pipeline scripts call three abstraction functions instead of external APIs directly. generate_text() wraps the AI API call with a configurable model; search_web() wraps the search API with a configurable provider; load_prompt() reads a versioned prompt from a file. Currently generate_text() points at the Claude API and search_web() at the Tavily API; if needed, they can be repointed to the Gemini API and Google Search API without touching the scripts.]
Layer 1: API Abstraction
A single function wraps all AI generation. It accepts a prompt, a system prompt, and parameters (temperature, max tokens). Internally, it calls whatever API is configured. The calling script does not know or care which API is being used.
When you switch models, you change the implementation inside generate_text(). Nothing else changes. When you want to test a new model against your benchmarks (Session 13.2), you add it as an option in the abstraction layer and point the benchmark runner at it.
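A minimal sketch of what this layer can look like. The provider registry, the `AI_PROVIDER` environment variable, and the default parameter values are all assumptions for illustration; in a real pipeline the registered callables would wrap the Anthropic and Google SDKs.

```python
import os

def _call_claude(prompt, system, temperature, max_tokens):
    # Placeholder: wire up the Anthropic SDK here.
    raise NotImplementedError

def _call_gemini(prompt, system, temperature, max_tokens):
    # Placeholder: wire up the Google GenAI SDK here.
    raise NotImplementedError

# Registry of available backends. Adding a provider means adding
# one entry here; no calling script changes.
PROVIDERS = {"claude": _call_claude, "gemini": _call_gemini}

def generate_text(prompt, system="", temperature=None, max_tokens=None):
    """Single entry point for all AI generation in the pipeline."""
    provider = os.environ.get("AI_PROVIDER", "claude")
    temperature = 0.7 if temperature is None else temperature
    max_tokens = 2048 if max_tokens is None else max_tokens
    return PROVIDERS[provider](prompt, system, temperature, max_tokens)
```

Switching models is then a one-line change in the registry (or one environment variable), and every calling script is untouched.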
Layer 2: Prompt Abstraction
Prompts live in separate files, not hardcoded in scripts. A function called load_prompt() reads the appropriate prompt file based on the task type and the active model version. This lets you maintain model-specific prompt variants without duplicating scripts.
| Without Abstraction | With Abstraction |
|---|---|
| Prompt text inside script | Prompt in prompts/article_v3.txt |
| Model name hardcoded | Model name in config.env |
| API endpoint in every script | Endpoint in abstraction function |
| Temperature set per script | Default temperature in config, overridable per call |
| Changing model = editing 12 files | Changing model = editing 1 config value |
Layer 3: Configuration Abstraction
All settings live in a configuration file: model name, API keys (via .env), temperature defaults, max token limits, output directories, and logging preferences. Scripts read from this configuration at runtime. No magic numbers. No hardcoded paths. Everything that might change is in one place.
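A sketch of a loader for a `config.env`-style file. The specific keys and defaults here are assumptions; the pattern (defaults, overridden by the file, overridden by real environment variables) is the point.

```python
import os

# Hypothetical defaults; a real pipeline defines its own keys.
DEFAULTS = {
    "MODEL_NAME": "claude-sonnet",
    "TEMPERATURE": "0.7",
    "MAX_TOKENS": "2048",
    "OUTPUT_DIR": "output",
}

def load_config(path="config.env"):
    """Read KEY=VALUE lines; real environment variables win last."""
    config = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    config[key.strip()] = value.strip()
    config.update({k: v for k, v in os.environ.items() if k in config})
    return config
```

Secrets such as API keys should stay in `.env` and the environment, never in the checked-in config file.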
The Refactoring Process
If you already have scripts with hardcoded dependencies, refactoring follows a clear sequence:
- Identify all hardcoded values. Search your scripts for model names, API endpoints, prompt text, file paths, and configuration values.
- Create the abstraction layer. Write wrapper functions for API calls. Create a configuration file. Create a prompt directory.
- Migrate one script at a time. Do not refactor everything at once. Move one script to the new architecture. Test it. Confirm identical output. Move the next.
- Test with a model switch. After all scripts are migrated, change the model in configuration and run your benchmark suite. If everything works, your abstraction is solid.
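The "confirm identical output" step in the migration can be made mechanical. One way, sketched here as an assumption rather than a prescribed method, is to fingerprint each script's output on fixed inputs and compare the hash before and after refactoring.

```python
import hashlib
import json

def output_fingerprint(results):
    """Stable hash of a script's outputs, for before/after comparison."""
    blob = json.dumps(results, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

# Usage: run the original and the refactored script on the same
# fixed inputs, then compare:
#   assert output_fingerprint(old_results) == output_fingerprint(new_results)
```

Note that this only works for deterministic runs; with nonzero temperature, compare structure (fields present, lengths, formats) rather than exact text.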
The Payoff
The first time a model update breaks a direct API call and your abstracted pipeline keeps running because you only needed to change one function, the investment pays for itself. The second time, you wonder why anyone builds pipelines any other way.
Organizations using LLM-agnostic architectures report up to 40% less downtime and up to 30% lower costs, gained by switching providers based on price, performance, or availability. Those numbers come from enterprise deployments, but the principle applies at any scale: flexibility is cheaper than lock-in.
Further Reading
- LLM Agnostic AI: Why the Smartest Enterprises Are Not Betting on a Single Model, Unframe AI
- What Is an LLM Agnostic Approach to AI Implementation?, Quiq
- How to Build Resilient Agentic AI Pipelines in a World of Change, GeekFence
- Why LLM Agnostic Solutions Are the Future of Dev Tools, Pieces
Assignment
Audit your scripts for hardcoded dependencies: model names, API endpoints, prompt text inside code, file paths, and configuration values. List every instance. Then refactor at least one script to use abstraction: create a generate_text() wrapper function, move prompts to external files, and put configuration values in a config file. Test that the refactored version produces identical output to the original. Document the before/after structure.