I Generated Production-Ready Kubernetes Configs in 30 Seconds (Here's How You Can Too)
The 5-letter framework that turned my AI from a glorified search engine into a senior DevOps engineer
I’ve seen it hundreds of times. A DevOps engineer opens ChatGPT, types “write a Kubernetes deployment,” gets a basic YAML file, and then spends the next hour manually fixing security issues, adding resource limits, and making it production-ready.
Sound familiar?
Here’s the thing: The AI isn’t the problem. Your prompt is.
After working with AI tools for infrastructure automation for the past two years, I’ve discovered that the difference between getting generic, unusable output and getting production-ready code comes down to one thing: how you ask for it.
Today, I want to share the exact framework that transformed my DevOps workflow and helped me generate infrastructure code that I trust to deploy.
The Problem with How We Prompt AI
Most technical professionals treat AI like Google Search. We throw in a few keywords and hope for the best:
“Create a Dockerfile for Python”
“Write a backup script”
“Make a CI/CD pipeline”
But here’s what we’re doing: We’re asking a highly sophisticated AI assistant to read our minds. And when it inevitably fails to deliver exactly what we need, we blame the AI.
The reality? AI isn’t mind-reading. It’s pattern matching. And the patterns it matches are entirely dependent on the information you provide.
Enter the C.R.A.F.T. Framework
After analyzing hundreds of successful AI interactions for DevOps tasks, I developed a simple framework that consistently delivers professional-grade results. I call it C.R.A.F.T.:
Context: Provide the background and current situation
Role: Assign a job title or persona to the AI
Action: State the specific thing you want the AI to do
Format: Describe what the final output should look like
Tone: Specify the style the AI should use in its response
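Spelled out as a reusable skeleton, a C.R.A.F.T. prompt looks like this (the angle-bracket placeholders are yours to fill in):

(Context) <your environment, versions, constraints, and the "why" behind the request>
(Role) Act as a <specific job title or specialty>.
(Action) <one precise verb plus the exact deliverable>
(Format) <the output structure: files, fields, length, comment style>
(Tone) <the audience and how much explanation you want>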
Let me show you how dramatically this changes your results.
The Before and After That Will Blow Your Mind
❌ The Bad Prompt:
"Make a Kubernetes deployment for Nginx."
✅ The Good Prompt (Using C.R.A.F.T.):
(Role) Act as a certified Kubernetes administrator.
(Context) I have a standard Kubernetes cluster on GKE. I need to deploy
a simple Nginx web server that will serve as a reverse proxy for a
Node.js application running on port 8080.
(Action) Generate the YAML for a Kubernetes Deployment and a Service.
(Format) The Deployment should use the official nginx:latest image,
have 3 replicas, and include readiness and liveness probes. The Service
should be of type LoadBalancer and expose port 80.
(Tone) Add comments to the YAML explaining what each major section does.
The difference in output quality is night and day.
The first prompt gives you a basic deployment that’s missing:
Resource limits
Health checks
Security contexts
Proper labeling
Service configuration
Any real-world considerations
The second prompt delivers a complete, production-ready configuration with security best practices, proper resource management, and comprehensive documentation.
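The exact output varies by model and session, but the C.R.A.F.T. prompt above typically comes back in roughly this shape (a trimmed sketch; the names and resource numbers are illustrative, and the nginx.conf ConfigMap that would do the actual proxying to the Node.js app is omitted for brevity):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-reverse-proxy          # illustrative name
  labels:
    app: nginx-reverse-proxy
    tier: frontend
spec:
  replicas: 3                        # three replicas, as requested in the prompt
  selector:
    matchLabels:
      app: nginx-reverse-proxy
  template:
    metadata:
      labels:
        app: nginx-reverse-proxy
        tier: frontend
    spec:
      containers:
        - name: nginx
          image: nginx:latest        # as specified; pinning a version tag is safer in practice
          ports:
            - containerPort: 80
          resources:                 # example requests/limits; tune for your traffic
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          readinessProbe:            # only receive traffic once nginx responds
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:             # restart the container if nginx stops responding
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 20
          securityContext:
            allowPrivilegeEscalation: false   # basic hardening that still works with the official image
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-reverse-proxy
spec:
  type: LoadBalancer               # external load balancer, as requested
  selector:
    app: nginx-reverse-proxy
  ports:
    - port: 80                     # port exposed by the load balancer
      targetPort: 80               # nginx listens on 80 inside the pod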
Why Context Is Your Secret Weapon
The Context component is where most people fail, but it’s also where you can make the biggest impact. Here’s what game-changing context looks like:
🎯 Include Your “Why”
Instead of: “Create a firewall rule”
Try: “I need to open port 5432 to allow our new analytics service to connect to the production PostgreSQL database. Security is critical.”
🔧 Specify Your Tech Stack
Cloud Provider: AWS
CI/CD System: GitHub Actions
IaC Tools: Terraform v1.5
Runtime: Python 3.11, Node.js 18
📋 Define Your Constraints
“Must run as non-root user”
“All S3 buckets need encryption enabled”
“Memory-efficient for small container instances”
“Follow PEP 8 style guidelines”
📊 Show Data Structures
If you’re working with JSON, YAML, or databases, show the AI exactly what format you’re dealing with.
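For example, if you want a script that parses deployment events, pasting one trimmed, sanitized record (this one is invented for illustration) beats describing the schema in prose:

{
  "event_id": "evt_20240315_0042",
  "service": "checkout-api",
  "environment": "production",
  "status": "degraded",
  "timestamp": "2024-03-15T10:42:00Z"
}

Now the AI matches your real field names and types instead of guessing at them.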
The Role Revolution
Here’s something most people don’t realize: AI models have been trained on millions of examples of how different professionals write code.
When you tell the AI to “Act as a Senior Site Reliability Engineer,” you’re not just giving it a title — you’re activating an entire knowledge pattern of how SREs think about:
Security
Scalability
Monitoring
Error handling
Best practices
Compare these two Dockerfile requests:
Generic: “Create a Dockerfile for a Python app”
Role-Based: “Act as a Senior Site Reliability Engineer. Create a Dockerfile for a production Python web application.”
The second one automatically includes (see the sketch after this list):
Multi-stage builds
Non-root user configuration
Optimized image layers
Security scanning considerations
Production-ready configurations
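The exact result depends on the model, but the role-based prompt tends to come back looking something like this sketch (the entry point, port, and dependency file are placeholders, not anything the prompt guarantees):

# ---- Build stage: install dependencies in isolation ----
FROM python:3.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install --no-cache-dir -r requirements.txt

# ---- Runtime stage: minimal image, non-root user ----
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
RUN useradd --create-home appuser
USER appuser
EXPOSE 8000
# gunicorn and the app:app module are assumed to exist in the project
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]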
Action Words That Work
Stop saying “help me with” or “can you.” Start using precise action verbs:
Generate (for new code/configs)
Refactor (for improving existing code)
Debug (for troubleshooting)
Explain (for understanding)
Optimize (for performance improvements)
Compare (for evaluating options)
Format: Get Exactly What You Need
The AI can output in virtually any format, but you have to ask:
“Provide as numbered bash commands”
“Output as Terraform HCL”
“Format as a Markdown table”
“Generate both Dockerfile and docker-compose.yml”
“Include comprehensive comments”
Real-World Results
Since implementing C.R.A.F.T., I’ve:
✅ Reduced my infrastructure code review cycles by 60%
✅ Generated production-ready Terraform modules in minutes instead of hours
✅ Created comprehensive CI/CD pipelines with proper error handling and security scanning
✅ Built monitoring dashboards that caught real issues
✅ Automated backup scripts that handle edge cases I didn’t even think of
More importantly, I trust the code that comes out of these prompts enough to deploy it (after proper testing, of course).
Your Next Steps
Start with Context: Next time you prompt an AI, spend 30 seconds providing proper context. Include your environment, constraints, and the “why” behind your request.
Assign Roles: Always tell the AI what kind of professional perspective you want. “Act as a DevOps engineer” vs “Act as a security specialist” will give you dramatically different outputs.
Be Specific: Replace vague requests with precise actions and format requirements.
Iterate: Don’t settle for the first output. Ask follow-up questions, request modifications, and refine until it’s exactly what you need.
The Future Is Conversational Infrastructure
We’re moving from “Infrastructure as Code” to what I call “Infrastructure as Conversation.” The engineers who master this shift — who learn to direct AI effectively rather than just hoping for good results — will be the ones building the future.
The C.R.A.F.T. framework isn’t just about getting better AI outputs. It’s about fundamentally changing how you work. It’s about spending your time on architecture, strategy, and creative problem-solving, rather than wrestling with YAML syntax and boilerplate code.
This article is based on concepts from my book PromptOps: From YAML to AI — a comprehensive guide to leveraging AI for DevOps workflows. The book covers everything from basic prompt engineering to building team-wide AI-assisted practices, with real-world examples for Kubernetes, CI/CD, cloud infrastructure, and more.
Want to dive deeper? The full book includes:
Advanced prompt patterns for every DevOps domain
Team collaboration strategies for AI-assisted workflows
Security considerations and validation techniques
Case studies from real infrastructure migrations
A complete library of reusable prompt templates
Follow me for more insights on AI-driven DevOps practices, or connect with me to discuss how these techniques can transform your infrastructure workflows.