Three Days, Two Developers: How AI Pair Programming Transformed Good Code into Excellence

TL;DR:
In three days, an AI-assisted workflow produced a Terraform Lambda module with 6× test coverage, full ISO 27001 compliance, and zero post-deployment findings - not because AI coded faster, but because it enabled structured collaboration, better documentation, and disciplined review.

[Image: Lambda module architecture diagram]

The Hook: Same Time, Different Outcome

Three days. That’s how long it took to build our Lambda module: October 31 at 10:30 AM to November 2 at 6:00 PM. Minus five hours with friends because, well, life happens even during exciting projects.

Here’s the thing: I could have built a Lambda module in those same three days working alone. I’ve been writing infrastructure code for over two decades. I know Terraform. I know AWS. I definitely know how to deploy a Lambda function.

But I wouldn’t have built this module.

The difference started with a Reddit post. A Claude Code power user shared their experience after six months of AI-assisted development. Their key insight wasn’t about speed-it was about structured collaboration. Create a plan. Build review agents. Treat AI as a pair programmer, not a code generator.

I needed to build a Lambda module anyway. InfraHouse maintains over 40 Terraform modules, and some of them deploy Lambda functions with slightly different patterns. We were drowning in duplication. Plus, our enterprise clients need ISO 27001 compliance, which means every Lambda needs proper error monitoring, encryption, and audit trails. Miss one checkbox, and Vanta flags it. AWS Security Hub complains. Auditors ask uncomfortable questions.

So I decided to test the Reddit approach with a real project that mattered.

The result? A production-ready module now live in the Terraform Registry. Comprehensive tests across 18 different configurations. Documentation that actually helps. Two critical security flaws caught before they reached any client. And perhaps most surprisingly-after decades of using Makefiles, I learned new techniques I never knew existed.

This isn’t a story about AI making development faster. It’s about what happens when you combine deep experience with an AI that constantly asks “but what about…?” and “have you considered…?”-and occasionally teaches you something you probably should have known.

Let me show you what three days of true pair programming looks like.


Why Compliance Can’t Wait

The Hidden Cost of Manual Lambda Deployment

Here’s what we consistently see when running AWS Security Hub or Vanta scans: manually deployed Lambdas almost always fail multiple security checks. It’s not that developers are careless-it’s that there’s so much to remember. Encryption. Error monitoring. Log retention. IAM least privilege. Documentation.

Miss any of these, and you’ll see red flags:

  • No error alarms - violates ISO 27001 A.12.1.4 (Event Logging)
  • Unencrypted CloudWatch logs - fails data protection requirements
  • No alerting strategy - teams find out when customers complain
  • Inconsistent retention - either forever (expensive) or too short (non-compliant)
  • Over-permissive IAM - the classic "Resource": "*"
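Two of those checkboxes can be ticked directly in Terraform. A minimal sketch, with illustrative resource names and values rather than the module's actual code:

```hcl
# Encrypted logs with a bounded retention window
resource "aws_cloudwatch_log_group" "lambda" {
  name              = "/aws/lambda/payment-processor"
  retention_in_days = 90 # neither "forever" nor too short
  kms_key_id        = aws_kms_key.lambda.arn
}

# Error alarm so failures page someone instead of a customer
resource "aws_cloudwatch_metric_alarm" "errors" {
  alarm_name          = "payment-processor-errors"
  namespace           = "AWS/Lambda"
  metric_name         = "Errors"
  statistic           = "Sum"
  period              = 60
  evaluation_periods  = 1
  threshold           = 0
  comparison_operator = "GreaterThanThreshold"
  dimensions          = { FunctionName = "payment-processor" }
  alarm_actions       = [aws_sns_topic.alerts.arn]
}
```

A standard module bakes both in, so no individual Lambda can ship without them.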

Each feels minor during development. “We’ll add monitoring later.” Except later never comes-and auditors don’t accept “we’ll fix it soon.”

The InfraHouse Challenge

InfraHouse maintains over 40 open-source Terraform modules, many of which include Lambda components. Over time, natural variation had crept in-different teams solved similar problems in slightly different ways. That variation wasn’t about quality; it reflected evolving customer requirements and AWS features.

Our goal with this project was to consolidate those lessons into a single, standardized module that enforces consistent monitoring, encryption, and alerting across all use cases-so every deployment automatically meets ISO 27001 and Security Hub expectations.

Why This Matters to CTOs

For CTOs and infra leads, these inconsistencies mean:

  • Failed audits delay funding. ISO 27001 or SOC 2 gaps can stall investment rounds.
  • Unmonitored functions cause silent failures. One payment Lambda going dark can cost thousands.
  • Snowflake infrastructure inflates costs. Debugging and onboarding take longer.
  • Technical debt kills velocity. That “temporary” Lambda becomes untouchable six months later.

The obvious solution: a standard, compliant Lambda module. The challenge: making it both flexible and strict at the same time. That’s where AI pair programming changed the game.


Collaboration Over Generation

The Reddit Revelation

That Reddit post introduced two ideas that reshaped my workflow:

  1. Create a comprehensive development plan first. Not a vague outline-phased checkpoints with definitions of done.
  2. Use specialized review agents. Don’t just generate code-review it through different lenses.

Until then, my AI workflow was reactive: throw a problem at ChatGPT, patch what broke, move on. This new method promised structure.

The Development Plan

On October 31, I spent two hours with Claude Code - Anthropic’s AI coding environment - creating a seven-phase plan covering everything from core structure to ISO 27001 compliance and multi-architecture packaging. That roadmap became persistent context. Every suggestion aligned with the bigger picture.

When AI Challenges Habits

AI collaboration shines when it questions defaults.
I’d always written pytest functions like:

def test_lambda_deploys():
    result = deploy_lambda()
    assert result.status == "success"
    cleanup_lambda()  # skipped entirely if the assert above fails

Claude suggested class-based tests with fixtures:

import uuid

import pytest


class TestLambdaDeployment:
    @pytest.fixture(autouse=True)
    def setup_and_teardown(self):
        self.test_id = str(uuid.uuid4())
        yield
        cleanup_lambda(self.test_id)  # runs even when a test fails

The result? Cleaner, safer tests. Sometimes the old dog should learn new tricks. I even discovered Makefile’s call function-there since 2002-yet I’d never seen it used. Small wins, big delight.
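For readers who, like me, missed it: GNU Make's call expands a user-defined variable as a function, with $(1), $(2), … as its positional arguments. A toy example, with hypothetical names rather than the project's actual Makefile:

```make
# $(call ...) expands a variable as a "function"; $(1) and $(2) are arguments
package_lambda = @echo "packaging $(1) for $(2)"

package:
	$(call package_lambda,payment-processor,arm64)
```

One definition, reused for every function/architecture pair, instead of a copy-pasted recipe per target.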

Security: Catching Each Other’s Blind Spots

My original IAM policy used resources = ["*"] for five EC2 actions. It worked-until Claude flagged it. The AI suggested scoping to subnets via conditions:

condition {
  test     = "StringEquals"
  variable = "ec2:Subnet"
  values = [for subnet_id in var.lambda_subnet_ids :
    "arn:aws:ec2:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:subnet/${subnet_id}"
  ]
}
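For context, here is roughly where such a condition lives: inside an aws_iam_policy_document statement covering the VPC networking actions. A sketch, with illustrative action lists and resource names, not the module's exact policy:

```hcl
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

data "aws_iam_policy_document" "lambda_vpc" {
  statement {
    actions = [
      "ec2:CreateNetworkInterface",
      "ec2:DeleteNetworkInterface",
    ]
    resources = ["*"] # the condition below is what scopes this down

    condition {
      test     = "StringEquals"
      variable = "ec2:Subnet"
      values = [for subnet_id in var.lambda_subnet_ids :
        "arn:aws:ec2:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:subnet/${subnet_id}"
      ]
    }
  }
}
```

The wildcard resource stays, but the condition restricts the actions to network interfaces in the Lambda's own subnets.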

That’s collaboration at its best: I fix AI’s logic, AI fixes my security.

Where Human Expertise Mattered

Claude struggled with Terraform’s dependency timing in lambda_code.tf. It couldn’t coordinate package.sh, archive_file, and rebuild triggers without race conditions. That required real-world knowledge-how Terraform’s DAG works, when to use depends_on, and why implicit dependencies break in production. Human expertise wasn’t optional; it was essential.

The Documentation Revolution

AI-driven documentation deserves more attention. Every variable received a why along with its what:

variable "alert_strategy" {
  description = "Alert strategy: 'immediate' (any error) or 'threshold' (error rate exceeds threshold)"
  type        = string
  default     = "immediate"

  validation {
    condition     = contains(["immediate", "threshold"], var.alert_strategy)
    error_message = "Must be either 'immediate' or 'threshold'"
  }
}

Every feature came with full, working examples. When monitoring logic changed, docs updated automatically. Traditional workflow: “write docs later.” AI workflow: docs evolve with code. The result was the most complete documentation I’ve ever shipped.

Code Review as a Discipline

The terraform-module-reviewer agent analyzed code from multiple angles:

  • Security: IAM scope and encryption checks
  • Compliance: ISO 27001 and Vanta alignment
  • Best practices: idiomatic Terraform
  • Completeness: full monitoring coverage

The code already worked-but production-ready means more than “it runs.”


Technical Deep Dive

Workflow

Three days, October 31 to November 2. The key wasn’t speed-it was discipline:

  1. Plan and review before coding
  2. Commit after each completed phase
  3. Keep diffs small (git commit --amend)
  4. Never let AI “run ahead” of understanding

Plan → Commit → Review → Refine → Commit
  ↑                                    ↓
  └────────── Iterate ─────────────────┘

Version control became a conversation log, not just history.

Architecture

The module deploys Lambda functions that are compliant by default yet flexible by design.

module "critical_lambda" {
  source  = "infrahouse/lambda-monitored/aws"
  version = "~> 0.2"

  function_name  = "payment-processor"
  python_version = "python3.12"
  architecture   = "arm64"  # ~20% cheaper than x86_64

  alert_strategy = "immediate"
  alarm_emails   = ["oncall@startup.com"]
  kms_key_id     = aws_kms_key.lambda.id

  lambda_source_dir = "./src"
  environment_variables = {
    ENVIRONMENT = "production"
  }
}

Key Technical Highlights

  • Dual alerting strategies: immediate vs threshold-based
  • Cross-architecture packaging: resolves Terraform race conditions
  • Scoped VPC permissions: subnet-level IAM rules
  • Parameterized tests: 18 configurations covering provider 5.x & 6.x, Python 3.11–3.13, x86 & ARM

@pytest.mark.parametrize("aws_provider", ["~> 5.31", "~> 6.0"])
@pytest.mark.parametrize("architecture", ["x86_64", "arm64"])
@pytest.mark.parametrize("python_version", ["python3.11", "python3.12", "python3.13"])
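Stacked @pytest.mark.parametrize decorators run the test once per element of the cross-product of their argument lists. The same enumeration, sketched directly with itertools (axes copied from the decorators above):

```python
import itertools

# The three axes from the stacked decorators
providers = ["~> 5.31", "~> 6.0"]
architectures = ["x86_64", "arm64"]
runtimes = ["python3.11", "python3.12", "python3.13"]

# pytest runs the test once per element of this cross-product
cases = list(itertools.product(providers, architectures, runtimes))
print(len(cases))  # 12 provider/architecture/runtime combinations
```

These three axes alone give 12 combinations; the suite's 18 configurations presumably add scenario parameters beyond the decorators shown here.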
Aspect              | Before            | After
--------------------|-------------------|-----------------
Test configurations | 3                 | 18
Documentation       | Basic README      | Full examples
Security issues     | Found post-deploy | Fixed pre-commit
Compliance          | Added later       | Built-in
Variable validation | Minimal           | 12 rules

“AI didn’t make me faster. It made me better.”


Lessons Learned

It’s Not About Speed

Three days with or without AI-the calendar was identical. The outcome wasn’t. Without AI: familiar code, minimal tests, “README later.” With AI: comprehensive coverage, real documentation, and security fixes before release.

The Quality Multiplier

  • Testing: parameterized across Python versions and architectures.
  • Documentation: complete and synchronized.
  • Security: AI reviewed policies like an auditor.

Where Experience Still Rules

AI offered breadth; experience provided depth. Architectural choices-policy patterns, defaults, supported versions-remained human judgment calls.

The Learning Mindset

Being open to suggestions meant discovering new tools (pytest classes, Makefile call, aws_iam_policy_document). Not every idea was right, but evaluating them sharpened judgment.

The Documentation Shift

AI-maintained documentation changes everything. It’s not a “task”; it’s a natural side effect of collaboration.

Practical Takeaways

For Engineers

  1. Commit after each AI session
  2. Review every suggestion
  3. Let AI handle breadth; you handle depth
  4. Allow docs to evolve automatically

For Leaders

  1. Measure quality, not speed
  2. Pair senior engineers with AI for review
  3. Value testing, docs, and security outcomes
  4. Treat AI as a quality multiplier, not replacement

Scaling Excellence

InfraHouse maintains 48 Terraform modules; each can benefit from this approach:

  • Unified testing across provider versions
  • Consistent docs and compliance
  • Security review before production

Building InfraHouse Skills

We’re developing custom Claude Skills for Terraform and Python:

  • Standards for module structure, pytest, docstrings, error handling
  • Automated review for provider 6.x migrations
  • Security and cost-optimization agents

Each module upgrade becomes an opportunity to improve-not just update.

The Business Impact

For clients: faster delivery, fewer findings, consistent quality. For InfraHouse: lower maintenance, preserved knowledge, stronger differentiation.


For Engineering Leaders

Rethink the Question

Stop asking, “How much faster can AI make us?” Start asking, “How much better can our infrastructure become?”

Ideal First Projects

  1. Compliance-heavy modules
  2. Cross-version support
  3. Security-sensitive code
  4. Projects needing strong documentation

Workflow That Works

# 1. Plan thoroughly
$ $EDITOR development-plan.md
# 2. Commit after each phase
$ git commit -m "feat: Phase 1 - Core Structure"
# 3. Review suggestions
$ git diff HEAD~1
# 4. Keep changes manageable
$ git commit --amend

What to Measure

  • Test coverage growth
  • Security issues prevented
  • Documentation completeness
  • Audit pass rate on first run

Investment & ROI

  • Claude Pro ≈ $20 – $100 / dev / month
  • Skills/agents setup ≈ 2–3 days
  • Positive ROI ≈ month 3

Pitfalls to Avoid

  • Treating AI as a generator
  • Skipping reviews
  • Losing context
  • Ignoring docs
  • Failing to capture learnings

The Senior Engineer Advantage

AI doesn’t replace experience-it amplifies it. Senior engineers know when AI is wrong and why.


The New Definition of Senior Engineering

Three Days, Two Perspectives

October 31, 10:30 AM: “I’ll build a Lambda module.” November 2, 6:00 PM: “I’ve built something I couldn’t have imagined.”

Same timeframe. Completely different outcome.

The Module Is Live

The terraform-aws-lambda-monitored module is now available in the Terraform Registry.

Redefining Seniority

After 25 years in tech, this project clarified something important: senior engineering isn’t about knowing everything-it’s about learning continuously and knowing how to evaluate what you learn.

AI pair programming doesn’t diminish expertise; it sharpens it. Every suggestion becomes a decision point: Is this better? Why? What can I learn from it?

The Makefile call function I discovered only decades into my career? That’s not embarrassing-that’s growth. Security patterns I’d missed? Not failure-improvement.

For CTOs and Infrastructure Leaders

The bar just rose. Code that’s compliant, tested, documented, and secure by default is the new standard. Your team isn’t competing with humans alone-it’s competing with humans who collaborate with AI.

For InfraHouse

We’re not chasing speed. We’re building better infrastructure. Each enhanced module feeds new patterns into our agents and skills. The compound effect is powerful.

The Challenge

Take any project that would normally take 3–5 days. Work with AI as a true pair programming partner:

  • Plan thoroughly
  • Commit after each phase
  • Review every suggestion
  • Let AI maintain the docs
  • Use AI review agents for security

Don’t measure time saved-measure quality gained.

The Future

The future isn’t AI replacing engineers. It’s engineers who embrace AI replacing those who don’t.

Not because they’re faster. Because they’re better. Because they keep learning. Because their results speak for themselves.


Let’s Connect

  • Try the module: See production-ready AI-assisted infrastructure.
  • Share your experience: What has AI collaboration taught you?
  • Work with InfraHouse: Let’s raise the standard for cloud infrastructure.

This isn’t a story about automation or hype. It’s about craftsmanship - building something excellent, learning from every tool, and taking pride in the result.

The terraform-aws-lambda-monitored module is just the beginning.

What will you build when quality matters more than speed?
