Lambda Code Location
Status: Accepted
Date: December 2024
Summary
| Aspect | Location |
|---|---|
| Lambda code | docustack-mono/services/lambdas/ |
| Terraform modules | docustack-infrastructure-modules/modules/ |
| Environment configs | docustack-infrastructure-live/ |
Context
When splitting the monorepo into separate infrastructure repositories, we needed to decide where Lambda function code should live.
Decision
Lambda code stays in the application monorepo, not in the infrastructure modules repository.
Why?
Infrastructure modules define HOW to deploy (Terraform configuration). Application code defines WHAT to deploy (Python Lambda functions). Lambda functions contain business logic, not deployment logic.
This follows the Gruntwork two-repo pattern (catalog + live), with application code in a third repo:
- Catalog repo (`infrastructure-modules`): Reusable infrastructure patterns
- Live repo (`infrastructure-live`): Environment-specific configurations
- Application repo (`docustack-mono`): Application code, including Lambdas
Benefits
1. Independent Development Cycles
- Update Lambda logic without touching infrastructure
- Deploy new Lambda versions without module changes
- Test Lambda code independently
2. Proper Versioning
- Lambda code versioned with application
- Infrastructure modules versioned separately
- Clear separation of concerns
3. Simplified CI/CD
- Lambda tests run with application tests
- Infrastructure changes don't trigger Lambda tests
- Clearer deployment pipelines
4. Team Organization
- Application developers work in monorepo
- Infrastructure engineers work in modules repo
- Clear ownership boundaries
Implementation
Directory Structure
```
~/development/docustack/
│
├── docustack-mono/                          # Application Repository
│   └── services/
│       └── lambdas/                         # Lambda code lives here
│           ├── nightly-scheduler/
│           │   ├── stop_resources.py
│           │   ├── start_resources.py
│           │   └── requirements.txt
│           ├── bastion-orchestrator/
│           ├── infra-orchestrator/
│           ├── ip-whitelist-manager/
│           └── db-init/
│
├── docustack-infrastructure-modules/        # Infrastructure Catalog
│   └── modules/
│       ├── nightly-scheduler/               # Terraform module
│       │   ├── lambda-stop.tf               # References Lambda via variable
│       │   └── variables.tf                 # lambda_source_dir variable
│       ├── bastion-orchestrator/
│       └── ...
│
└── docustack-infrastructure-live/           # Environment Configurations
    ├── _envcommon/
    │   └── nightly-scheduler.hcl            # Points to monorepo Lambda code
    └── dev/us-east-1/
        └── nightly-scheduler/
```
How It Works
1. Terraform Module (Infrastructure Catalog)
The module accepts a variable for the Lambda source directory:
```hcl
# modules/nightly-scheduler/variables.tf
variable "lambda_source_dir" {
  description = "Absolute path to the Lambda source directory"
  type        = string
}

# modules/nightly-scheduler/lambda-stop.tf
data "archive_file" "lambda_stop" {
  type        = "zip"
  source_file = "${var.lambda_source_dir}/stop_resources.py"
  output_path = "${path.module}/lambda_stop.zip"
}
```
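The archive is then wired into the function resource. A sketch of what that might look like (the function name, handler, runtime, and `lambda_role_arn` variable are assumptions, not taken from the actual module):

```hcl
# modules/nightly-scheduler/lambda-stop.tf (continued — illustrative sketch)
resource "aws_lambda_function" "stop" {
  function_name = "nightly-scheduler-stop"              # example name
  filename      = data.archive_file.lambda_stop.output_path

  # Hash of the ZIP: it changes whenever the monorepo source changes,
  # which is what triggers a redeploy on the next apply.
  source_code_hash = data.archive_file.lambda_stop.output_base64sha256

  handler = "stop_resources.handler"                    # assumed handler name
  runtime = "python3.12"                                # assumed runtime
  role    = var.lambda_role_arn                         # assumed variable
}
```

The `source_code_hash` line is the piece that makes code-change detection work without any module version bump.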
2. Environment Configuration (Live Repo)
The live repo points to the Lambda code in the monorepo:
```hcl
# _envcommon/nightly-scheduler.hcl
inputs = {
  lambda_source_dir = "${dirname(find_in_parent_folders("root.hcl"))}/../docustack-mono/services/lambdas/nightly-scheduler"
}
```
3. Deployment Flow
1. Developer updates Lambda code in docustack-mono/services/lambdas/
2. Terragrunt references that code via lambda_source_dir variable
3. Terraform packages the Lambda code into a ZIP
4. Terraform deploys the Lambda function to AWS
Lambda Functions Reference
| Lambda Function | Code Location | Terraform Module | Purpose |
|---|---|---|---|
| nightly-scheduler | services/lambdas/nightly-scheduler/ | nightly-scheduler | Cost-saving scheduler |
| bastion-orchestrator | services/lambdas/bastion-orchestrator/ | bastion-orchestrator | Bastion lifecycle |
| infra-orchestrator | services/lambdas/infra-orchestrator/ | infra-orchestrator | Infrastructure orchestration |
| ip-whitelist-manager | services/lambdas/ip-whitelist-manager/ | ip-whitelist | IP whitelist management |
| db-init | services/lambdas/db-init/ | db-init-lambda | Database initialization |
Development Workflow
Updating Lambda Code
```bash
# 1. Edit Lambda code in monorepo
vim docustack-mono/services/lambdas/nightly-scheduler/stop_resources.py

# 2. Test locally (if applicable)
cd docustack-mono/services/lambdas/nightly-scheduler
python -m pytest

# 3. Deploy via Terragrunt
cd docustack-infrastructure-live/dev/us-east-1/nightly-scheduler
terragrunt apply
```
Terraform automatically detects the code change, packages it, and updates the Lambda.
Updating Infrastructure Module
```bash
# 1. Edit module
vim docustack-infrastructure-modules/modules/nightly-scheduler/lambda-stop.tf

# 2. Test locally with source override
cd docustack-infrastructure-live/dev/us-east-1/nightly-scheduler
terragrunt plan --terragrunt-source ~/development/docustack/docustack-infrastructure-modules/modules/nightly-scheduler

# 3. Commit and tag new module version
cd docustack-infrastructure-modules
git commit -am "feat(nightly-scheduler): add retry logic"
git tag v1.2.0
git push --tags

# 4. Update version in live repo and deploy
```
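Step 4 would look something like this in the live repo (the Git URL is illustrative; only the `ref` bump is the point):

```hcl
# dev/us-east-1/nightly-scheduler/terragrunt.hcl (illustrative)
terraform {
  # Bump ref to the newly tagged module version
  source = "git::git@github.com:example-org/docustack-infrastructure-modules.git//modules/nightly-scheduler?ref=v1.2.0"
}
```

After bumping the ref, run `terragrunt apply` in that folder to roll out the new module version.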
FAQ
Q: Why not put Lambda code in the modules repo?
Lambda code is application logic, not infrastructure configuration. Mixing them would:
- Require module version bumps for every Lambda code change
- Complicate testing (infrastructure tests vs application tests)
- Violate separation of concerns
- Make it harder for application developers to update Lambda code
Q: What about Lambda layers or shared dependencies?
- Lambda layers: Create a separate module in `infrastructure-modules`
- Shared Python code: Use a shared package in `docustack-mono/packages/`
- Dependencies: Manage via `requirements.txt` in each Lambda directory
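For the layers case, such a module could follow the same pattern as the function modules. A sketch (the module path, variable names, and runtime are assumptions):

```hcl
# modules/lambda-layer/main.tf — hypothetical module, not in the catalog yet
data "archive_file" "layer" {
  type        = "zip"
  source_dir  = var.layer_source_dir   # e.g. a package under docustack-mono/packages/
  output_path = "${path.module}/layer.zip"
}

resource "aws_lambda_layer_version" "shared" {
  layer_name          = var.layer_name
  filename            = data.archive_file.layer.output_path
  source_code_hash    = data.archive_file.layer.output_base64sha256
  compatible_runtimes = ["python3.12"]  # assumed runtime
}
```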
Q: How do I test Lambda changes before deploying?
- Unit tests in the monorepo (test Lambda logic)
- Local testing with SAM or LocalStack
- Deploy to dev environment first
- Promote to staging/prod after validation
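A minimal sketch of the first bullet, assuming the Lambda keeps its decision logic in a pure helper so it can be tested without AWS (the function name and tag schema here are hypothetical, not taken from the actual code):

```python
# test_stop_resources.py — illustrative only; assumes stop_resources.py
# exposes a pure helper like select_stoppable() (hypothetical name).

def select_stoppable(instances):
    """Pick IDs of running instances tagged for the nightly stop."""
    return [
        i["InstanceId"]
        for i in instances
        if i["State"] == "running" and i.get("Tags", {}).get("NightlyStop") == "true"
    ]

def test_select_stoppable():
    instances = [
        {"InstanceId": "i-1", "State": "running", "Tags": {"NightlyStop": "true"}},
        {"InstanceId": "i-2", "State": "stopped", "Tags": {"NightlyStop": "true"}},
        {"InstanceId": "i-3", "State": "running", "Tags": {}},
    ]
    assert select_stoppable(instances) == ["i-1"]
```

Keeping the boto3 calls thin and the selection logic pure is what makes the monorepo unit tests fast enough to run on every commit.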
Consequences
Positive
- Clear separation between infrastructure and application code
- Independent versioning and release cycles
- Simpler CI/CD pipelines
- Better team organization
Negative
- Requires coordination between repos for some changes
- Path references between repos can be fragile
- Developers need to understand the multi-repo structure