How a performance crisis with our Rails choir management application led us to migrate from Heroku buildpacks to
Docker, achieving dramatic memory savings and deployment improvements.
Our Rails choir management application was slowly strangling itself. What started as occasional memory warnings had
escalated into a full-blown performance crisis. The 512MB Heroku dyno was consistently hitting 100%+ memory utilization,
causing:
- Frequent R14 errors (Memory quota exceeded)
- Slow response times during peak usage
- Unpredictable crashes during high-traffic periods
- Rising hosting costs as we considered upgrading to larger dynos
The application itself wasn’t particularly complex: Ruby 3.2.7, Rails 7.1, a PostgreSQL database, and a modern asset pipeline with Bootstrap 5. But something about the Heroku buildpack environment was consuming far more resources than expected.
When we started investigating alternatives, Docker containerization emerged as the most promising solution. The question
was: would the migration effort justify the potential gains?
The Business Case for Migration
Cost Analysis
Current State (Heroku Buildpack):
- 512MB dyno: $25/month
- Memory utilization: 100%+ (frequently exceeding limits)
- Next tier (1GB dyno): $50/month
- Projected annual cost: $600+ (with required upgrade)
Projected State (Docker):
- Same 512MB dyno: $25/month
- Estimated memory utilization: 40-50%
- Room for growth without hardware upgrades
- Projected annual savings: $300+ compared to dyno upgrade
Scale Requirements
The application needed to handle:
- Daily active users: 10-40 choir members
- Peak concurrent sessions: 5-20 during practice scheduling
- Asset-heavy pages: DataTables with member listings, music libraries
- File uploads: Sheet music PDFs, MP3s, MIDI files
- Database operations: Complex queries for scheduling conflicts
While not massive scale, the performance issues were impacting user experience during critical scheduling windows.
Technical Debt Concerns
The buildpack environment was accumulating technical debt:
- Dependency conflicts with system packages
- Inconsistent builds across development and production
- Limited control over runtime environment
- Black box deployments making debugging difficult
Why Docker Made Sense
Principle 1: Resource Efficiency
Heroku buildpacks include broad compatibility layers and development tools that aren’t needed in production. Docker
allows us to create lean, purpose-built containers with only essential dependencies.
Buildpack overhead includes:
- Multiple Ruby versions for compatibility
- Development tools and headers
- Unused system libraries
- Caching mechanisms for diverse use cases
Docker approach (sketched in the Dockerfile below):
- Exact Ruby 3.2.7 runtime
- Production-only system packages
- Optimized gem bundle (no dev/test gems)
- Minimal base image with required dependencies
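As a sketch of what a lean, purpose-built container looks like in practice, here is a simplified two-stage Dockerfile. The package names (libpq-dev, libpq5) assume a PostgreSQL-backed app on a Debian-slim base, and the asset step assumes Rails 7.1's SECRET_KEY_BASE_DUMMY support; treat it as a starting point rather than our exact production file.

```dockerfile
# Build stage: full toolchain for compiling gems and assets
FROM ruby:3.2.7-slim AS build
RUN apt-get update -qq && \
    apt-get install -y --no-install-recommends build-essential libpq-dev git && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY Gemfile Gemfile.lock ./
# Production-only bundle: skip development and test gems entirely
RUN bundle config set --local without "development test" && bundle install
COPY . .
# Apps using jsbundling/cssbundling would also install Node.js in this stage
RUN SECRET_KEY_BASE_DUMMY=1 bundle exec rails assets:precompile

# Runtime stage: only the libraries the app needs to run
FROM ruby:3.2.7-slim
RUN apt-get update -qq && \
    apt-get install -y --no-install-recommends libpq5 && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=build /usr/local/bundle /usr/local/bundle
COPY --from=build /app /app
ENV RAILS_ENV=production RAILS_LOG_TO_STDOUT=1
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
```

The build stage carries compilers and headers; the runtime stage copies only the installed bundle and the compiled app, which is where most of the size and memory savings come from.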
Principle 2: Build Reproducibility
The “works on my machine” problem was real. Different developers had different local environments, and deployment
behavior was sometimes unpredictable.
Docker benefits:
- Identical environments across development, staging, production
- Version-controlled infrastructure as code
- Reproducible builds regardless of host system
- Consistent dependency resolution
Principle 3: Full-Stack Optimization
With Docker, we could optimize the entire stack for our specific use case rather than accepting buildpack defaults.
Optimization opportunities (see the Puma sketch after this list):
- Custom asset compilation pipeline
- Optimized gem installation and bundling
- Tailored system package selection
- Container-specific memory management
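One concrete example of container-specific memory management is right-sizing the Puma process model for the dyno. The settings below are a hypothetical sketch for a 512MB container, not our exact configuration; the right worker and thread counts depend on the app's per-request memory profile.

```ruby
# config/puma.rb -- illustrative sizing for a 512MB container
# One worker with a small thread pool keeps the resident set low.
workers Integer(ENV.fetch("WEB_CONCURRENCY", 1))

threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 3))
threads threads_count, threads_count

# Load the app before forking so workers share memory via copy-on-write
preload_app!

port ENV.fetch("PORT", 3000)
environment ENV.fetch("RAILS_ENV", "production")
```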
Migration Strategy Overview
Rather than attempting a big-bang migration, we developed a phased approach:
Phase 1: Foundation
- Create base Docker image with system dependencies
- Establish development Docker workflow
- Implement multi-stage build architecture
Phase 2: Development Parity
- Docker Compose setup for local development
- Volume mounting for live code reloading
- Database containerization with PostgreSQL
Phase 3: Production Optimization
- Lean production Dockerfile
- Asset pipeline optimization
- Memory usage profiling and tuning
Phase 4: Deployment Integration
- Heroku Container Registry setup (CLI commands sketched below)
- GitHub Actions CI/CD pipeline
- Performance monitoring and validation
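To make Phase 4 concrete, the Heroku side boils down to a handful of CLI commands; the app name below is a placeholder, and in CI the same steps typically run from a GitHub Actions workflow.

```bash
# Deploying a container image to Heroku's Container Registry
# (app name is a placeholder)
heroku container:login
heroku container:push web --app your-choir-app
heroku container:release web --app your-choir-app
```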
Early Wins and Validation
Even before completing the full migration, early testing revealed promising results:
Memory Usage Improvements
Development Environment Comparison:
- Buildpack development: ~200MB baseline memory
- Docker development: ~120MB baseline memory
- 40% reduction in development environment
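As a rough way to take this kind of baseline reading, Docker's built-in stats command gives a quick per-container memory snapshot (shown here as an illustration, not our exact measurement tooling):

```bash
# One-off snapshot of memory usage for running containers
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"
```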
Build Consistency
Before (Buildpack):
- Occasional build failures due to dependency conflicts
- Inconsistent asset compilation across environments
- “Works locally but fails in staging” issues
After (Docker):
- 100% reproducible builds
- Consistent asset pipeline behavior
- Identical behavior across all environments
Development Workflow
Docker Compose Benefits (a minimal compose file is sketched after this list):
- One-command environment setup (docker-compose up)
- Isolated database with consistent data
- Hot reloading for both Rails and assets
- Easy onboarding for new team members
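For reference, a development compose file in this spirit looks roughly like the following; service names, the Postgres version, and credentials are illustrative assumptions rather than our exact setup.

```yaml
# docker-compose.yml -- development sketch (names and versions are illustrative)
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: password
    volumes:
      - pg_data:/var/lib/postgresql/data   # keep data across container restarts

  web:
    build: .
    command: bundle exec rails server -b 0.0.0.0
    volumes:
      - .:/app                              # mount source for live code reloading
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://postgres:password@db:5432/app_development

volumes:
  pg_data:
```

With a file like this in place, docker-compose up brings up the database and the Rails server together, which is what makes one-command onboarding possible.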
The Technical Challenges Ahead
While the business case was compelling, the migration revealed several technical hurdles that needed systematic
solutions:
Challenge 1: Asset Pipeline Complexity
Modern Rails applications with Propshaft, SASS compilation, and JavaScript bundling create complex dependency chains that need careful orchestration in Docker.
Challenge 2: System Package Conflicts
Node.js 20 and Ruby 3.2.7 had conflicting system dependencies, particularly around JavaScript runtime libraries like
libv8-dev.
Challenge 3: Production vs Development Dependencies
Understanding which build tools needed to be available in production versus development required rethinking package.json
organization.
Challenge 4: Container Architecture
We needed to design an efficient multi-stage build process that maximized layer caching while minimizing the final image size.
The Results
Once the migration was complete, the performance improvements exceeded expectations:
Memory Usage
- Before: 100%+ of 512MB dyno (frequent R14 errors)
- After: ~40% of 512MB dyno (stable, room for growth)
- Improvement: 60% memory reduction
Deployment Reliability
- Before: Occasional build failures, inconsistent behavior
- After: 100% reproducible builds, predictable deployments
- Improvement: Zero deployment-related incidents since migration
Development Velocity
- Before: Environment setup took hours, frequent “works locally” issues
- After: One-command setup, identical environments
- Improvement: New developer onboarding reduced from days to hours
Cost Impact
- Avoided: $300+ annual increase from upgrading to larger dynos
- Gained: Headroom for traffic growth without hardware changes
- ROI: Migration effort paid for itself within two months through cost savings
What’s Next in This Series
This migration involved solving numerous technical challenges that other Rails developers will likely encounter. The
upcoming posts in this series will dive deep into each solution.
Each post includes working code examples, troubleshooting guides, and the specific solutions that transformed our
deployment pipeline.
Key Takeaways
The migration from Heroku buildpacks to Docker wasn’t just about performance—it was about gaining control over our
deployment pipeline and creating a foundation for sustainable growth.
For teams considering similar migrations:
- Start with the business case - Performance problems and cost pressures create clear justification
- Plan for complexity - Modern Rails applications have intricate dependencies that require systematic solutions
- Invest in development parity - Docker’s biggest win is eliminating environment discrepancies
- Measure everything - Baseline current performance to validate improvements
The bottom line: If your Rails application is hitting resource limits or struggling with deployment reliability, Docker containerization offers a path to both immediate performance gains and long-term operational improvements.
Ready to dive into the technical implementation? The next post covers our multi-stage Docker architecture and how we
structured the containers for maximum efficiency.
This is Part 1 of our Rails Docker Migration series. Subscribe to get notified when new posts are published, or
jump ahead to the technical deep-dives if you’re ready to start implementing.