# came from LinkedIn → short version: 10+ yrs · Python · K8s · AWS · $1.25M saved
// senior engineer
Juan Velasquez
Backend systems that process at scale, survive incidents, and cost less over time.
~/whoami
$ savings --annual
$1.25M confirmed
$ experience --years
10+ yrs backend
$ stack --primary
python · k8s · aws
$ status
open to remote roles
// work
Distributed systems & backend
Replaced a manual benchmark system that engineers had to babysit with an automated, experiment-driven platform — teams now ship against a performance signal they can trust
Designed and own Cloud Log Processor, the SQS/SNS-based pipeline that moves benchmark results at experiment scale — full Datadog observability, zero data loss on failures
Drove a gRPC migration that removed a hard ceiling on concurrent benchmark runs and shed ~100 replicas at peak — the system now scales without proportional infra spend
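A minimal sketch of the at-least-once pattern behind a "zero data loss on failures" claim (illustrative names only, not the actual Cloud Log Processor code): a message is acknowledged — deleted from the queue — only after the handler succeeds, so a failure leaves the message to reappear after its visibility timeout instead of being lost.

```python
def drain(receive, process, delete):
    """Pull messages until the queue is empty, deleting only on success.

    receive() -> list of (receipt_handle, body); process(body) may raise;
    delete(receipt_handle) acknowledges the message. Against real SQS these
    would wrap receive_message / delete_message on a boto3 client.
    """
    failed = []
    while True:
        batch = receive()
        if not batch:
            return failed
        for handle, body in batch:
            try:
                process(body)
            except Exception:
                # Not deleted: SQS redelivers after the visibility timeout.
                failed.append(handle)
            else:
                delete(handle)


# In-memory stand-in for SQS to show the behavior:
queue = [("h1", "ok"), ("h2", "boom"), ("h3", "ok")]
deleted = []

def receive():
    batch, queue[:] = queue[:], []
    return batch

def process(body):
    if body == "boom":
        raise RuntimeError("handler failed")

failed = drain(receive, process, deleted.append)
# "h2" is never deleted, so the real queue would hand it back for a retry.
```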
Infrastructure & cost
Identified a compounding reliability problem in the customer upgrade pipeline and quantified the business impact for management — 4x volume growth, declining success rate, doubling support takeovers — then designed the migration plan and embedded with the team to lead execution
Led migration of ~25 services from EC2/Spinnaker to Kubernetes — $1.25M/year in recurring AWS savings and a platform engineers can deploy to without SSH-ing into boxes
Designed event-driven autoscaling with KEDA on EKS — capacity now tracks actual workload, not someone's conservative guess at peak
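A representative KEDA `ScaledObject` showing the pattern (illustrative names and queue URL, not the production config): replica count is driven by actual SQS queue depth, not a static estimate of peak load.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler            # illustrative name
spec:
  scaleTargetRef:
    name: worker-deployment      # the Deployment KEDA scales
  minReplicaCount: 0             # scale to zero when the queue is empty
  maxReplicaCount: 50
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/work-queue
        queueLength: "100"       # target messages per replica
        awsRegion: us-east-1
      authenticationRef:
        name: aws-credentials    # TriggerAuthentication, assumed defined elsewhere
```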
Reliability & ownership
Decomposed a 30k LOC Python monolith into tested, isolated modules — made the platform safe to evolve and unblocked parallel development on a codebase that previously required a single person to hold all the context
Caused and resolved a SEV-2 production incident after a KEDA autoscaling deployment starved neighboring services on cluster6-prd — diagnosed and contained it within hours, authored the postmortem, and redesigned subsequent rollouts to account for cluster-wide resource contention
Removed the bottleneck slowing every data scientist on the platform — ~45% faster pipeline, directly compressing the cycle from question to answer