The organization was running a legacy monolithic application that could not scale to support real-time AI model deployment. Tight coupling between services made it impossible to scale components independently or to deploy updates without risking full system downtime.
Key stakeholders included engineering teams, DevOps engineers, business stakeholders, and executive leadership, all of whom needed reliable, scalable infrastructure to support AI-powered features.
Led a phased enterprise migration from a monolithic architecture to microservices on AWS. Defined a four-phase delivery roadmap, established go-live criteria, and coordinated onshore and offshore teams using Jira and Confluence throughout the full project lifecycle.
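Phased migrations like this are commonly executed with a strangler-fig pattern: a thin routing layer sends traffic for already-extracted capabilities to the new microservices while everything else continues to hit the monolith, so each phase can go live independently. A minimal sketch in Python; the capability names, phase assignments, and URLs below are hypothetical illustrations, not the project's actual cutover plan:

```python
# Strangler-fig routing sketch for a phased monolith-to-microservices
# migration. Each completed phase moves a capability from the monolith
# to its own service; the router only needs a lookup table updated at
# each go-live. All names and URLs here are hypothetical.

# Capabilities extracted in completed phases (hypothetical examples).
EXTRACTED = {
    "inference": "https://inference.internal.example.com",      # Phase 1
    "model-registry": "https://registry.internal.example.com",  # Phase 2
}

# Everything not yet migrated still routes to the legacy monolith.
MONOLITH = "https://monolith.internal.example.com"

def route(capability: str) -> str:
    """Return the backend base URL that should serve this capability."""
    return EXTRACTED.get(capability, MONOLITH)

if __name__ == "__main__":
    print(route("inference"))  # extracted, goes to the microservice
    print(route("billing"))    # not yet migrated, goes to the monolith
```

Because go-live for each phase is just an entry added to the routing table, rollback is equally cheap, which is what makes per-phase go-live criteria enforceable.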
Project Manager responsible for full lifecycle ownership — scoping, scheduling, budgeting, risk mitigation, stakeholder communication, and retrospective reviews.
The future state includes deploying ML model-serving infrastructure on AWS SageMaker, enabling real-time AI inference at scale with automated retraining pipelines.
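Real-time inference against a SageMaker endpoint is typically a JSON request to `invoke_endpoint` in the `sagemaker-runtime` API. A minimal sketch, with the endpoint name and feature schema as hypothetical assumptions; the boto3 call is left in a comment so the snippet runs without AWS credentials:

```python
import json

# Hypothetical endpoint name and feature schema, for illustration only.
ENDPOINT_NAME = "fraud-model-prod"

def build_payload(features: dict) -> str:
    """Serialize one feature record into the JSON body the endpoint expects."""
    return json.dumps({"instances": [features]})

# Actual invocation via boto3 (not executed in this sketch):
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(
#     EndpointName=ENDPOINT_NAME,
#     ContentType="application/json",
#     Body=build_payload({"amount": 120.0, "country": "US"}),
# )

if __name__ == "__main__":
    print(build_payload({"amount": 120.0, "country": "US"}))
```

The automated retraining pipelines mentioned above would sit behind this: a pipeline retrains and redeploys the model to the same endpoint name, so callers of `invoke_endpoint` are unaffected by model updates.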