Years of professional experience
Performance Engineer Lead with 10+ years of experience driving performance testing strategies, building high-performing QA teams, and delivering scalable, high-quality systems in cloud-native and distributed environments. Expert in balancing deep technical expertise (JMeter, Dynatrace, Kafka, AWS) with strong stakeholder management and leadership, ensuring alignment between business objectives and performance goals. Experienced in mentoring offshore teams, leading Agile delivery, and translating complex technical insights into clear business language to drive informed decision-making.
• Led the central performance testing function for large-scale enterprise initiatives, ensuring performance goals were aligned with business and technical stakeholders.
• Drove onboarding of new projects into the performance pipeline, assessing needs and defining test strategies.
• Partnered with business, product, and technical teams to ensure performance objectives were built into delivery roadmaps.
• Conducted root cause analysis of performance bottlenecks, implemented optimization solutions, and monitored key operational metrics. Analyzed logs and system data to identify anomalies, inefficiencies, and potential issues.
• Managed and mentored an offshore performance testing team, ensuring productivity, quality, and knowledge transfer. Defined Agile/Kanban workflows, managed Jira tasks and test planning, and performed the role of Scrum Master to drive team collaboration and delivery.
• Implemented best practices in Jira and Confluence, streamlining performance testing documentation, progress tracking, and reporting.
• Delivered mentoring and knowledge-sharing sessions on performance engineering and monitoring, enabling teams to adopt best practices while influencing Product Owners to embed performance as a critical success factor in delivery.
• Implemented Grafana dashboards and alerts for KPIs, JVM, and Kafka metrics; used Dynatrace for deep-dive issue analysis.
• Performed capacity planning and scalability analysis to ensure system reliability under high load.
• Supported shift-right testing and continuous performance validation in production-like environments.
• Drove performance readiness for quarterly releases and end-of-year peak events by executing disaster recovery and high-load testing, ensuring optimal system stability during critical business launches.
• Team Leadership & End-to-End Testing: Led a team managing performance testing from strategy to execution across web, mobile, and cloud-native platforms.
• Cloud Migration & Scalability: Optimized AWS environments (SQS, ECS, RDS, IoT Thing, Lambda) with capacity planning, load forecasting, and cost optimization for robust, scalable infrastructure.
• Collaboration & Bottleneck Resolution: Partnered with developers, architects, and DevOps to identify and resolve performance bottlenecks, ensuring high availability and low latency.
• CI/CD Integration & Shift-Left/Right Testing: Integrated performance tests into Jenkins pipelines using Taurus and BlazeMeter; implemented shift-left/right testing for continuous performance validation.
• Application Coverage: Conducted end-to-end performance testing of mobile apps, APIs, message queues (ActiveMQ, IBM MQ), and UI applications (B2B, B2C, portals).
• Mobile App Performance: Performed automated performance testing using NeoLoad and JMeter, simulating real-world load to improve responsiveness and stability.
• Defined performance objectives with stakeholders and ensured alignment with business requirements.
• Designed and executed web and mobile tests (Perfecto), simulating realistic user load and interactions.
• Analyzed results, identified bottlenecks, and recommended optimizations for stability, scalability, and responsiveness.
• Validated overall application reliability under varying load conditions.
• Created and enhanced LoadRunner scripts (VuGen) with parameterization, correlation, and real-world scenario simulation.
• Executed performance tests using HP Performance Center, defining ramp-up/ramp-down of virtual users and preparing test suites.
• Analyzed results using metrics such as CPU, memory, disk, throughput, response times, web/database monitors, and heap/garbage collection behavior.
• Identified performance bottlenecks, memory leaks, and coordinated with development teams for issue resolution.
• Conducted various tests: load, stress, spike, endurance, regression, mobile, batch, and resilience/recoverability testing.
• Generated performance reports and graphs to provide actionable insights and recommendations.
• Experienced in performance testing of PEGA and cloud-based applications.
Performance Engineering
LoadRunner
JMeter
NeoLoad
Performance Center
AWS
SQL
Test result analysis
Memory management
Troubleshooting
Performance reporting
Load balancing
Capacity planning
Load testing
Performance benchmarking
Performance tuning
Stress testing
System analysis
Test automation
Concurrency control
Database analysis
CPU utilization
Network analysis
Scalability testing
Performance improvement recommendations
Latency measurement
Code optimization
AWS CloudWatch
Performance test tools
Volume testing
Resource profiling
Spike testing
Endurance testing
Test automation scripting
Performance test planning
Problem-solving
Reliability
Team collaboration
Quality assurance
Performance monitoring
Azure Monitor
Docker
Kafka