Testing vs user-experienced performance: lessons learnt from production.
Learn how performance changes once you start looking at your system as an experience rather than a set of endpoints to test.
What would you do if you were asked to improve the performance of a web application? You would probably start by devising a performance test strategy, implementing tests, and gathering results… but there is another option.
Performance testing is hard. Not only are we tasked with creating a model of how our application is going to be used; we also need to simulate how it is going to grow, and how that growth may affect its performance.
Selecting the proper metric is also a challenge. Should we measure the response time of a single request? What if multiple requests need to complete for a user to achieve something in our app? And should we focus on the backend only? A lot happens in the frontend.
So what should the metric be? I am going to share my experience with performance testing, starting with the decision on a metric. I will discuss which metrics you can choose from, what Apdex is, and why you will most likely end up implementing something custom that works for your app.
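To make the metric discussion concrete, here is a minimal sketch of the standard Apdex formula, (Satisfied + Tolerating/2) / Total, where requests at or below a target threshold t count as satisfied and those up to 4t as tolerating. The sample latencies and the threshold are made up for illustration.

```python
def apdex(response_times, t):
    """Apdex score for a list of response times (seconds) against target t."""
    satisfied = sum(1 for r in response_times if r <= t)           # fast enough
    tolerating = sum(1 for r in response_times if t < r <= 4 * t)  # slow but usable
    # Requests above 4t are "frustrated" and contribute nothing.
    return (satisfied + tolerating / 2) / len(response_times)

samples = [0.2, 0.4, 1.1, 2.5, 6.0]  # hypothetical measured latencies
print(apdex(samples, t=0.5))  # 2 satisfied, 1 tolerating, 2 frustrated -> 0.5
```

The single 0-to-1 score is what makes Apdex attractive, and also why a custom, app-specific metric often wins: the threshold t hides all the nuance of what "achieving something" means for your users.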
I will question whether you need full-blown performance testing at all if you happen to work in continuous deployment, and show how you can leverage access to real-time metrics, observability, and feature flags to experiment with performance improvements. What about regressions, both functional and performance?
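The flag-driven experimentation idea can be sketched roughly as follows. All names here (the "fast_search" flag, the two search implementations) are hypothetical; the point is that a percentage rollout turns a performance change into an experiment you can measure with real-time metrics and roll back instantly.

```python
import random

FLAGS = {"fast_search": 0.10}  # route 10% of requests through the new path

def is_enabled(flag, rollout=FLAGS):
    """Per-request coin flip against the flag's rollout percentage."""
    return random.random() < rollout.get(flag, 0.0)

def legacy_search(query):
    return f"legacy:{query}"   # baseline implementation

def fast_search(query):
    return f"fast:{query}"     # candidate optimization under test

def search(query):
    # Observability on each cohort's latency tells you whether the
    # optimization works in production; the flag is the kill switch.
    if is_enabled("fast_search"):
        return fast_search(query)
    return legacy_search(query)
```

Compared with a synthetic load test, the "model" of user behaviour here is the real traffic itself, which is exactly the leverage continuous deployment gives you.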
Have you heard about consistency checks? What if I told you that you can safely release every single change that requires refactoring old code, without hassle, and rely on anomaly detection systems to spot performance regressions?
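A consistency check in this sense can be sketched like so: run the refactored code path alongside the old one on live traffic, compare results, and log any mismatch while the user keeps getting the old behaviour. This is a minimal illustration of the pattern (function names are hypothetical), not a production harness.

```python
import logging

def consistency_check(old_fn, new_fn, *args):
    """Shadow-run new_fn next to old_fn and compare their results.

    The old result is always returned, so shipping the refactor is safe:
    a divergence or an exception in the new path only produces a log line
    that monitoring/anomaly detection can alert on.
    """
    old_result = old_fn(*args)
    try:
        new_result = new_fn(*args)
        if new_result != old_result:
            logging.warning("Mismatch: old=%r new=%r args=%r",
                            old_result, new_result, args)
    except Exception:
        logging.exception("Refactored code path raised")
    return old_result
```

Once the mismatch rate sits at zero for long enough, the new path can be promoted and the old one deleted.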
Improving performance is not an easy goal, but with the proper tools and approach you can improve things iteratively, starting with what matters most to your users.
30-minute New Voice Talk