Running Large Scale Tests
BlazeMeter gives you all the resources you need to execute large scale tests 'on the fly'.
This article will show you the steps you should take to run a load test of more than 50K concurrent users.
Quick Overview of the Steps
- Write your script
- Test it locally with JMeter
- Run BlazeMeter Sandbox Testing
- Set up the number of Users-per-Engine using 1 Console & 1 Engine
- Set up and test your Cluster (1 Console & 10-14 Engines)
- Use the Multi Test feature to reach your max CC goal
Step 1: Write Your Script
There are various ways to get your script:
- Use the BlazeMeter Chrome Extension to record your scenario
- Use the JMeter HTTP(S) Test Script Recorder. This sets up a proxy you can run your traffic through so that every request is recorded
- Build everything manually from scratch. This is more common for functional/QA tests
- You'll need to change certain parameters, such as Username & Password, or you might want to set up a CSV file with those values so each user can be unique.
- You might need to extract elements such as Token-String, Form-Build-Id and others using Regular Expressions, the JSON Path Extractor or the XPath Extractor. This will enable you to complete requests like "AddToCart", "Login" and more (a rough sketch of this flow follows this list). See this article regarding these procedures.
- You should keep your script parameterized and use configuration elements like HTTP Request Defaults to make your life easier when switching between environments.
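If it helps to picture what these JMeter elements do, here is a rough Python sketch of the same idea: reading per-user credentials from a CSV (the role of a CSV Data Set Config) and pulling a token out of a response with a regular expression (the role of a Regular Expression Extractor) before posting a login. The URL, field names and token pattern are invented placeholders, not something taken from BlazeMeter or from this article.

```python
# Rough illustration of what CSV Data Set Config + a Regular Expression
# Extractor do inside a JMeter script. The URL, field names and the
# token pattern below are hypothetical placeholders.
import csv
import re
import requests

BASE_URL = "https://example.com"  # placeholder, like HTTP Request Defaults

def login(username, password):
    session = requests.Session()
    # GET the login page and extract a token embedded in the HTML
    page = session.get(f"{BASE_URL}/login")
    match = re.search(r'name="form_token" value="([^"]+)"', page.text)
    token = match.group(1) if match else ""
    # POST the credentials plus the extracted token
    resp = session.post(f"{BASE_URL}/login",
                        data={"username": username,
                              "password": password,
                              "form_token": token})
    return resp.status_code

# Each "virtual user" gets its own row, so every user is unique
with open("users.csv", newline="") as f:
    for row in csv.DictReader(f):   # columns: username,password
        print(row["username"], login(row["username"], row["password"]))
```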
Step 2: Testing Locally With JMeter
Start debugging your script with one thread and one iteration, using the View Results Tree element, the Debug Sampler and the Dummy Sampler. Keep the Log Viewer open in case any JMeter errors are reported.
Go over the True and False responses of all the scenarios to make sure the script is performing as you expected.
After the script has run successfully using one thread, raise it to 10-20 threads for ten minutes and check:
- Are the users coming up as unique (if this was your intention)? A quick spot-check sketch for this follows the list.
- Are you getting any errors?
- If you're running a registration process, take a look at your backend - are the accounts created according to your template? Are they unique?
- Test statistics on the summary report - do they make sense (in terms of average response time, errors, hits/s)?
- Clean it up by removing any Debug/Dummy Samplers and deleting your script listeners
- If you use Listeners (such as "Save Responses to a file") or a CSV Data Set Config, make sure you don't use any local paths. Use only the filename (as if it were in the same folder as your script)
- If you're using your own proprietary JAR file(s), upload them.
- If you're using more than one Thread Group (or not the default one) - set the values before uploading the script to BlazeMeter.
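Before uploading, a quick way to answer the uniqueness question above is to scan your data file directly. Here is a minimal Python sketch, assuming a file named users.csv with a username column (adjust both names to your own data):

```python
# Spot-check that the values feeding each virtual user are unique.
# "users.csv" and the "username" column are assumed names - adjust them
# to match your own data file.
import csv
from collections import Counter

with open("users.csv", newline="") as f:
    usernames = [row["username"] for row in csv.DictReader(f)]

duplicates = [name for name, count in Counter(usernames).items() if count > 1]
print(f"{len(usernames)} rows, {len(set(usernames))} unique usernames")
if duplicates:
    print("Duplicate entries:", duplicates)
```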
Step 3: BlazeMeter Sandbox Testing
If that's your first test, take a look at this article on how to create tests in BlazeMeter.
The Sandbox is actually any test which has up to 20 users, uses only the console, and runs for up to 20 minutes.
The Sandbox configuration allows you to test your script and your backend and make sure everything works well.
Here are some common issues you might come across:
- Firewall: make sure your environment is open to BlazeMeter's IP ranges (the CIDR list) and whitelist them
- Make sure all of your test files (e.g. CSVs, JARs, JSON files, user.properties) are present
- Make sure you didn't use any local file paths
A Sandbox configuration can be:
- Engines: Console only (1 console, 0 engines)
- Threads: 1-20
- Ramp-up: 300-1,200 seconds
- Iteration: forever
- Duration: 10-20 minutes
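As a worked illustration of these limits, here is a small Python sketch (plain arithmetic, using the thresholds listed above) that checks whether a planned configuration still qualifies as a Sandbox test:

```python
# Check a planned configuration against the Sandbox limits listed above.
def is_sandbox(engines, threads, rampup_seconds, duration_minutes):
    return (engines == 0                      # console only
            and 1 <= threads <= 20
            and 300 <= rampup_seconds <= 1200
            and 10 <= duration_minutes <= 20)

print(is_sandbox(engines=0, threads=20, rampup_seconds=600, duration_minutes=20))  # True
print(is_sandbox(engines=1, threads=50, rampup_seconds=600, duration_minutes=20))  # False
```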
You should also check the Monitoring Report to see how much memory & CPU was used. This should help you with step four.
Step 4: Set Up the Number of Users-Per-Engine Using 1 Console & 1 Engine
Now that we're sure the script runs flawlessly in BlazeMeter, we need to figure out how many users we can apply to one engine.
Here is a way to figure this out without referring back to the Sandbox test data.
Set your test configuration to:
- Number of threads: 500
- Ramp-up: 40 minutes
- Iteration: forever
- Duration: 50 minutes
- Use 1 console and 1 engine.
If your engine didn't reach either 75% CPU utilization or 85% memory usage (one-time peaks can be ignored):
- Change the number of threads to 700 and run the test again
- Keep raising the number of threads until you get to 1,000 threads or 60% CPU
- If your engine did reach one of these limits, look at when the test first got to 75% and note how many users were running at that point (a short sketch of this arithmetic follows the list)
- Run the test again. This time, instead of 500 threads, enter the number of users you got from the previous test
- Set the ramp-up time you want for the real test (5-15 minutes is a great start) and set the duration to 50 minutes.
- Make sure you don't go over 75% CPU or 85% memory usage throughout the test
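The arithmetic behind this calibration is straightforward: with a linear ramp-up, the number of users running when the engine first crosses a limit is roughly threads * (time of crossing / ramp-up time). Here is a minimal Python sketch of that calculation; the monitoring samples are invented examples - in practice you read them off the Monitoring Report:

```python
# Estimate how many users one engine can sustain from a calibration run:
# 500 threads ramped up over 40 minutes, with samples of
# (minute, cpu%, mem%) read off the Monitoring Report.
# The sample data below is invented for illustration.
THREADS = 500
RAMP_UP_MIN = 40

samples = [(5, 22, 40), (10, 38, 55), (15, 52, 63),
           (20, 68, 72), (25, 77, 80), (30, 84, 88)]

def users_at_limit(samples, cpu_limit=75, mem_limit=85):
    for minute, cpu, mem in samples:
        if cpu >= cpu_limit or mem >= mem_limit:
            # users active when the limit was first crossed
            return int(THREADS * min(minute / RAMP_UP_MIN, 1.0))
    return THREADS  # never hit a limit: try a higher thread count

print(users_at_limit(samples))   # -> 312 users per engine in this example
```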
Multi Location Tests
BlazeMeter uses a variety of cloud providers, which have different types of machines and network infrastructures. For a Multi Location test (CDN test), which runs engines on all of our cloud providers, it's recommended to verify that every load engine, regardless of the cloud provider, can sustain the load and produce the desired outcome.
Step 5: Set Up and Test Your Cluster
We now know how many threads we can get from one engine. At the end of this step, we'll know how many users one Cluster (test) can get us.
A Cluster is a logical container which has one console and 0-14 engines.
Even though you can create a test with more than 14 engines, it actually creates two clusters and clones your test.
The maximum of 14 engines per console is based on BlazeMeter's own testing, which guarantees that the console can handle the pressure of 14 engines generating a lot of data to process.
So, at this step, we'll take the test from step four and change only the number of engines, raising it to 14.
Run the test for the full length of your final test. While the test is running, go to the Monitoring Report and:
- Verify that none of the engines pass the 75% CPU or 85% Memory limits
- Locate your console label (to find it, go to the Logs Report -> console's log and look for its private IP). The console should also stay below the 75% CPU and 85% Memory limits.
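If you jot down per-engine peaks from the Monitoring Report, a check like the following (plain Python, with invented sample numbers) flags any engine, or the console, that crossed the limits:

```python
# Flag engines (or the console) whose peak CPU/memory crossed the limits.
# The peak values below are invented examples - read yours from the
# Monitoring Report.
LIMITS = {"cpu": 75, "mem": 85}

peaks = {
    "engine-1": {"cpu": 68, "mem": 71},
    "engine-2": {"cpu": 79, "mem": 70},   # over the CPU limit
    "console":  {"cpu": 41, "mem": 55},
}

for name, usage in peaks.items():
    over = [k for k in LIMITS if usage[k] >= LIMITS[k]]
    status = "OVER LIMIT: " + ", ".join(over) if over else "ok"
    print(f"{name}: cpu={usage['cpu']}% mem={usage['mem']}% -> {status}")
```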
By the end of this step, you should know:
- The Users per Cluster you'll have
- The Hits/s per Cluster you'll reach
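The per-cluster numbers are simple multiplication. Here is a short Python sketch with placeholder per-engine figures - substitute the values you measured in step four and in your own reports:

```python
# Per-cluster capacity, using placeholder per-engine numbers.
users_per_engine = 500           # from your step 4 calibration
hits_per_sec_per_engine = 250    # placeholder - read from your own reports
engines_per_cluster = 14         # maximum engines behind one console

print("Users per cluster:", engines_per_cluster * users_per_engine)         # 7000
print("Hits/s per cluster:", engines_per_cluster * hits_per_sec_per_engine) # 3500
```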
Step 6: Use the 'Multi-Test' Feature to Reach Your Maximum CC Goal
We've reached the final stage!
We know the script is working, we know how many users one engine can sustain, and we know how many users we can get from one Cluster.
Let’s assume these values:
- One engine can have 500 users
- The cluster will have 12 engines
- We aim to test for 50K users
We could go with 8 clusters of 12 engines (48K) and one cluster with 4 engines (the other 2K) - but it's better to spread the load like this:
Instead of 12 engines per cluster, we'll use 10. Therefore, we'll get 10*500 = 5K from each cluster and we'll need 10 clusters to reach 50K.
This helps us as we:
- Don't have to maintain two different test types
- Can grow by another 5K by simply adding the same cluster/test again to the 'Multi Test' configuration (a 5K step is much more common than a 6K one)
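The same trade-off can be written as a few lines of arithmetic. The numbers below are the ones from this example (500 users per engine, a 50K target, identical clusters), so substitute your own:

```python
# Split a 50K-user target into identical clusters of 500-user engines.
target_users = 50_000
users_per_engine = 500          # from step 4
engines_per_cluster = 10        # chosen so every cluster is identical

users_per_cluster = engines_per_cluster * users_per_engine      # 5,000
clusters_needed = -(-target_users // users_per_cluster)         # ceiling -> 10

print(f"{clusters_needed} clusters x {engines_per_cluster} engines "
      f"x {users_per_engine} users = "
      f"{clusters_needed * users_per_cluster:,} users")
```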
Set Up a Multi-Test
- Select 'Add Multi-Test' (See image no. 1 below)
- Add the test that you verified in step five. You can add it by using the Drag-and-Drop feature as many times as you need - it will simply duplicate itself (see image no. 2 below)
- You can change the configuration of each test to load from a different region, have a different script/csv/other file, use a different network emulation, different parameters etc.
- Your Multi-Test for 50K users is ready to go. Press start on the Multi-Test to launch all tests with 5K users from each one.
- The aggregated report of your Multi Test will start generating results in a few minutes. You can also see the results of each individual test by simply opening its report.
Running The Multi-Test
1. After saving the test and pressing the 'play' button, you'll be notified that you're about to launch a new load.
2. You'll also be given the option to select 'Synchronized Start' to ensure that all the servers are up before actually starting the test. This option is useful if you're concerned that some servers or locations are significantly slower than others and you want to synchronize them.
3. After pressing 'Launch Servers', you'll be shown the 'Booting' Window. This shows you the progression of launching the load engines and consoles across the entire Multi Test.
That's it! Your Multi-Test is up and running!
If you'd like to know how to analyse the Multi Test Report, please check out this article.