Summary: This article examines the impact of two I/O- and CPU-intensive processes on server performance in several test scenarios as a model for your own evaluation process. It describes the benchmarking tools used and the conclusions reached.
When asked how he conquered the world, Alexander the Great replied, "By not delaying." If you want your Domino server to conquer the world, you probably want to reduce delays as much as possible. However, you may be asking yourself, "What can I do to reduce server delays and optimize server performance?"
To optimize your server performance, you need to understand server performance analysis. Server performance analysis involves testing how different factors affect the performance of a server and drawing conclusions about the results. You can then use this information to improve server performance in your environment. This article describes performance analysis by showing you how we do it here at Lotus/Iris. It introduces you to the tools we use and gives you an in-depth look at two of our performance analyses and their results. The first test shows the impact of port encryption on server performance. The second test shows how the NSF_BUFFER_POOL_SIZE setting in the NOTES.INI file affects server performance, through an analysis of several different test scenarios.
Understanding the analysis behind each test is important because a recommendation that significantly enhances performance on one server may not have the same impact on another server running in a slightly different environment or configuration. Treat the test results in this article as guidelines for understanding Domino server performance and for configuring your own Domino system. Even when we observe a certain level of performance at a given configuration, doubling or halving a specification does not always guarantee a proportional change (unfortunately). That is what performance evaluation is all about, and why we run so many different tests! The test data and methodology that support the recommendations in this article can help you decide what might work in your environment. They may also help you plan how to set up your environment in the future.
To read more recommendations for improving server performance, see the sidebar, "Top 10 ways to improve your server performance."
Benchmarking tools
Here at Lotus/Iris we do our own server performance testing, and we put a lot of effort into relating benchmarks to customer needs. A benchmark is a standard by which the performance of hardware or software is measured. Our benchmark tests typically measure the efficiency and the speed at which the Domino server performs a certain task. You can use the results of our tests to help you with capacity planning and for optimizing the servers in your own environment.
We conduct our tests by using the following tools:
- PerfMon on Windows NT
- SAR, VMStat, IOStat, NetStat, and PerfMeter on UNIX
- Pulse, MPCPUMON, WSTUNE, BonAmi's CPU Monitor Plus, COL Systems' Osrm2 Lite Performance Monitor Utility, and Performance 3.0+ by Clear & Simple on OS/2
- The Performance Tuning Redbook on AIX
We also use Domino's own tools. In addition, we use an internally developed tool called NotesBench, a benchmarking tool that lets vendors and other organizations test how many users and transactions a particular hardware configuration can support. NotesBench simulates the behavior of Domino workstation-to-server or server-to-server operations. It returns measurements that let you evaluate server performance in relation to the server system's cost.
The NotesBench Consortium is an independent, non-profit organization dedicated to providing Domino and Notes performance information to customers. It requires each member to run the NotesBench tests in the same manner and allows the tests to be audited. You can visit the NotesBench Web site to view published data and test results.
An in-depth test of port encryption on server performance
Now we will take you through some examples of server performance tests we conduct right here at Iris. In the first test, we started with the hypothesis that implementing port encryption would affect server performance. We set out to see if this was true by defining the functionality being tested, outlining our test methodology and test data, and summarizing what the findings of the tests would mean to users. By reading the following test data, you can see how we arrived at our results and evaluate whether or not the findings apply to your environment.
What is port encryption?
You can encrypt network data on specific ports to prevent network eavesdropping with a network protocol analyzer. Network encryption occurs at the network transfer layer of a selected protocol. Network data is encrypted only while it is in transit. Once the data has been received, network encryption is no longer in effect.
Network data encryption occurs if you enable the encryption on either side of a network connection. For example, if you enable network data encryption on a TCP/IP port on a server, you don't need to enable encryption on the TCP/IP ports on workstations or servers that connect to the server.
Multiple, high-speed encrypted connections to a server can affect server performance. Encrypting network data has little effect on client performance. We wanted to determine the impact of TCP/IP port encryption on a server, and see whether the impact varied with different user loads. With this as our performance objective, we set up a series of tests.
Test methodology and test data
To run the tests, we set up a server with the following configuration:
- CPUs: Three 200MHz Pentium Pro; 512KB L2 cache
- Hard Drives: Ultra Wide SCSI II (Adaptec), nine 4GB disks
- RAID Level: RAID 0
- Network: 100Mbit Ethernet
- Memory: 1.5GB
- OS: Windows NT 4.0
- Domino: Release 4.6a
We then initiated active user loads of 500, 800, and 1,000 users on the server to assess whether port encryption affected CPU utilization. We conducted each load test twice to verify that the results were reproducible. We modeled each user after the standard NotesBench Mail & Discussion database workload, with the following changes:
- Messages were 10K versus the NotesBench standard of 1K.
- All messages were delivered to local addresses, which tested same-server delivery.
- Users read the messages delivered during the test, which increased view refresh activity.
We generated and delivered an average of 70,000 messages during each user load point. A total of 2,500 person entries and mail databases were on the system. To see the results of these tests, see the sidebar, "Port Encryption Test Results." For our conclusion, see the following section "What did we find out?"
What did we find out?
In testing port encryption versus no port encryption, the relative difference in CPU utilization was in the 5-10% range for 500, 800, and 1,000 active users. User response times were less than 0.20 seconds for all three active user loads. Peak CPU utilization was in the greater-than-90% range during initial user connections to the server.
What we learned from these tests, in the below-60% CPU utilization range, is that:
- There's an incremental impact of port encryption on system utilization.
- There's an incremental impact of port encryption on end-user response time.
What this data does not tell us is whether these impacts hold true at a different CPU utilization range (such as 85% or greater).
The results of these tests led us to conclude that port encryption increases CPU utilization by 5-10%. This means that you can turn on port encryption without worrying about an impact on your users' response time, as long as overall system utilization stays below 60%.
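You can turn this conclusion into a simple headroom check before enabling encryption. The following Python sketch is illustrative only: the 10% overhead and 60% ceiling are taken from the test results above, and the function name and interface are our own invention, not part of any Domino tooling.

```python
def encryption_headroom_ok(current_cpu_pct, overhead_pct=10.0, ceiling_pct=60.0):
    """Return True if enabling port encryption is unlikely to push CPU
    utilization past the range these tests covered.

    current_cpu_pct -- measured average CPU utilization, in percent
    overhead_pct    -- assumed worst-case encryption overhead (tests saw 5-10%)
    ceiling_pct     -- the utilization range the tests covered (below 60%)
    """
    projected = current_cpu_pct * (1 + overhead_pct / 100.0)
    return projected <= ceiling_pct

# A server averaging 50% CPU projects to 55% with encryption: still safe.
print(encryption_headroom_ok(50))   # True
# A server averaging 58% projects to about 64%: outside the tested range.
print(encryption_headroom_ok(58))   # False
```

Note that a result of False does not mean encryption will hurt performance, only that the server would land outside the utilization range these tests measured.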
An in-depth test of the NSF_BUFFER_POOL_SIZE setting on server performance
This section takes you through another series of in-depth tests we performed here at Iris. We wanted to determine whether using the NSF_BUFFER_POOL_SIZE setting in the server's NOTES.INI file would improve system utilization and response time. This information is important for capacity planning. We also wanted to see how the system used memory in three cases: when we left NSF_BUFFER_POOL_SIZE unset and let Domino allocate space; when we specified a quarter of available memory, which is the minimum requirement and comes under stress as the user count grows; and when we specified half of available memory, which may not be completely utilized. This is important information for you as an administrator, because you control the buffer pool size specification and can decide what size to specify.
Note: While we found improved performance when specifying a quarter of available memory (which was approximately equal to leaving the Domino Server at the default setting), specifying half of available memory, within the configurations evaluated for this paper, showed no performance gains. Also, observations about performance behavior experienced at a particular memory configuration may or may not apply to systems at a higher RAM configuration.
In this second in-depth test, we actually ran three test case scenarios to find out the effect of the NSF_BUFFER_POOL_SIZE setting on memory utilization, system performance, and CPU utilization. In the first scenario, we started with the hypothesis that setting NSF_BUFFER_POOL_SIZE to a quarter of available memory, within the test system configuration, would yield the best system response time and the lowest processor utilization (Test Case #1). Our second hypothesis was that specifying the buffer pool size affects memory utilization differently at various user loads (Test Case #2).
Our third hypothesis was that the time taken to rebuild views in a database is independent of the memory available and the number of CPUs (Test Case #3). This would mean that purchasing more memory wouldn't reduce rebuild time. It would also mean that rebuilding the views for two databases of the same size takes about double the time it takes for one database; in other words, that the Domino internals handle multiple simultaneous tasks as efficiently as possible.
We set out to see if these hypotheses were true by defining the functionality being tested, outlining our test methodology and test data, and summarizing what the findings of the tests would mean to users. By reading the following test data, you can see how we arrived at our results and evaluate whether or not the findings apply to your environment.
What is the NSF_BUFFER_POOL_SIZE setting?
Indexing a database can be a time-consuming and resource-intensive activity. To control this process, you can use the NSF_BUFFER_POOL_SIZE setting in the NOTES.INI file to specify the maximum size (in bytes) used for indexing. This setting controls the size of the NSF buffer pool, a section of memory dedicated to buffering I/O transfers between the NIF indexing functions and disk storage. The number of users, size and number of views, and number of databases all affect how you should set the buffer pool specification.
Using the NSF_BUFFER_POOL_SIZE setting can affect your server's response time and system resource utilization. Therefore, we wanted to determine the impact of both setting and not setting the NSF_BUFFER_POOL_SIZE on server performance.
Test methodology and test data
To run all the test cases, we set up a server with the following specifications:
- CPUs: Four 200MHz Pentium Pro; 512KB L2 cache
- Hard Drives: Ultra Wide SCSI II (Adaptec), nine 4GB disks
- RAID Level: RAID 0
- Network: 100Mbit Ethernet
- Memory: 1.5GB
- OS: Windows NT 4.0
- Domino: Release 4.6a
We tested a system configured with 128MB of memory, and varied the NSF_BUFFER_POOL_SIZE setting from the default of no specification, to a quarter of available memory (the minimum specification, 32MB), to half of available memory (the maximum recommended, 64MB). We also ran these values on a system configured with 256MB of memory, adjusting the NSF_BUFFER_POOL_SIZE specification so that it remained proportional to total memory. Use caution when you raise NSF_BUFFER_POOL_SIZE; depending on your configuration, you could see different results. We used Domino Release 4.6a for all of our tests.
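Because NSF_BUFFER_POOL_SIZE is specified in bytes, it is easy to miscalculate the value for a given fraction of RAM. This short Python sketch shows the arithmetic behind the settings used in these tests; the helper function is our own, not a Domino utility.

```python
def buffer_pool_bytes(total_ram_mb, fraction=0.25):
    """Compute an NSF_BUFFER_POOL_SIZE value in bytes for a given
    fraction of physical RAM. The tests used a quarter of memory
    (the minimum) and half of memory (the maximum recommended)."""
    return int(total_ram_mb * fraction) * 1024 * 1024

# The two explicit settings used on the 128MB test configuration:
quarter = buffer_pool_bytes(128, 0.25)   # 32MB  -> 33554432 bytes
half = buffer_pool_bytes(128, 0.50)      # 64MB  -> 67108864 bytes

# The resulting NOTES.INI line for the quarter-of-memory case:
print(f"NSF_BUFFER_POOL_SIZE={quarter}")  # NSF_BUFFER_POOL_SIZE=33554432
```

The same function gives the proportional values we used on the 256MB configuration (64MB and 128MB respectively).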
Test Case #1 - Specifying the NSF_BUFFER_POOL_SIZE setting vs. using the default setting
In this test case, we assessed processor load and response time for various user loads. In one case, we configured the server to use the default NSF_BUFFER_POOL_SIZE setting; when you don't specify NSF_BUFFER_POOL_SIZE, the system allocates a little less than a quarter of available memory. In the other case, we specified the NSF_BUFFER_POOL_SIZE explicitly.
We executed the NotesBench Groupware_B workload, which closely simulates a Domino user who takes advantage of many of the collaborative features. The users' activities stress more than a single subsystem: the simulation includes replication, mail activity, and network activity. We performed evaluations simulating one, 100, 150, 200, 250, 300, 350, and 400 users. While the tests executed, we used the Server.Planner Probe to monitor overall system response time.
To see the results of using the NSF_BUFFER_POOL_SIZE setting, versus using the default setting for memory, see the sidebar, "Test Case #1 Results." For our conclusion, see the section "What did we find out?"
Test Case #2 - Deciding on the amount of memory needed per user
This scenario was very similar to Test Case #1. It tested the Groupware_B workload simulating one, 100, 150, 200, 250, 300, 350, and 400 users. We added another system configuration for analysis: one CPU with 256MB of memory, accepting the default NSF_BUFFER_POOL_SIZE specification. This allowed us to evaluate memory utilization at different user loads.
To see the results of using the NSF_BUFFER_POOL_SIZE setting, and how it affects memory utilization for different numbers of users, see the sidebar, "Test Case #2 Results." For our conclusion, see the section "What did we find out?"
Test Case #3 - Rebuilding views
In this test case, we tested the performance impact of rebuilding views within a database. We used a discussion database that supported seven views, and varied the database size through 8K, 16K, 32K, 64K, 128K, and 256K. We also manipulated a variety of system parameters, including the NSF_BUFFER_POOL_SIZE setting, the number of CPUs, and the amount of available memory. This information can help you:
- Anticipate the size of databases after they are rebuilt
- Plan for the disk requirements you need to support your users
- Influence the direction of your Domino database design by making you aware of the benefits and impact of supporting multiple databases of smaller size, or of combining them into a large database
- Evaluate the impact and requirements of supporting one or more Update tasks on single and multiple-CPU systems
To see the results of adjusting system parameters and the effect on rebuilding views, see the sidebar, "Test Case #3 Results." For our conclusion, see the following section "What did we find out?"
What did we find out?
Our test data supports the following conclusions:
- In Test Case #1, we found that setting NSF_BUFFER_POOL_SIZE to a quarter of available memory, within the tested system configuration, yielded the best response time and the lowest processor utilization. While we found improved performance when specifying a quarter of available memory (which was equivalent to leaving the default setting, the amount the Domino server self-tunes to), specifying half of available memory did not automatically improve performance. This disproves the theory that "more is better." It is also important to note that on server configurations with lower amounts of RAM, raising NSF_BUFFER_POOL_SIZE removes memory from the total available to the NT operating system and the Domino server, both of which need that memory to operate effectively. Thus, raising the buffer pool size can reduce the effectiveness and overall performance of a Domino server configured with a lower amount of RAM.
- In Test Case #2, we found that, on average, you need 1MB per user when you plan a system with average-to-advanced users.
- In Test Case #3, we found that the NSF_BUFFER_POOL_SIZE setting had no observable effect on view rebuild time.
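The Test Case #2 rule of thumb, roughly 1MB per average-to-advanced user, lends itself to a quick capacity estimate. In this illustrative Python sketch, the base allowance for the operating system and the Domino server itself is an assumption of ours, not a value from the tests, so substitute a figure appropriate to your own configuration.

```python
def required_ram_mb(active_users, mb_per_user=1.0, base_mb=64):
    """Rough RAM estimate using Test Case #2's rule of thumb of about
    1MB per average-to-advanced active user.

    base_mb is an assumed allowance for the OS and the Domino server
    itself -- it was not measured in these tests.
    """
    return base_mb + active_users * mb_per_user

# Planning for the 400-user load point from these tests:
print(required_ram_mb(400))   # 464.0 (MB)
```

An estimate like this is a starting point for capacity planning, not a guarantee; as the article notes, observations at one memory configuration may not apply at a higher one.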
The future of server performance tests
There are many ways that you can optimize the performance of your servers. It is important that you understand the recommendations in the context of the tests that support them. This can help you decide on the hardware you should buy to set up or upgrade your servers. It can also help you to better understand the price/performance trade-offs you can make. Then you can make informed decisions to reduce delays and optimize your server's performance. You can help your server conquer the world!