In this article we examine four aspects of database performance: hardware performance, configuration tuning, SQL statement execution efficiency, and server-side application effectiveness.
The first case deals with hardware: the database server, the communication environment, or even the client-side system. To find the best configuration, we need to understand the workload that is typical for the hardware being optimized: the number of database clients, the database schema and structure, and the typical operations (queries, data loading, etc.).
Once we have this information, we can create a test project. The test project, together with the related stress testing software, generates the required load for the server and the rest of the environment under test. The most important property of a stress testing tool is comparable report generation: it must provide an internal or external way to compare the results of several test project executions across different hardware environments.
At this point we are ready to start. If necessary, we create the database schema and load some real data, or run a test data generator against the schema. The performance test is an iterative process: run the test project, save the report, and change the configuration (add RAM, replace the server's CPU, change the network router, etc.). We end up with a set of reports, one per tested hardware configuration, and can compare them to find the best one.
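The iterative loop described above can be sketched in a few lines. This is only an illustration: `run_test_project`, the configuration labels, and the simulated timings are hypothetical placeholders for a real stress run and its saved reports.

```python
def run_test_project(config_label):
    # Placeholder for the real stress run: execute the test project
    # against the environment named by config_label and return the
    # total execution time in seconds. Here we only simulate it.
    simulated = {"8GB-RAM": 42.0, "16GB-RAM": 31.5, "16GB-RAM+SSD": 19.8}
    return simulated[config_label]

# One report per tested hardware configuration.
reports = {cfg: run_test_project(cfg)
           for cfg in ("8GB-RAM", "16GB-RAM", "16GB-RAM+SSD")}

# Compare the saved reports and pick the fastest configuration.
best = min(reports, key=reports.get)
print(best)  # → 16GB-RAM+SSD
```

In practice "comparing reports" also involves the counters discussed below, not just total execution time.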
Advanced stress testing tools like DTM DB Stress can do more than measure the state of the database environment during test project execution. They also allow users and system administrators to measure CPU load (on both the server and the client side), memory usage, the number of locks, and other parameters based on performance counters. If two configurations show the same test project execution time but the first uses 30 percent less system memory, we should prefer the first one.
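The general idea of recording resource counters alongside execution time can be sketched with Python's standard library. This is not how DTM DB Stress collects its counters; it is a rough stand-in using `resource.getrusage` (Unix-only) and a toy workload.

```python
import resource
import time

def measure(workload):
    """Run a workload and report wall time, CPU time, and peak memory --
    a rough stand-in for the performance counters a stress tool records."""
    before = resource.getrusage(resource.RUSAGE_SELF)
    start = time.perf_counter()
    workload()
    wall = time.perf_counter() - start
    after = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "wall_seconds": wall,
        "cpu_seconds": after.ru_utime - before.ru_utime,
        "peak_kb": after.ru_maxrss,  # kilobytes on Linux, bytes on macOS
    }

# Toy workload standing in for a test project run.
stats = measure(lambda: sum(i * i for i in range(1_000_000)))
print(stats["wall_seconds"] > 0)
```

With such per-run records, two configurations that tie on execution time can still be ranked by memory or CPU consumption, as in the 30-percent example above.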
The described method allows the system administrator to find the optimal configuration. However, we should stress once again that the optimal configuration depends on several parameters: the test database, the number of clients, and the set of executed scripts. For example, RAM size matters for caching large tables, while CPU performance matters for complex data conversions.
Also, different DBMS products have different hardware requirements for the same database schema. It is very important to measure your actual situation instead of running abstract tests.
The only difference in the second case (configuration tuning) is the area of changes. Here we modify database system settings such as the memory usage strategy, the CPU affinity mask, or even the location of the database system files. We strongly recommend rebooting the server after each configuration change: this prevents the CPU, operating system, and database cache state from influencing the results.
The third case is more difficult because you must identify the core set of SQL statements that are most important for your application. Also, to find the optimal configuration you can change more parameters: the SQL statement itself, server-side indexes, related code such as triggers, etc. This case also comes in two flavors: with or without fetching data to the client system. You may encounter a statement that executes quickly but fetches a large result set very slowly, as well as one that executes slowly but fetches quickly.
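The distinction between execution time and fetch time can be demonstrated with any client API that separates the two steps. As an illustration (using Python's built-in sqlite3 module and an in-memory database, not any particular production DBMS):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO t (payload) VALUES (?)",
                 [("x" * 100,) for _ in range(50_000)])
conn.commit()

# Time the execution step separately from the fetch step.
start = time.perf_counter()
cur = conn.execute("SELECT id, payload FROM t")
execute_seconds = time.perf_counter() - start

start = time.perf_counter()
rows = cur.fetchall()  # pulling 50,000 rows to the client
fetch_seconds = time.perf_counter() - start

print(len(rows))  # → 50000
```

A test project should therefore report both numbers: a query that looks fast by execution time alone may still be the bottleneck once the result set crosses the wire.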
The core of the performance testing is the same as in the first and second cases, but we change the test project to modify the SQL statement or fetching mode instead of making server-side (hardware or software) modifications. Of course, if you need to find the optimal index set or index properties, server-side changes are required as well; however, you can make those changes from within the test project. Prologue and epilogue SQL scripts provide a fast way to alter an index, enable or disable a table's triggers, or perform other setup and teardown.
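The prologue/epilogue idea can be sketched as follows. The table, index name, and query are made-up examples, and a real stress tool would run the prologue and epilogue scripts around the timed run rather than inline like this; sqlite3 is used here only to keep the sketch self-contained.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, float(i)) for i in range(10_000)])

prologue = "CREATE INDEX idx_customer ON orders(customer)"  # setup before the run
test_query = "SELECT SUM(total) FROM orders WHERE customer = 7"
epilogue = "DROP INDEX idx_customer"                        # teardown after the run

conn.execute(prologue)   # try the candidate index
(total,) = conn.execute(test_query).fetchone()
conn.execute(epilogue)   # restore the original state for the next variant
print(total)  # → 495700.0
```

Swapping different `CREATE INDEX` statements into the prologue lets you compare index variants across runs without touching the server configuration by hand.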
The SQL statement execution plan can give an experienced user good optimization ideas. The DTM DB Stress tool has a plug-in module that collects the plan and includes it in the execution report automatically.
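Every DBMS exposes plans differently (EXPLAIN, EXPLAIN PLAN, SHOWPLAN, and so on), so a plan-collecting plug-in is necessarily DBMS-specific. As a generic illustration only, here is how a plan can be captured programmatically with SQLite's EXPLAIN QUERY PLAN:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX idx_name ON t(name)")

# Collect the execution plan the same way a report plug-in might,
# by prefixing the statement under test with the DBMS's plan keyword.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM t WHERE name = 'a'"
).fetchall()
for row in plan:
    print(row)  # the detail column shows whether idx_name is used
```

Attaching such output to each run's report makes it easy to see whether a prologue-created index was actually picked up by the optimizer.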
The last case deals with server-side code: stored procedures, triggers, and functions. The main problem with this kind of optimization is understanding the dependencies between objects. A small change to one function can cause a dramatic performance increase or decrease, so the user must work out a careful road map of changes to achieve his or her goals.