Just to be clear, I wasn't recommending that you set the min and max the same as a solution - just explaining what happens. If there are no errors and no bad effects from processes restarting, my guess is that there are simply times when the load is triggering a new process (expected) and then one or more of the original processes are being terminated (also expected) once the peak has passed. You can look at the metrics for current, peak and total processes to confirm that is the case.

For instance, let's say your min is 8 and your max is 12 - look at your peak processes to see if you ever exceeded your min and, if so, by how much. Exceeding the min is not a bad thing; it's the way process pooling is supposed to work. So let's say we see a peak of 10 - we now know that there were times when the load exceeded the min pool's capacity and we needed two additional processes to handle it.

Now look at total. If total also equals 10, we know that the peak load only exceeded the capacity once during the observation period, and process pooling did its job and handled the peak. OTOH, if total equals 20 or 30, then we know that the min pool was exceeded a number of times and we may want to watch more closely to see if the min should be adjusted a bit higher. The higher the total process count, the more often additional processes needed to be started.
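
To make that reasoning concrete, here's a minimal sketch in Python. It isn't tied to any particular server's API - the min/peak/total numbers are assumed to come from whatever metrics page or log your server exposes, and the function name is just for illustration.

```python
# Hypothetical helper: interpret a pool's min setting against the observed
# peak and total process counts for an observation period.
def interpret_pool_metrics(min_procs: int, peak_procs: int, total_procs: int) -> str:
    if peak_procs <= min_procs:
        return "Load never exceeded the min pool; no extra processes were needed."

    extra_at_peak = peak_procs - min_procs     # e.g. 10 - 8 = 2 extra processes at the peak
    extra_started = total_procs - min_procs    # every process started beyond the original min

    if extra_started <= extra_at_peak:
        # total ~= peak: the extra processes only had to be started once
        return (f"Peak needed {extra_at_peak} extra process(es), started once - "
                "pooling did its job.")

    # total well above peak: extras were started and retired repeatedly
    return (f"{extra_started} extra process starts for a peak overshoot of {extra_at_peak} - "
            "the min pool was exceeded repeatedly; consider raising min.")

# The worked example above: min 8, peak 10, total 10 vs. total 30
print(interpret_pool_metrics(8, 10, 10))
print(interpret_pool_metrics(8, 10, 30))
```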

In all cases, as long as there are no errors in the webapp or the Windows event log, the coming and going of additional processes is not a bad thing.

As Michael mentioned, by setting the min and max the same, you are forcing the application to pend requests that can't be serviced rather than starting up additional processes to handle them. Again, in and of itself that's not a terrible thing (it's the way the system is designed to work) - unless it happens a lot. If you want to go the route of setting min and max the same, run process pooling under observation first to determine whether the pool settings are sufficient to handle even the peak loads.
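
If it helps to visualize the trade-off, here's a toy sketch (hypothetical numbers, not any specific server's behavior) of what happens to a burst of requests when max is above min versus when min equals max:

```python
# Toy model of a request burst hitting a process pool.
def handle_burst(burst: int, min_procs: int, max_procs: int) -> dict:
    busy = min(burst, min_procs)                  # requests served by the resident pool
    spawned = min(max(burst - min_procs, 0),      # extra processes the server is allowed to start
                  max_procs - min_procs)
    queued = burst - busy - spawned               # whatever is left pends until a process frees up
    return {"busy": busy, "spawned": spawned, "queued": queued}

print(handle_burst(10, min_procs=8, max_procs=12))  # {'busy': 8, 'spawned': 2, 'queued': 0}
print(handle_burst(10, min_procs=8, max_procs=8))   # min == max: {'busy': 8, 'spawned': 0, 'queued': 2}
```

Neither outcome is wrong - the question is how often the queued case happens and how long those requests end up waiting.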