Duration of performance test 1: 11064
Duration of performance test 2: 12026
Frequency of data 1: 60.130434782608695
Frequency of data 2: 60.13

Data 1: DB Disk: 4.553785326086956 %, DB CPU: 31.16754456521739 %, WebApp Disk: 6.755601630434783 %, WebApp CPU: 76.70626195652174 %
Data 2: DB Disk: 4.97898 %, DB CPU: 83.19307049999999 %, WebApp Disk: 6.6787675 %, WebApp CPU: 70.7346175 %
Question 4 b) - 4) Database Management Server and Web Server
Current DB:
- CPU utilization of web server processes: 34.76513315217392 %
- CPU utilization of app server processes: 6.533450918238821 %
- CPU utilization of database processes: 30.121618097826087 %

Big DB:
- CPU utilization of web server processes: 31.922464499999997 %
- CPU utilization of app server processes: 6.044606949220494 %
- CPU utilization of database processes: 81.916143 %
Question 4b) - 4) Application Server
% User Time 1: 5.726243
% User Time 2: 5.302384
Bookzilla test engineers have told you that there is very little virtual memory activity in their systems and that you need not worry about this factor during performance evaluation. Based on the perfmon data, do you agree with this assessment? Provide concrete reasons for your view.
Case 1:
- page faults/sec: 22.768122282608694
- isp\page faults/sec: 796.495847826087
- page input/sec: 0.20137430519480518
- isp\page input/sec: 0.08144670731707317
- page output/sec: nan
- isp\page output/sec: nan

Case 2:
- page faults/sec: 175.48390850000004
- isp\page faults/sec: 742.9731349999998
- page input/sec: 1.4887308368421048
- isp\page input/sec: 0.0598494642857143
- page output/sec: 3.9143164705882345
- isp\page output/sec: nan
For case 1, the database server shows little virtual memory activity: it averages only 22.8 page faults per second. On the web and application server (isp), however, the average is 796.5 page faults per second, and page faults are an indicator of virtual memory activity. Case 2 is similar: 175.5 faults per second on the database server versus 743.0 on the web and application server.

Page faults alone can be soft faults that are resolved entirely in memory; the page input/sec counter measures hard faults, which actually require disk reads. For case 1, page input/sec is below 1 on both the database server and the web and application server. For case 2, it is slightly above 1 on the database server but still not high, and remains below 1 on the web and application server. Because the hard-fault rates are low throughout, we agree that there is not much meaningful virtual memory activity.
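The soft-versus-hard-fault distinction behind this conclusion can be checked directly from the counters above; a minimal sketch in Python (the helper name is ours, not a perfmon counter):

```python
import math

# Separate soft faults (resolved in memory) from hard faults (require a
# disk read). Windows reports hard faults via Pages Input/sec, while
# Page Faults/sec counts both kinds.
def hard_fault_share(page_faults_per_s, page_inputs_per_s):
    """Fraction of page faults that actually hit the disk (nan treated as 0)."""
    if math.isnan(page_inputs_per_s):
        page_inputs_per_s = 0.0
    return page_inputs_per_s / page_faults_per_s

# Case 1, web/app server (isp): 796.5 faults/s but only 0.081 page inputs/s,
# so almost all faults are soft faults.
share = hard_fault_share(796.495847826087, 0.08144670731707317)
```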
Do you agree with the thread/process concurrency information provided by Bookzilla for the Web, application, and database servers? Provide a justification based on the perfmon data.
##### Application Server
Bookzilla stated that the web server processes are assigned 1000 threads; however, it is evident from the data that 1008 threads are in use on isp-01, as seen here:
Thread count web server: 1008.0
There are, however, additional threads beyond the ~1000: each server instance reports its own threads (e.g., \\isp-01\Process(server#0)\Thread Count shows 8 threads per instance), and the srvctrl process on isp-01 contributes a further thread.
isp-01 server control process threads: 1.0
isp-01 server #1 threads (same for other server numbers): 8.0
##### Database Server
The database management system process is stated to have 33 concurrent threads, which is properly reflected in the perfmon data:
Thread count database: 33.0
The thread count provided by Bookzilla fails to account for the true number of threads running on the application/web tier, although it is accurate for the database tier.

Therefore, we do not agree with Bookzilla's statements about the thread counts.
You will observe a slight discrepancy between what you computed in 4.b.4 and 4.b.3. For example, although the database management system process was the only process using the DB machine, its CPU utilization (computed in 4.b.4) is less than that of the CPU utilization of the DB machine computed in 4.b.3. Provide possible explanations for such mismatches.
Some processor time may be spent on transitions (context switches) between the operating system and the DB management system process; this time is charged to the machine as a whole but not to the process.
Another reason could be that the database machine spends additional CPU time creating and invoking the DB management system process itself.
Because multiple threads are present, the DB machine may double-count CPU utilization, so the machine-level measurement comes out higher than that of the actual process.
The database management system process may be waiting for I/O operations to complete, such as reading or writing to disk, which reduces its CPU utilization.
Let us now focus on application-level metrics such as throughput and response time. Compute the following for both Current DB and Big DB:
The per-request mean response time is the sum of the time to establish a connection with the server, wait till the first byte of the response, and ultimately obtain the last byte of the response.
Mean Response Time = Time To Open Connection + First Byte Time + Last Byte Time
Case 1 Mean Response Time: 1.4906 s
Case 2 Mean Response Time: 2.0198 s
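The formula above can be sketched as a one-line helper. The component values in the example call are hypothetical placeholders; only the 1.4906 s and 2.0198 s totals come from the measurements:

```python
# Mean Response Time = Time To Open Connection + First Byte Time + Last Byte Time
def mean_response_time(conn_time_s, first_byte_s, last_byte_s):
    return conn_time_s + first_byte_s + last_byte_s

# Hypothetical component means (illustrative only, not measured values):
r = mean_response_time(0.1, 0.5, 0.9)  # 1.5 s
```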
The throughput in request completions/second.
Throughput = Total Replies / Test Duration
Case 1: 89592 replies / 11136.199 s = 8.04511485472 requests/s
Case 2: 90399 replies / 12110.736 s = 7.46436880467 requests/s
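The throughput computation above can be reproduced directly from the test summaries:

```python
# Throughput = total replies / test duration
def throughput(total_replies, duration_s):
    return total_replies / duration_s

x1 = throughput(89592, 11136.199)  # Case 1 (Current DB), ~8.045 req/s
x2 = throughput(90399, 12110.736)  # Case 2 (Big DB), ~7.464 req/s
```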
The mean think time between successive requests from a customer
From D2L: (Mean Connection Time - (Mean Replies * Mean Response Time)) / Mean Replies
Case 1: (368.5145 - (9.248 * 1.4906)) / 9.248 = 38.357 s
Case 2: (375.8772 - (9.253 * 2.0198)) / 9.253 = 38.602 s
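The D2L think-time formula can be checked in code using the measured summary values above:

```python
# Think time Z = (mean connection time - mean replies * mean response time) / mean replies
def think_time(mean_conn_time_s, mean_replies, mean_resp_time_s):
    return (mean_conn_time_s - mean_replies * mean_resp_time_s) / mean_replies

z1 = think_time(368.5145, 9.248, 1.4906)  # Case 1, ~38.357 s
z2 = think_time(375.8772, 9.253, 2.0198)  # Case 2, ~38.602 s
```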
The mean number of concurrent customer sessions in the system. (Hint: You need to use Little’s law for this)
From D2L: Average Sessions = Throughput * (Mean Response Time + Think Time)
Case 1: 8.04511485472 * (1.4906 + 38.357) = 320.578 Sessions
Case 2: 7.46436880467 * (2.0198 + 38.602) = 303.216 Sessions
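The Little's law calculation above is a direct product of the quantities computed so far:

```python
# Little's law: N = X * (R + Z)
# N = mean concurrent sessions, X = throughput, R = response time, Z = think time
def concurrent_sessions(throughput_req_s, resp_time_s, think_time_s):
    return throughput_req_s * (resp_time_s + think_time_s)

n1 = concurrent_sessions(8.04511485472, 1.4906, 38.357)  # Case 1, ~320.6 sessions
n2 = concurrent_sessions(7.46436880467, 2.0198, 38.602)  # Case 2, ~303.2 sessions
```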
Bookzilla’s test engineers have told you that the network was lightly utilized and that it can be ignored as a factor in your study. Is there any data available to back up this claim?
The Net I/O field in the case 1 summary registers 54.1 KB/s, which is negligible compared with the 100 Mbps capacity of their connection (under 0.5 % utilization). The same holds for case 2, where the Net I/O is 51.3 KB/s. Hence it seems fair to ignore network utilization as a factor in this study.
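The comparison can be made concrete by converting the Net I/O rates into link utilization; a small sketch (assuming 1 KB = 1000 bytes; with 1024-byte KB the result is about 2.4 % higher):

```python
# Convert a Net I/O rate in KB/s into percent utilization of a 100 Mbps link.
def net_utilization_pct(net_io_kb_s, link_mbps=100):
    bits_per_s = net_io_kb_s * 1000 * 8   # KB/s -> bits/s
    return 100.0 * bits_per_s / (link_mbps * 1e6)

u1 = net_utilization_pct(54.1)  # Case 1, ~0.43 % of the link
u2 = net_utilization_pct(51.3)  # Case 2, ~0.41 % of the link
```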
From the analysis in a), discuss the implications of supporting a larger catalog of books on the experience of an end-user of Bookzilla.
Answer: Based on part a), supporting a larger book catalogue (simulated by the case 2 data) increases the mean response time substantially, by about 36 % over the smaller database. Throughput decreases from roughly 8.0 to 7.5 replies per second, and the number of concurrent sessions drops from approximately 321 to 303. The average think time changes only marginally. For an end-user, the most important metric here is the mean response time, which rises noticeably with the larger database; however, at around 2 seconds it is still a relatively short wait, so its impact should mainly be felt for very large requests.
a) 1, 2, 3, 4
Apply the utilization law to compute the mean demands placed by a request on the following resources:
The values for Data 1 (DB Disk %, DB CPU %, WebApp Disk %, WebApp CPU %):
4.553785326086956, 31.16754456521739, 6.755601630434783, 76.70626195652174

The values for Data 2 (DB Disk %, DB CPU %, WebApp Disk %, WebApp CPU %):
4.97898, 83.19307049999999, 6.6787675, 70.7346175

Now calculating the demand for each resource based on the utilization law: Utilization (U) = Throughput (X) * Demand (D).

Case 1 demand results:
- WebApp CPU Demand: 0.09534514216601749
- DB CPU Demand: 0.038740956627774756
- WebApp Disk Demand: 0.008397147526702283
- DB Disk Demand: 0.0056603111432465

Case 2 demand results:
- WebApp CPU Demand: 0.09476302598519203
- DB CPU Demand: 0.11145359062101963
- WebApp Disk Demand: 0.008947531499008333
- DB Disk Demand: 0.006670329575469203
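The demand calculation can be sketched directly from the utilization law, using the throughputs computed in part a):

```python
# Utilization law: U = X * D, so D = U / X
# (U as a fraction, X in requests/s, D in seconds of service per request)
def demand_s(utilization_pct, throughput_req_s):
    return (utilization_pct / 100.0) / throughput_req_s

x1, x2 = 8.04511485472, 7.46436880467  # throughputs from part a)
d_webapp_cpu_1 = demand_s(76.70626195652174, x1)  # ~0.0953 s/request
d_db_cpu_1     = demand_s(31.16754456521739, x1)  # ~0.0387 s/request
d_db_cpu_2     = demand_s(83.19307049999999, x2)  # ~0.1115 s/request
```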
Current DB mean per-request demand for webserver is: 4.321334139487124 Big DB mean per-request demand for webserver is: 3.96798812927284
Current DB mean per-request demand for app server is: 0.8121132278730666 Big DB mean per-request demand for app server is: 0.7513495275600366
Current DB mean per-request demand for database server is: 3.7440880138776382 Big DB mean per-request demand for database server is: 10.974289339609006
Question: You will observe a slight mismatch between the total demand you calculated for a resource in 6.a and the sum of the demands placed on that resource by processes using that resource (6.b). Explain the reason for this mismatch.
Question: Compare the resource demands you computed for the Current DB and Big DB scenarios. Discuss reasons for any significant differences that you observe. Discuss whether these demands provide us any insights on the kind of additional resources needed to satisfy the planned expansion of Bookzilla.
The demands for Case 2 are higher than those for Case 1, especially the DB CPU demand. This is because the Case 2 data set is sized for the planned expansion rather than for the system Bookzilla currently runs; the data is too large for the current hardware, so the CPU and disk are heavily utilized and stressed. Since U = X * D, i.e., D = U / X, a much higher utilization at a similar throughput means a higher per-request demand.
To counteract the high utilization, Bookzilla needs to reduce the per-request demand D; they can do that with faster processors and by potentially improving load balancing between the processors.
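As a rough sizing sketch (our assumption: per-request CPU demand scales inversely with processor speed), the DB CPU demands from 6.a suggest how much faster a DB processor would be needed:

```python
# DB CPU demands from 6.a (seconds of service per request)
d_db_cpu_current = 0.038740956627774756  # Current DB
d_db_cpu_big     = 0.11145359062101963   # Big DB

# Speed-up needed for the Big DB workload to place the same per-request
# demand on the DB CPU as the current workload does (assumes demand
# scales inversely with processor speed).
speedup_needed = d_db_cpu_big / d_db_cpu_current  # ~2.9x
```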