One of the most difficult of the fundamental questions of computing is: "How long will it take for my computation to produce results?" At first blush, this seems straightforward enough: we could apply the analytic methods of algorithm analysis to count the operations and how often each is invoked, arriving at a formula for the total number of operations of each kind. We could then combine that with the actual operation times of our machine to calculate the total running time. Unfortunately, this approach does not work for real systems, because real systems are shared by many users. Resources that one user needs may be temporarily unavailable while another user holds them. The total waiting time is therefore the user's algorithm execution time plus all the queueing delays incurred while waiting in line for resources. These delays are random, and the best we can know is their probability distributions. Computer science was forced to adopt and adapt the methods of queueing theory so that it could answer the performance question for modern operating systems and networks.
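To make the last point concrete, here is a minimal sketch (not from the article) of the simplest queueing model, the M/M/1 queue, simulated with Lindley's recursion; the arrival rate `lam`, service rate `mu`, and job count are illustrative assumptions. Each job's total time is its own service time plus a random queueing delay: individual times vary widely, but their long-run average agrees with the analytic prediction 1/(mu - lam).

```python
import random

# Illustrative sketch: an M/M/1 queue (Poisson arrivals, exponential
# service, one server). The parameters below are assumptions chosen
# so that the server is busy 80% of the time (lam/mu = 0.8).
random.seed(1)
lam, mu, n_jobs = 0.8, 1.0, 100_000  # arrivals/sec, services/sec (lam < mu)

wait = 0.0            # queueing delay faced by the current job
response_times = []   # queueing delay + service time, per job
for _ in range(n_jobs):
    service = random.expovariate(mu)
    response_times.append(wait + service)
    interarrival = random.expovariate(lam)
    # Lindley's recursion: the next job waits for whatever work is
    # left over when it arrives, and never a negative amount.
    wait = max(0.0, wait + service - interarrival)

mean_r = sum(response_times) / n_jobs
print(f"simulated mean response time:  {mean_r:.2f} s")
print(f"M/M/1 prediction 1/(mu - lam): {1 / (mu - lam):.2f} s")
```

Running this shows a mean response time near 5 seconds even though each service takes only 1 second on average: the other 4 seconds are pure queueing delay, knowable in advance only as a distribution.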