Load Distributing Algorithms



A distributed scheduler is a resource management component of a distributed operating system that judiciously and transparently redistributes the system load among the individual units to enhance overall performance. Users submit tasks at their host computers for processing. The need for load distribution arises in such environments because, owing to the random arrival of tasks and their random CPU service time requirements, there is a good chance that several computers are idle or lightly loaded while others are heavily loaded, which degrades performance. In practice there is always a possibility that one server is idle while a task waits at another server.

Transfer Policy
A transfer policy determines when a node (system) is in a suitable state to participate in a task transfer. The most widely proposed transfer policies are threshold based, where the threshold T is expressed in units of load. When a new task originates at a node and the node's load exceeds the threshold T, the node becomes a sender (i.e. the node is overloaded and has additional task(s) that should be transferred to another node). Similarly, when the load at a node falls below T, it becomes a receiver.
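The threshold rule above can be sketched in a few lines. This is only an illustration: the value of T and the use of queue length as the load metric are assumptions, not part of any specific system.

```python
# Hypothetical sketch of a single-threshold transfer policy.
# THRESHOLD (T) and the queue-length load metric are illustrative
# assumptions; real systems tune both per workload.

THRESHOLD = 5  # T, in units of load (here, tasks queued at the node)

def classify_node(queue_length: int) -> str:
    """Classify a node as sender, receiver, or neutral under threshold T."""
    if queue_length > THRESHOLD:
        return "sender"    # overloaded: has task(s) to transfer away
    elif queue_length < THRESHOLD:
        return "receiver"  # underloaded: can accept transferred tasks
    return "neutral"

print(classify_node(8))  # a node with 8 queued tasks is a "sender"
```

A node sitting exactly at T is treated here as neither sender nor receiver, which avoids tasks bouncing back and forth at the boundary.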

Selection Policy

A selection policy determines which task in the node (selected by the transfer policy) should be transferred. If the selection policy fails to find a suitable task, the transfer procedure is stopped until the transfer policy indicates that the node is again a sender. There are two approaches, viz. pre-emptive and non-pre-emptive. The non-pre-emptive approach is simple: the newly originated task that caused the node to become a sender is selected for migration. However, this is often not the best choice, because the overhead incurred in transferring a task should be compensated for by the reduction in response time realised by that task. Other factors also matter: first, the overhead incurred by the transfer should be minimal (a task of small size carries less overhead), and second, the number of location-dependent system calls made by the selected task should be minimal. This property of location dependency is called location affinity: such tasks must be executed at the node where they originated because they use resources, such as windows or a mouse, that exist only at that node.
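A selection policy along these lines can be sketched as follows. The `Task` fields (size, location dependence) are illustrative assumptions chosen to mirror the criteria above, not a real API.

```python
# Hedged sketch of a selection policy: among the sender's queued tasks,
# prefer a small task (low transfer overhead) that makes no
# location-dependent system calls (no location affinity).
# The Task fields are illustrative assumptions, not a real API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    name: str
    size_kb: int              # transfer overhead grows with task size
    location_dependent: bool  # True => must run where it originated

def select_task(queue: list) -> Optional[Task]:
    """Return the cheapest transferable task, or None if none qualifies."""
    candidates = [t for t in queue if not t.location_dependent]
    if not candidates:
        return None  # transfer abandoned until the node is a sender again
    return min(candidates, key=lambda t: t.size_kb)
```

Returning `None` corresponds to the case described above where no suitable task exists and the transfer procedure stops.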

Load distribution algorithms can be categorized according to the taxonomy introduced by Casavant and Kuhl.

Classification According to Approach

Load distribution algorithms can be classified as static, dynamic or adaptive. Static schemes use a priori information about the system to decide how load is distributed from one server to another. The disadvantage of this approach is that it cannot exploit short-term fluctuations in the system state to improve performance, because static algorithms do not collect state information. These algorithms are essentially driven by graph theory or by mathematical programming aimed at finding an optimal schedule with a minimum cost function. Unfortunately, Gursky has shown that the problem of finding an optimal schedule for four or more processing elements is NP-hard. Dynamic scheduling algorithms collect system state information and make scheduling decisions based on it. They incur the extra overhead of collecting and storing state information, but they yield better performance than static ones. Dynamic load distribution for homogeneous systems was studied by Livny and Melman, and the scenario of a task waiting at one server while another server is idle was termed the "wait and idle" (WI) condition. Significantly, for a distributed system of 20 nodes with a system load between 0.33 and 0.89, the probability of the WI state is greater than 0.9. Thus, at typical system loads there is always potential for performance improvement, even when nodes and process arrival rates are homogeneous. Load sharing for large, heterogeneous systems was studied in Utopia. Adaptive load balancing algorithms are a special class of dynamic load distribution algorithms that adapt their activity by dynamically changing the parameters of the algorithm to suit the changing system state.
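One way an adaptive algorithm might change its parameters is to scale the transfer threshold with the observed system load, so that fewer futile transfers are attempted when every node is busy. The specific adjustment rule below is an assumption chosen purely for illustration.

```python
# Illustrative sketch of an adaptive policy: the transfer threshold T
# is raised when the whole system is busy (transfers rarely help then)
# and lowered when it is lightly loaded (transfer more eagerly).
# The scaling rule and cut-off loads are assumptions.

def adaptive_threshold(base_t: int, avg_system_load: float) -> int:
    """Scale the threshold T with the observed average system load."""
    if avg_system_load > 0.8:       # most nodes busy: raise T
        return base_t * 2
    if avg_system_load < 0.3:       # many idle nodes: lower T
        return max(1, base_t // 2)
    return base_t                   # moderate load: keep T unchanged
```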

Pre-emptive and Non pre-emptive Type

A pre-emptive transfer involves transferring a task that has partially executed. Such transfers are expensive because the state of the task must also be transferred to the new location. Non-pre-emptive transfers involve tasks that have not yet begun executing. For a system that experiences wide fluctuations in load and has a high cost for migrating partly executed tasks, non-pre-emptive transfers are appropriate.

Load Sharing and Load Balancing

Although both types of algorithm strive to reduce the likelihood of the unshared state, i.e. the wait-and-idle state, load balancing goes a step further by attempting to equalize the loads at all computers. Because a load balancing algorithm performs more task transfers than a load sharing algorithm, the higher overhead it incurs may outweigh its potential improvement.

Initiation Based

In general, algorithms are also categorized by which node initiates the load distribution activity. The variations are sender-initiated, receiver-initiated and symmetrically initiated (by both sender and receiver). A sender-initiated algorithm was studied by Eager et al., and receiver-initiated and symmetrically initiated algorithms have also been studied. Moreover, an adaptive stable symmetrically initiated algorithm and a stable sender-initiated algorithm have been put forward. All load distribution algorithms are based on one or more of the types discussed above.
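A sender-initiated location policy in the style studied by Eager et al. can be sketched as below: the sender polls a limited number of randomly chosen nodes and transfers the task to the first one found below the threshold. The poll limit, threshold and queue-length load values are assumptions for illustration.

```python
# Minimal sketch of a sender-initiated location policy: a sender polls
# up to POLL_LIMIT randomly chosen nodes and picks the first whose load
# is below the threshold T. Loads are plain queue lengths here;
# POLL_LIMIT and T are illustrative assumptions.

import random

POLL_LIMIT = 3
T = 5

def find_receiver(loads: dict, self_id: str):
    """Poll random remote nodes; return a receiver's id, or None."""
    others = [n for n in loads if n != self_id]
    for node in random.sample(others, min(POLL_LIMIT, len(others))):
        if loads[node] < T:   # polled node is below threshold: a receiver
            return node
    return None               # no receiver found: execute the task locally
```

Bounding the number of probes keeps the algorithm stable: when every node is busy, the sender gives up quickly and runs the task locally instead of flooding the system with polls.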

