Introduction to Distributed Scheduling
Scheduling refers to the execution of non-interactive processes or tasks at designated times and places around a network of computers.
Distributed scheduling refers to the chaining of different jobs into a coordinated workflow that spans several computers. For example, you schedule a processing job on computer1 and computer2, and when both finish you schedule a dependent job on computer3; that is distributed scheduling.
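The example above can be sketched with threads standing in for the three machines; the machine names and the `job` function are illustrative, not a real scheduler API:

```python
# Sketch of the workflow: run jobs for computer1 and computer2
# concurrently, then schedule the dependent computer3 job only
# after both have finished.
from concurrent.futures import ThreadPoolExecutor

def job(machine):
    # stand-in for real work on a remote machine
    return f"done on {machine}"

with ThreadPoolExecutor() as pool:
    f1 = pool.submit(job, "computer1")
    f2 = pool.submit(job, "computer2")
    results = [f1.result(), f2.result()]   # wait for both to finish
    f3 = pool.submit(job, "computer3")     # dependent job runs only now
    results.append(f3.result())

print(results)
```

In a real deployment the `submit` calls would dispatch work to remote nodes, but the dependency structure (wait for 1 and 2, then start 3) is the same.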

A locally distributed system consists of a collection of autonomous computers connected by a LAN.
In a locally distributed system, there is a good possibility that several computers are heavily loaded while others are idle or lightly loaded.
If we can move jobs around, the overall performance of the system can be maximized.
A distributed scheduler is a resource management component of a distributed operating system that judiciously and transparently redistributes the load of the system among the computers to maximize overall performance.

A distributed system may have a mix of heavily and lightly loaded nodes.

  • Tasks arrive at random intervals.
  • CPU service time requirements of tasks are also random.
  • Migrating a task to share or balance load can help.
The system may be heterogeneous in terms of CPU speed and resources, and the load may differ from node to node.
Even in a homogeneous distributed system, one node may be idle while a task waits for service at another node.
Consider a system of N identical, independent M/M/1 servers. Let P be the probability that the system is in a state in which at least one task is waiting for service while at least one server is idle.
Let ρ be the utilization of each server. We can estimate P using probabilistic analysis and plot it against system utilization.
For moderate system utilization, the value of P is high, i.e. at least one node is idle while tasks wait elsewhere. Hence, performance can be improved by sharing tasks.
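Under the independence assumption, P has a closed form by inclusion-exclusion over the per-server steady-state probabilities; the sketch below evaluates it for a few utilization values (the formula and the choice N = 20 are illustrative):

```python
# Estimate P = Pr(at least one server idle AND at least one task waiting)
# for N independent M/M/1 servers, each with utilization rho.
# Per-server steady-state probabilities (queue length n):
#   idle (n = 0):                1 - rho
#   busy, nothing waiting (n=1): rho * (1 - rho)
#   at least one waiting (n>=2): rho**2
# Inclusion-exclusion over the independent servers gives:
#   P = 1 - Pr(no idle) - Pr(no waiting) + Pr(no idle and no waiting)

def prob_idle_and_waiting(n_servers, rho):
    p_no_idle = rho ** n_servers                  # every server busy
    p_no_wait = (1 - rho ** 2) ** n_servers       # no server has a waiting task
    p_neither = (rho * (1 - rho)) ** n_servers    # every server holds exactly one task
    return 1 - p_no_idle - p_no_wait + p_neither

if __name__ == "__main__":
    for rho in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(f"rho={rho:.1f}  P={prob_idle_and_waiting(20, rho):.3f}")
```

Running this shows the effect claimed above: at moderate utilization (e.g. ρ = 0.5 with 20 servers) P is very close to 1, so some node is almost always idle while work waits elsewhere.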

What is performance? Average response time.


Load on a system/node can correspond to the queue length of tasks/ processes that need to be processed.

Queue length of waiting tasks: proportional to task response time, hence a good indicator of system load.

Distributing load: transfer tasks/processes among nodes.

If a task transfer (from another node) takes a long time, the node may accept more tasks during the transfer time.

This causes the node to become highly loaded and hurts performance.

Solution: artificially increment the queue length when a task is accepted for transfer from a remote node (to account for the anticipated increase in load).

What if a task transfer fails? Use timeouts: if the task does not arrive within the timeout, undo the artificial queue-length increment.
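The reservation-plus-timeout idea can be sketched as follows; the class and method names are hypothetical, not taken from any real system:

```python
# Sketch: a node's load indicator that counts accepted-but-not-yet-
# arrived transfers as load, and releases the reservation on timeout.
import threading

class NodeLoad:
    """Track load as queue length plus reservations for in-flight transfers."""

    def __init__(self):
        self._lock = threading.Lock()
        self.queue_len = 0   # tasks actually queued locally
        self.reserved = 0    # accepted transfers that have not yet arrived

    def effective_load(self):
        # load reported to other nodes includes pending transfers
        with self._lock:
            return self.queue_len + self.reserved

    def accept_transfer(self):
        # artificially bump load as soon as we agree to receive a task
        with self._lock:
            self.reserved += 1

    def transfer_arrived(self):
        # the task arrived: convert the reservation into real queue length
        with self._lock:
            self.reserved -= 1
            self.queue_len += 1

    def transfer_timed_out(self):
        # the sender never delivered: release the reservation
        with self._lock:
            self.reserved -= 1
```

A timer per accepted transfer would call `transfer_timed_out` if `transfer_arrived` has not fired in time; that plumbing is omitted here.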

Types of Algorithms

Static load distribution algorithms: Decisions are hard-coded into the algorithm using a priori knowledge of the system.

Dynamic load distribution: use system state information such as task queue length, processor utilization.

Adaptive load distribution: adapt the approach based on system state. For example, dynamic distribution algorithms collect load information from nodes even at very high system loads.

Load information collection itself can add load on the system as messages need to be exchanged.

Adaptive distribution algorithms may stop collecting state information at high loads.
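A minimal sketch of that adaptive policy, assuming an illustrative utilization cutoff: the node simply suppresses load-information collection once its own load is high, so state-exchange messages do not add to an already loaded system:

```python
# Hypothetical adaptive policy: stop collecting state at high load.
HIGH_LOAD_CUTOFF = 0.8  # illustrative threshold, not from any standard

def should_collect_state(own_utilization):
    """Collect load information from peers only while this node's own
    utilization is below the cutoff; at high load, stop collecting so
    the exchange messages themselves do not add further load."""
    return own_utilization < HIGH_LOAD_CUTOFF
```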

Balancing vs. sharing

Load balancing: Equalize load on the participating nodes.

Transfer tasks even if a node is not heavily loaded, so that queue lengths on all nodes are approximately equal.

A larger number of task transfers, which might degrade performance.

Load sharing: Reduce burden of an overloaded node.

Transfer tasks only when the queue length exceeds a certain threshold.

Fewer task transfers.

Anticipatory task transfer: transfer tasks from overloaded nodes to ones that are likely to become idle or lightly loaded.

This is more like load balancing, but may require fewer transfers.
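The contrast between the two policies can be sketched as a toy simulation over queue lengths; the threshold value and the one-task-at-a-time transfer rule are illustrative assumptions:

```python
# Toy comparison of load sharing vs. load balancing on a list of
# per-node queue lengths. Each transfer moves one task from the
# most-loaded node to the least-loaded node.
THRESHOLD = 4  # illustrative: queue length above which a node is overloaded

def sharing_transfers(queue_lens):
    """Load sharing: transfer only while some node exceeds THRESHOLD."""
    loads = list(queue_lens)
    transfers = 0
    while max(loads) > THRESHOLD:
        loads[loads.index(max(loads))] -= 1
        loads[loads.index(min(loads))] += 1
        transfers += 1
    return loads, transfers

def balancing_transfers(queue_lens):
    """Load balancing: transfer until queue lengths differ by at most one,
    regardless of any overload threshold."""
    loads = list(queue_lens)
    transfers = 0
    while max(loads) - min(loads) > 1:
        loads[loads.index(max(loads))] -= 1
        loads[loads.index(min(loads))] += 1
        transfers += 1
    return loads, transfers
```

On queue lengths `[5, 0, 1]`, sharing stops after a single transfer (the overloaded node drops to the threshold), while balancing keeps going until every queue is equal, which takes three transfers: the extra transfers are the cost of balancing noted above.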

