
Differences

There is a fundamental difference between the two theoretical models from which Proportional Share and Constant Bandwidth derive: comparing Equation (4) with Equation (6), we can see that a proportional share scheduler tries to keep the ratio $\frac{executed(t_0,t_1)}{t_1 - t_0}$ constant over any time interval $[t_0,t_1]$, while a constant bandwidth scheduler keeps this ratio constant only between deadlines.
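To make this distinction concrete, the following sketch (an illustration added here, with made-up traces rather than schedules from the paper) compares a PS-like trace, which interleaves two equal-share tasks at every time unit, against a CBS-like trace, which lets each task run in one block within its deadline window. Over the whole window both grant the same total service, but only the PS-like trace keeps the ratio close to the share over every sub-interval:

```python
# Two tasks, each entitled to 50% of the CPU over a 10-unit window.
ps_trace = [0, 1] * 5           # PS-like: alternate every time unit
cbs_trace = [0] * 5 + [1] * 5   # CBS-like: run to completion per window

def share(trace, task, t0, t1):
    """Fraction of [t0, t1) during which `task` executed."""
    window = trace[t0:t1]
    return window.count(task) / (t1 - t0)

# Over the whole window, both traces give task 0 exactly half the CPU...
print(share(ps_trace, 0, 0, 10), share(cbs_trace, 0, 0, 10))  # 0.5 0.5
# ...but over a sub-interval, only the PS trace keeps the ratio near 0.5.
print(share(ps_trace, 0, 0, 4), share(cbs_trace, 0, 0, 4))    # 0.5 1.0
```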


  
Figure 2: Examples of Proportional Share and Constant Bandwidth schedules
[Figure: cfr.eps]

We believe that imposing a fair execution over any time interval overconstrains the system, since in most soft real-time or multimedia applications it is important to schedule tasks before their deadlines, not to schedule them in a fair way. For example, in a video or audio player it is important to decode a frame before the start of the next frame, not to decode it fairly. A Proportional Share scheduler must schedule a task in this fair way to respect its QoS requirements because the scheduler does not know important task parameters such as the period (or the deadline).

Another fundamental difference between the Proportional Share paradigm and the Constant Bandwidth paradigm is that the former allocates the resources in time quanta, while the latter does not require this (it is fully preemptive). This makes Proportional Share easier to implement in a conventional operating system (such as Unix or Windows), whose scheduler is quantum-based, but introduces some limitations: for example, tasks can be activated only at quantum boundaries, and tasks' periods must be multiples of the quantum size Q. Moreover, a quantum-based allocation algorithm causes an overhead (an interrupt is generated at each quantum boundary) and a larger allocation error. This error results in the lag: if the lag is bounded, it is possible to perform a hard deadline guarantee at the cost of some wasted bandwidth. The admission test becomes

\begin{displaymath}\sum_{\tau_i \in \Gamma_{Hard}} \frac{C_i + \max\{lag\}}{T_i} \leq 1\end{displaymath}

so, since the lag is bounded by the quantum size Q, the allocation error introduces a factor $\frac{Q}{T_i}$ for each hard task.
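As a sketch of this admission test (the function name and the task parameters below are ours, not from the paper), the check amounts to charging each hard task its worst-case execution time inflated by the lag bound:

```python
def admits_hard(tasks, max_lag):
    """Admission test for hard tasks under a quantum-based scheduler:
    each task tau_i is charged C_i + max{lag} over its period T_i,
    and the inflated utilization must not exceed 1."""
    return sum((c + max_lag) / t for c, t in tasks) <= 1.0

# Three hard tasks as (C_i, T_i) pairs, in ms, with lag bounded by Q = 10 ms.
tasks = [(20, 100), (30, 150), (10, 50)]
print(admits_hard(tasks, max_lag=10))   # True:  inflated utilization ~ 0.97
print(admits_hard(tasks, max_lag=30))   # False: inflated utilization   1.70
```

The wasted bandwidth mentioned above is visible here: the same task set has a plain utilization of only 0.6, but the lag term pushes the charged utilization close to 1.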

The fairness property enforced by Proportional Share schedulers generates a larger number of preemptions, as shown in Figure 2, and the performance of the system can be penalized by the additional preemption overhead (in a real system, context switches take some time). In general, a Proportional Share algorithm causes a preemption at each quantum boundary: if $\delta$ is the time needed to perform a context switch, the context-switch overhead can be expressed as $\frac{\delta}{Q}$, so the admission test becomes

\begin{displaymath}\sum \frac{C_i}{T_i} \leq 1 - (\max\{lag\} \sum \frac{1}{T_i} + \frac{\delta}{Q}).\end{displaymath}
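Using the same hypothetical task set as above, this test can be sketched as a direct translation of the formula, with the allocation-error term and the context-switch overhead both subtracted from the available bandwidth:

```python
def admits_ps(tasks, max_lag, delta, quantum):
    """Proportional Share admission test: the plain utilization must stay
    below 1 minus the allocation-error term max{lag} * sum(1/T_i) and
    the context-switch overhead delta / Q."""
    util = sum(c / t for c, t in tasks)
    penalty = max_lag * sum(1.0 / t for _, t in tasks) + delta / quantum
    return util <= 1.0 - penalty

tasks = [(20, 100), (30, 150), (10, 50)]   # (C_i, T_i) in ms
# Quantum Q = 10 ms, lag bounded by one quantum, 0.1 ms per context switch.
print(admits_ps(tasks, max_lag=10, delta=0.1, quantum=10))   # True, barely
```

With these (made-up) numbers the plain utilization is 0.6 while the penalty term is about 0.38, which illustrates how much schedulable bandwidth the quantum-based allocation can cost.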

Another problem with the Proportional Share paradigm is that it does not model event-driven systems intuitively. For example, a task activated by an external event (a task that manages data from the network, or that has to respond to an interrupt) can only be modeled as a periodic task, and it is not clear how to assign a weight or a share to such a task. On the other hand, using CBA, we can reserve a given bandwidth for an aperiodic event-driven task and use the server period Ts to model the expected interarrival time. Using a CBS to serve the task, it is also simple to perform a deterministic or stochastic guarantee on the response time.
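As a small illustration (the network task and all numbers are hypothetical), reserving bandwidth for an event-driven task with a CBS reduces to choosing a budget Qs and a period Ts, and admitting the resulting set of servers as long as the total reserved bandwidth does not exceed the CPU capacity:

```python
def cbs_admissible(servers):
    """A set of CBS servers, each given as a (Qs, Ts) pair, can be
    admitted as long as the total reserved bandwidth sum(Qs/Ts) <= 1."""
    return sum(q / t for q, t in servers) <= 1.0

# An aperiodic network task: ~6 ms of work per packet, expected
# interarrival 40 ms -> reserve a server with Qs = 6, Ts = 40 (15%),
# alongside two other servers already in the system.
servers = [(6, 40), (30, 100), (20, 50)]
print(cbs_admissible(servers))   # True: 0.15 + 0.30 + 0.40 = 0.85 <= 1
```

Note how the server period Ts directly encodes the expected interarrival time of the events, which is exactly the parameter a Proportional Share scheduler has no natural place for.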



1999-02-16