Background on constant bandwidth resource allocation

Dynamic real-time systems try to guarantee timing requirements by reserving for each task (at creation time) all the resources it needs for its execution. If such a guarantee cannot be provided, the task is rejected. To perform admission control, all the resources used by a task must be known a priori and its temporal requirements must be explicitly declared.

A task $\tau_i$ is viewed as a stream of jobs $J_{i,j}$, each characterized by an arrival time $r_{i,j}$ and an absolute deadline $d_{i,j}$: the correct execution of the task requires that each job finishes within its deadline. Once the admission test is passed, tasks are scheduled by absolute deadlines (Earliest Deadline First, EDF), by periods (Rate Monotonic, RM), or by relative deadlines (Deadline Monotonic, DM).
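To fix notation, the following C fragment sketches this task model and the EDF priority rule; the structure and field names are purely illustrative, not taken from any real scheduler.

  #include <stdbool.h>

  /* One job J_{i,j}: a single instance of a task's recurring work.
     Times are in abstract time units (e.g., microseconds). */
  struct job {
      long release;    /* arrival time r_{i,j} */
      long deadline;   /* absolute deadline d_{i,j} */
      long exec_left;  /* remaining execution time */
  };

  /* Under EDF, the runnable job with the earliest absolute
     deadline is selected for execution. */
  bool edf_higher_priority(const struct job *a, const struct job *b)
  {
      return a->deadline < b->deadline;
  }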

Unfortunately, some of the task parameters (for example, the jobs' execution times) cannot be exactly estimated before execution. It is impossible to predict how much CPU time a section of code will need on different machines (and the execution time can also vary because of caches, DMA, hardware pipelines, and so on). For this reason, the admission test (and the resource reservation) is performed based on declared values: if each task respects its declared requirements, each of its jobs will finish before its deadline, but if a task requires more resources than declared, the guarantee mechanism becomes useless.

Consider the CPU example: when a task is created, its Worst Case Execution Time (WCET) $C_i$ and minimum interarrival time $T_i$ have to be declared, so that an admission test can be performed (generally based on the utilization factor $U=\sum_{\tau_i \in \Gamma} \frac{C_i}{T_i}$). But if a job executes for more than $C_i$ time units (an overloaded task), the schedulability of the task set is compromised and other tasks can miss their deadlines. A safe solution is therefore to monitor the execution time of each task and stop a task when it exceeds its declared WCET.
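As an illustration, a minimal utilization-based admission test could look as follows in C, assuming EDF on a single processor; the names are hypothetical, and real systems may use finer-grained schedulability tests.

  /* Declared parameters of a task, as announced at creation time. */
  struct task_decl {
      long wcet;    /* declared worst-case execution time C_i */
      long period;  /* declared minimum interarrival time T_i */
  };

  /* Admit new_task only if the total utilization of the n already
     admitted tasks plus the new one does not exceed 1 (the EDF
     schedulability bound on one CPU). Returns 1 = admit, 0 = reject. */
  int admit_task(const struct task_decl *set, int n,
                 const struct task_decl *new_task)
  {
      double u = (double)new_task->wcet / (double)new_task->period;
      for (int i = 0; i < n; i++)
          u += (double)set[i].wcet / (double)set[i].period;
      return u <= 1.0;
  }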

Although this technique can be adopted in embedded systems (where execution times can be computed a priori), it is less attractive for general purpose operating systems (such as Windows or Unix). For this reason, quantum-based Proportional Share Allocation (PSA) is usually preferred for supporting soft real-time applications.

A solution to the problem is given by the Resource Reservation [12] and (for the CPU resource) the Processor Capacity Reserve [9] abstractions: a task is allowed to use a resource for $C_i$ time units every $T_i$; after this time the reserve is depleted and will be recharged at the next period.
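The following is a minimal sketch of the bookkeeping behind such a reserve, assuming a budget that is recharged at fixed period boundaries; the names and structure are our own, not taken from [9] or [12].

  /* A capacity reserve: the task may consume up to 'budget' = C_i
     time units per period T_i; once depleted, it is not scheduled
     again until the reserve is recharged at the next period. */
  struct reserve {
      long budget;        /* C_i: allowed execution per period */
      long period;        /* T_i */
      long remaining;     /* budget left in the current period */
      long next_recharge; /* absolute time of the next refill */
  };

  /* Charge 'ran' time units of execution against the reserve;
     returns 1 if the reserve is now depleted. */
  int reserve_charge(struct reserve *r, long ran)
  {
      r->remaining -= ran;
      return r->remaining <= 0;
  }

  /* At time 'now', refill the reserve if its period has elapsed. */
  void reserve_tick(struct reserve *r, long now)
  {
      if (now >= r->next_recharge) {
          r->remaining = r->budget;
          r->next_recharge += r->period;
      }
  }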

Constant Bandwidth allocation [1] uses a similar solution by introducing the concept of demanded bandwidth, computed on absolute deadlines. Each job $J_{i,j}$ is characterized by a request time $r_{i,j}$ and an absolute scheduling deadline $d_{i,j}$, which can vary during its execution. We define the demanded bandwidth of task $\tau_i$ as

\begin{displaymath}
B_i = \max_{t_1,t_2 : t_2 > t_1} \frac{D_i(t_1,t_2)}{t_2 - t_1}
\end{displaymath} (5)

where $D_i(t_1,t_2)$ is the time demanded by $\tau_i$ in the interval $[t_1,t_2]$ (i.e., the sum of the execution times of jobs $J_{i,j}$ having release time $r_{i,j} \geq t_1$ and served with deadline $d_{i,j} \leq t_2$). Each task is scheduled so that its demanded bandwidth $B_i$ never exceeds its reserved bandwidth $U_i$. Thus, if $\sum_{\Gamma} U_i \leq 1$, each task is guaranteed to receive at least its demanded bandwidth.
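As a simple consistency check (our own example, not taken from [1]), consider a hard periodic task that releases a job of execution time $C_i$ every $T_i$ time units, with $d_{i,j} = r_{i,j} + T_i$. The ratio in (5) is maximized by intervals spanning exactly $k$ consecutive periods, which contain $k$ complete jobs, so

\begin{displaymath}
B_i = \max_{k \geq 1} \frac{k\,C_i}{k\,T_i} = \frac{C_i}{T_i}
\end{displaymath}

that is, for periodic tasks the demanded bandwidth reduces to the classical utilization factor used in the admission test above.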

In [1], it is shown how to realize a Constant Bandwidth CPU allocation by using a dedicated aperiodic server (the Constant Bandwidth Server, CBS) for each task.



 