Algorithms to Live By, Part 3: Scheduling
“The secret of getting ahead is getting started. The secret of getting started is breaking your complex overwhelming tasks into small, manageable tasks, and then starting on the first one.”
– Mark Twain quoted in Getting Things Done
“If you have to eat two frogs, eat the ugliest one first. This is another way of saying that if you have two important tasks before you, start with the biggest, hardest, and most important task first.”
– Brian Tracy, Eat That Frog!
In this third installment of our Algorithms to Live By series, we are going to touch on a subject near and dear to my colleague Chris’s heart, productivity (you can check out his Not To Do List here). But we’re not going to call it “productivity.” Mostly because this much-ballyhooed concept has spawned far too many books and articles already. And the result is a barrage of conflicting advice, as in the example of the two quotes above. Part of the issue is that these books tend to speak metaphorically (few of us are employed to consume amphibians) and even the concept of “productive” is rarely defined parametrically. So let’s discuss instead a related, but far more concrete topic – task scheduling and its optimization.
The simplest scheduling problem to solve is one where you have a single machine (or just yourself as in the case of the self-help books) and a set list of tasks. If your definition of “productivity” is efficient time management, you only need to know one thing – the task order is irrelevant. So eat dessert first then the frog or whatever floats your boat.
Given these parameters, the definitive advice on scheduling comes from the King of Hearts in Alice’s Adventures in Wonderland: “Begin at the beginning, and go on till you come to the end: then stop.”
But alas, life is more complex than this simple case. Even if you have just one machine to schedule and not an office full, the wrinkle in the scenario above is that most of us will never have enough time to complete all tasks. But before we hop right along to algorithmic solutions, we must make our goals specific. By specific, I mean we need to define which parameter it is we wish to optimize.
For example, if we want to maximize the number of things we get done, the solution is easy – just order by the shortest processing time. The frogs can wait their turn to get eaten. How about if we instead prefer to minimize the maximum lateness of any task? Again, this is simple: just order by earliest due date.
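Both orderings are one-line sorts. Here is a small sketch with a made-up task list (the names, hours, and due days are illustrative):

```python
# Hypothetical tasks: (name, hours_to_complete, due_day) tuples.
tasks = [
    ("report", 4, 2),
    ("email", 1, 1),
    ("frog", 8, 5),
    ("slides", 2, 3),
]

# Maximize tasks completed: shortest processing time first.
by_length = sorted(tasks, key=lambda t: t[1])

# Minimize maximum lateness: earliest due date first.
by_due_date = sorted(tasks, key=lambda t: t[2])

print([t[0] for t in by_length])    # quick wins first
print([t[0] for t in by_due_date])  # deadlines first
```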
But a bit of interesting complexity arises if, instead of minimizing a time measure of lateness, we would prefer to minimize the number of tasks that are late. This makes sense as we would then be keeping the number of people barking at us as small as possible!
To accomplish this, we can use Moore’s scheduling algorithm. We start by queuing tasks up by earliest due date and, as soon as one is going to be late, toss out (or get help with) the longest task scheduled so far. We keep doing this until our scheduler is back to running like a Swiss train. It’s an interesting solution to keep in mind when outsourcing tasks. The first inclination may be to farm out the simplest ones, but this algorithm suggests a different approach is optimal: hand off the biggest time sinks.
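The procedure above can be sketched in a few lines. This is a minimal illustration (the helper name and task tuples are my own), assuming tasks are (name, duration, due date) triples:

```python
def moore_schedule(tasks):
    """Minimize the number of late tasks.

    tasks: list of (name, duration, due_date) tuples.
    Returns (on_time, dropped): the kept schedule in due-date
    order, and the tasks ejected for outsourcing.
    """
    scheduled = []  # tasks kept so far, in earliest-due-date order
    dropped = []
    elapsed = 0
    for task in sorted(tasks, key=lambda t: t[2]):  # earliest due date first
        scheduled.append(task)
        elapsed += task[1]
        if elapsed > task[2]:  # this task would finish late
            # Eject the longest task scheduled so far, not the newest one.
            longest = max(scheduled, key=lambda t: t[1])
            scheduled.remove(longest)
            dropped.append(longest)
            elapsed -= longest[1]
    return scheduled, dropped
```

Note that when a deadline is about to slip, the algorithm drops the longest job on the books, which is exactly why outsourcing your biggest task (rather than your easiest) is what keeps the most people happy.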
None of this, however, takes into account that tasks nearly always have differing levels of importance. Whole books get written on this subject, but it is a fairly simple thing to account for in scheduling. When ordering by task length, weight each task for priority by dividing its time by its importance. This can be as simple as setting 1 = low, 2 = medium, and 3 = high priority, or the importance could even be expressed as a monetary value for each task. This ensures that the biggest bangs for your time bucks are first in the queue. As new tasks come in, the same calculation assigns each one a clear place in line.
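This weighted rule is again a one-line sort. A sketch, with made-up tasks and the simple 1–3 importance scale mentioned above:

```python
# Hypothetical tasks: (name, hours, importance), importance on a 1-3 scale.
tasks = [
    ("expense report", 2, 1),   # low priority
    ("client proposal", 4, 3),  # high priority
    ("team update", 1, 2),      # medium priority
]

# Weighted shortest processing time: sort by time divided by importance,
# so the biggest bang per time-buck goes first.
queue = sorted(tasks, key=lambda t: t[1] / t[2])
print([t[0] for t in queue])
```

Here the four-hour high-priority proposal jumps ahead of the quick but unimportant expense report, which a plain shortest-first ordering would have put first.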
There are, however, a few caveats with the above scheme. The first one famously brought the Mars Pathfinder spacecraft to a confused, dithering halt until engineers diagnosed and fixed the issue. (Who among us has not had such days!)
The cause was a problem of priority inversion. This issue occurs when a partially completed low priority task is tying up a key system resource. The unavailability of this resource then blocks a new high priority task from starting. In this case, all that will ever get done are medium priority tasks. The low priority task will never outweigh the high priority one to make it to the front of the queue, so the system resource never gets freed up. And the high priority task can’t get done until the low priority one gets out of the way. I am sure you can think of many analogous life situations like this!
The solution here is what is called priority inheritance. If a lower priority task is blocking a higher priority one, it must assume the priority level of the task it is blocking. For example, once that funny noise coming from your engine you kept ignoring prevents you from driving anywhere, getting your car to a service station to get it fixed inherits a higher priority!
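The mechanics can be sketched in a few lines. This is a toy illustration, not Pathfinder’s actual flight software; the class and function names are mine:

```python
class Task:
    """A task with a base priority that can be temporarily boosted."""
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority
        self.priority = priority

def block_on(holder, waiter):
    """waiter needs a resource that holder is tying up.
    Priority inheritance: a low-priority holder temporarily
    assumes the priority of the task it is blocking."""
    if holder.priority < waiter.priority:
        holder.priority = waiter.priority

def release(holder):
    """Once the resource is freed, revert to the base priority."""
    holder.priority = holder.base_priority

noisy_engine = Task("fix engine noise", priority=1)
commute = Task("drive to work", priority=3)
block_on(noisy_engine, commute)
print(noisy_engine.priority)  # the engine fix now runs at priority 3
```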
The second problem can occur with the addition of new tasks to the queue. According to our algorithm, if the importance-weighted time of our new task is less than the one we are currently working on, we should switch tasks. But in an active, dynamic environment, this leads to a problem known in computer science as “thrashing.”
If our task list is constantly changing order, we will flail about ineffectually and get nothing done. This is an important concept that has been covered in many books and articles as well, under the heading of “task switching costs.”
Thrashing too has a simple algorithmic solution. Rather than switch task order for any and all tasks with a higher priority, set a percentage threshold by which the new task must exceed the current one. By using a percentage rather than a fixed increment, you ensure that very high priority tasks will not be interrupted unless it is a flaming emergency. You can also calibrate the percentage to match your own personal “boot-up” time needed to switch tasks. Another very helpful computer science concept is that of interrupt coalescing. Here, the quicker, low priority tasks are grouped together so the switching happens in batches. Tips such as looking at e-mails only a limited number of times per day are familiar examples of interrupt coalescing.
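The threshold rule can be sketched as follows. The 25% figure and the task tuples are illustrative; in practice you would tune the threshold to your own task-switching cost:

```python
PREEMPT_THRESHOLD = 1.25  # new task must be 25% "denser" to interrupt

def should_switch(current, new, threshold=PREEMPT_THRESHOLD):
    """Tasks are (name, hours, importance) tuples.
    Density is importance per hour (the inverse of the
    time-divided-by-importance ranking above); a new task only
    preempts if it beats the current one by the threshold."""
    current_density = current[2] / current[1]
    new_density = new[2] / new[1]
    return new_density > threshold * current_density

current = ("quarterly report", 8, 2)     # density 0.25
slightly_better = ("new request", 4, 1)  # density 0.25 - not enough
emergency = ("server down", 1, 3)        # density 3.0 - flaming emergency

print(should_switch(current, slightly_better))  # False
print(should_switch(current, emergency))        # True
```

A marginally better task no longer triggers a switch, while a genuine emergency still breaks through.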
What we are really trying to do here is balance our responsiveness and our throughput to the particular requirements of our work environments. Thinking about scheduling algorithmically is helpful because it provides a rational framework for setting priorities.
It is also useful to keep in mind that everyone we interact with is attempting to solve similar scheduling problems. We may get annoyed when our computers are slow to respond because of background tasks, but we don’t take it personally. Instead, we might check to see what processes are occupying more than their fair share of CPU time. Perhaps we should use the same approach with colleagues who aren’t replying as quickly as we’d like, i.e. ask how we can help!