os211

Top 10 List of Week 08

This list is in no particular order, neither of importance nor chronology. It is written in the order in which I remembered the items as I was writing this.

  1. LFS Speedrun
    Once again starting off the list with a cheat topic: the LFS speedrun. I think it’s an interesting concept because it highlights one of the key features of the Linux community: it’s as serious as it is unserious. For a lot of people, designing systems is a slow and methodical process, and this concept turns that on its head. It’s about as far against the rules as you can go in this field.

  2. Gentoo Linux
    Gentoo is another distro in a similar vein to LFS: everything must be compiled by the user, from source. It has also long been touted as one of the hardest distros to install, the key difference from LFS being that it comes with a package manager already chosen. I think it’s interesting that it exists at all, because it implies that the big step in difficulty and freedom between LFS and Gentoo is the package manager, rather than something more controversial, say, systemd.

  3. Linux Kernel Scheduler v0.01
    This article gives a brief description of the very first version of the Linux kernel’s scheduler. I think it’s interesting because of the breadth of knowledge it demands; to me it sits in a very “in-between” spot between computer organization and abstracted computing at the OS level. Moreover, it’s humbling to have the author say that this gives the impression of a toy OS, because if this is a toy OS, then I have no clue what a “serious” OS requires.
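
    If I remember the article right, the heart of that first scheduler is a single loop that scans every task and runs the one with the most ticks left, refilling everyone’s counter once they all run out. Here’s a simplified sketch of that idea in plain C (my own toy reconstruction with made-up names and sizes, not the actual kernel code):

    ```c
    #include <stdio.h>
    #include <stddef.h>

    #define NR_TASKS 64

    /* Simplified stand-in for the kernel's task bookkeeping, not the real struct. */
    struct task {
        int state;    /* 0 = runnable, nonzero = sleeping/unused */
        int counter;  /* remaining time slice, in ticks */
        int priority; /* base priority used to refill the counter */
    };

    static struct task *tasks[NR_TASKS];

    /* Pick the runnable task with the largest remaining counter.
     * If every runnable task is out of ticks, refill the counters
     * (counter = counter/2 + priority) and scan again. */
    static struct task *schedule(void)
    {
        for (;;) {
            int best = -1;
            struct task *next = NULL;

            for (int i = 0; i < NR_TASKS; i++) {
                struct task *t = tasks[i];
                if (t && t->state == 0 && t->counter > best) {
                    best = t->counter;
                    next = t;
                }
            }
            if (best > 0 || next == NULL)
                return next; /* a task with ticks left, or nothing runnable */

            for (int i = 0; i < NR_TASKS; i++)
                if (tasks[i])
                    tasks[i]->counter = tasks[i]->counter / 2 + tasks[i]->priority;
        }
    }

    int main(void)
    {
        struct task a = { 0, 0, 2 }, b = { 0, 0, 4 };
        tasks[0] = &a;
        tasks[1] = &b;

        /* Both start with 0 ticks, so the refill kicks in and the higher
         * base priority ends up with the bigger counter. */
        printf("picked the task with priority %d\n", schedule()->priority);
        return 0;
    }
    ```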

  4. Compiling LinuxFromScratch - “Speedrun”
    Here’s a submission for the idea from number 1, a speedrun of LFS. It isn’t as complete as the real thing; as the title suggests, it’s just the compilation part, and unfortunately the audio died. I think it’s interesting because the bulk of the process is just copy-pasting, which fits something I saw on a Linux forum regarding LFS, Gentoo, and Arch: LFS and Gentoo are hard to install, but it’s entirely possible to install both of them even if all you know is copy-pasting. Moreover, it’s still 4 hours long, which is insane. To my memory, the current fastest compile-and-install speedrun for Gentoo Linux is 2 minutes and 10 seconds, and even then a full explanation of a Gentoo install takes just under an hour. It really puts into perspective the difference between the two, and just how many choices one has to make when doing LFS.

  5. No. 2753: HENRY GANTT
    One diagram that I keep seeing over and over again when looking at topics related to scheduling is the Gantt chart. What’s fascinating is that this chart was never intended for operating systems or for evaluating scheduling performance, and yet it fits so well; moreover, it’s still used by people in management. This link covers the ideas of Henry Gantt, the inventor of said chart, and why it outlasted him and found use in other fields, e.g. computing. The fact that it’s considered a good way to manage people, however, just baffles me. It’s well known that people don’t behave nearly as deterministically as computer processes do, and yet there are people in management who think it’s a viable way to manage them.
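
    To make the overlap concrete, here’s a tiny toy of my own (not from the article): printing an ASCII Gantt chart for a first-come-first-served run of three made-up processes.

    ```c
    #include <stdio.h>

    /* Toy FCFS timeline: each process runs to completion in arrival order,
     * and the chart shows waiting time as spaces and running time as '#'. */
    struct proc { const char *name; int burst; };

    int main(void)
    {
        struct proc procs[] = { {"P1", 5}, {"P2", 3}, {"P3", 8} };
        int n = sizeof procs / sizeof procs[0];
        int t = 0;

        for (int i = 0; i < n; i++) {
            printf("%s |", procs[i].name);
            for (int j = 0; j < t; j++)
                putchar(' ');                 /* time spent waiting */
            for (int j = 0; j < procs[i].burst; j++)
                putchar('#');                 /* time spent running */
            t += procs[i].burst;
            printf("|  finishes at t=%d\n", t);
        }
        return 0;
    }
    ```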

  6. Inside the Windows NT Scheduler
    Here’s a counterpart to the link describing the first Linux scheduler: a look inside the NT scheduler. Like the Linux one, it’s the first in a series of posts picking that scheduler apart at an abstracted, system-description level. It’s pretty interesting to see the diverging approaches the two OSes took. However, I do wonder whether there’s some explanation for these choices out there, as I couldn’t find any myself.
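
    For contrast with the counter scan above, my understanding of the NT approach is a set of per-priority ready queues where the highest-priority non-empty queue always wins. A rough sketch of that idea (my own illustration, not code from the article):

    ```c
    #include <stdio.h>
    #include <stddef.h>

    #define NUM_LEVELS 32 /* NT uses priority levels 0-31; purely illustrative here */

    struct thread {
        const char *name;
        struct thread *next; /* link in its ready queue */
    };

    /* One ready queue per priority level. */
    static struct thread *ready_queue[NUM_LEVELS];

    static void make_ready(struct thread *t, int prio)
    {
        /* push to the front for brevity; a real queue would append to the tail */
        t->next = ready_queue[prio];
        ready_queue[prio] = t;
    }

    /* Always take a thread from the highest-priority non-empty queue;
     * threads at the same level take turns. */
    static struct thread *pick_next(void)
    {
        for (int prio = NUM_LEVELS - 1; prio >= 0; prio--) {
            if (ready_queue[prio]) {
                struct thread *t = ready_queue[prio];
                ready_queue[prio] = t->next;
                return t;
            }
        }
        return NULL; /* nothing runnable: idle */
    }

    int main(void)
    {
        struct thread worker = { "worker", NULL }, gui = { "gui", NULL };
        make_ready(&worker, 8);
        make_ready(&gui, 24);                      /* higher level wins regardless of order */

        printf("first: %s\n", pick_next()->name);  /* gui */
        printf("then:  %s\n", pick_next()->name);  /* worker */
        return 0;
    }
    ```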

  7. A Fuzzy-Based Scheduling Algorithm for Prediction of Next CPU-Burst Time to Implement Shortest Process Next
    This paper describes a possible approach to predicting CPU burst times for SJF. After reading about SJF, I kept wondering whether there’s a more effective way of doing the prediction part aside from the formulas given (simple average, exponential average, etc.). This (very) short paper (4 pages!) describes a method of doing that using fuzzy systems, i.e. by classifying tasks on a vague scale that ranges between true and false but never reaches either absolute. While it’s an interesting proposal on its own, I thought for a while about why the paper had no real implementation and suggested only simulations in cloud environments. And then it hit me just how infeasible such a system would be in a real operating system, given that it needs a knowledge base. Not to mention that it would throw yet another layer of nondeterminism into an already often-nondeterministic field, i.e. operating systems.
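
    For reference, the textbook baseline the paper is competing against is exponential averaging, predicted(n+1) = alpha * actual(n) + (1 - alpha) * predicted(n). Here’s a quick sketch of mine with a made-up burst trace, not anything from the paper:

    ```c
    #include <stdio.h>

    /* Exponential averaging of CPU burst times:
     * predicted[n+1] = alpha * actual[n] + (1 - alpha) * predicted[n] */
    int main(void)
    {
        double actual[] = { 6.0, 4.0, 6.0, 4.0, 13.0, 13.0, 13.0 }; /* made-up bursts */
        int n = sizeof actual / sizeof actual[0];
        double alpha = 0.5;       /* weight given to the most recent burst */
        double predicted = 10.0;  /* initial guess */

        for (int i = 0; i < n; i++) {
            printf("burst %d: predicted %.2f, actual %.2f\n", i, predicted, actual[i]);
            predicted = alpha * actual[i] + (1.0 - alpha) * predicted;
        }
        return 0;
    }
    ```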

  8. A novel hybrid of Shortest job first and round Robin with dynamic variable quantum time task scheduling technique
    Here’s another advanced rendition of the algorithms taught for process scheduling. This paper describes a hybrid model between SJF and RR with a dynamic quantum, specifically for cloud computing. An interesting tidbit is that, since cloud computing is relatively new (discounting people who would count timesharing as a form of cloud computing), these hybrid and possibly more resource-demanding algorithms may actually be feasible on such systems. Even the fuzzy-system approach listed above might work, provided the network linking the machines in the cloud is fast enough for the fuzzy system to evaluate all the candidate tasks across them.
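
    I haven’t tried to reproduce the paper’s exact algorithm, but the general shape of an SJF + RR hybrid with a dynamic quantum is easy to sketch: each round, recompute the quantum from the remaining bursts (here I use their mean, which is my own simplification) and give every unfinished job one slice, taken in shortest-first order.

    ```c
    #include <stdio.h>

    #define N 4

    /* Rough sketch (not the paper's algorithm) of an SJF + RR hybrid with a
     * dynamic quantum: each round, the quantum becomes the mean of the
     * remaining bursts, and every unfinished job gets one slice, served in
     * shortest-remaining-first order. */
    int main(void)
    {
        int remaining[N] = { 3, 9, 5, 7 }; /* made-up burst times */
        int t = 0;

        for (;;) {
            int sum = 0, left = 0;
            for (int i = 0; i < N; i++)
                if (remaining[i] > 0) { sum += remaining[i]; left++; }
            if (left == 0)
                break;

            int quantum = sum / left;          /* dynamic quantum for this round */
            if (quantum < 1) quantum = 1;

            int served[N] = { 0 };
            for (int k = 0; k < left; k++) {
                /* pick the unserved job with the shortest remaining burst */
                int next = -1;
                for (int i = 0; i < N; i++)
                    if (remaining[i] > 0 && !served[i] &&
                        (next < 0 || remaining[i] < remaining[next]))
                        next = i;

                int run = remaining[next] < quantum ? remaining[next] : quantum;
                printf("t=%2d: P%d runs for %d (quantum %d)\n", t, next + 1, run, quantum);
                t += run;
                remaining[next] -= run;
                served[next] = 1;
            }
        }
        return 0;
    }
    ```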

  9. Round Robin Load Balancing
    If there’s one thing the term “Round Robin” reminds me of, it’s this. This short article describes the round-robin method of load balancing for web servers. It’s pretty interesting to see how the uses in the two fields mirror each other: clients are the processes, server resources are the computing resources, and the server itself is the operating system. It really drives home the fact that RR grew directly out of timesharing systems.
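
    The server-side version is tiny at its core; a minimal sketch of my own (with made-up backend addresses) looks like this:

    ```c
    #include <stdio.h>

    /* Minimal round-robin core: hand out backends in a fixed rotation,
     * one per request, wrapping around at the end of the list. */
    static const char *backends[] = { "10.0.0.1", "10.0.0.2", "10.0.0.3" };
    static const int num_backends = sizeof backends / sizeof backends[0];

    static const char *next_backend(void)
    {
        static int current = 0;
        const char *b = backends[current];
        current = (current + 1) % num_backends;
        return b;
    }

    int main(void)
    {
        /* Seven "requests", each handed to the next backend in turn,
         * much like processes taking turns on the CPU under RR. */
        for (int req = 1; req <= 7; req++)
            printf("request %d -> %s\n", req, next_backend());
        return 0;
    }
    ```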

  10. CPU Scheduling
    And we end with a very helpful single-page summary of all the material on CPU scheduling, straight from start to finish. An interesting part is the very last line, which mentions that operating system simulations randomize conditions according to distribution functions. This technique seems to crop up a lot in operating system development, similar to fuzzing in OS security: test the system by throwing random data at it and seeing what happens. It really puts into perspective how experimental the field is, and how much of the complexity operating systems have to deal with comes from outside their own scope.
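
    A small sketch of what I take that to mean (my own example, not from the page): draw job arrivals and burst lengths from an exponential distribution and feed them to whichever scheduler you want to test.

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>
    #include <time.h>

    /* Draw from an exponential distribution with the given mean, using
     * inverse transform sampling on a uniform (0,1] random number. */
    static double exp_sample(double mean)
    {
        double u = (rand() + 1.0) / ((double)RAND_MAX + 1.0); /* avoid log(0) */
        return -mean * log(u);
    }

    int main(void)
    {
        srand((unsigned)time(NULL));

        /* Randomized workload: interarrival times and CPU bursts drawn from
         * exponential distributions, a common assumption in queueing models. */
        double t = 0.0;
        for (int i = 1; i <= 5; i++) {
            t += exp_sample(4.0);            /* mean interarrival time of 4 */
            double burst = exp_sample(6.0);  /* mean CPU burst of 6 */
            printf("job %d: arrives at %6.2f, needs %6.2f of CPU\n", i, t, burst);
        }
        return 0;
    }
    ```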

That’s it for now. See you next time!