My Best Teaching Is One-on-One

Of course, I team-teach and do special lessons, etc.

But my best work in the classroom is after the lesson is over --
going one-on-one,
helping individual students with their assignments.

It's kind of like with computer programs: walking the client through the finished product, hands-on.
The job isn't really done until the customer is using the program.

Saturday, October 25, 2014

The root problem with systemd (according to me, anyway).

I'm wasting too much time posting on this stuff, but I'm going to try to lay out my complaints against systemd and the way it has been introduced into the Linux community.

This comes from an exchange on the debian lists, where a member of the community is asking people to help debug systemd in jessie before the release.

If I could afford the hardware to replace my netbook that is no longer portable, or even if my tablet were not locked down, with its drivers legally hidden, I would be dual-booting and probing for bugs in my spare time between classes and while I'm on the train.

Not that I really have any spare time.
Someone asked me off-list if I were seeking absolution. I assume the question referred to my claim that I don't have the resources to help test.

And that seems to me to be an odd question. Why should I feel anything even approaching guilt or remorse for not wanting to jump onto this train?

But, really, why should I even want to test systemd?
That question has still not been answered well enough to justify Red Hat's completely arbitrary decision to push their community (Fedora) onto this train.

I know that the fact that they did so puts the rest of the larger Linux community in an awkward position. That the debian developer community didn't try to find a way around pushing the entire debian community onto the same train is still a significant concern to me.

systemd is not just an init, and, as an init, it takes an architectural approach that is rather disruptive.

Disrupting the fluff end of a market is one thing, but this is not a superficial change like the iPod was in the consumer market. This goes deep to the core of the community of Linux users.

Setting aside arguments about whether the architectural changes will turn out to be justified in the end, the correct thing for any distribution with a broad user base would have been to make an internal fork, similar to the fork that is kfreebsd in the debian community.

Just dumping it into testing to be processed into the mainline was irresponsible behavior on the part of Red Hat management.

I know that in Fedora they have a policy of not doing forks like kfreebsd, but this was (and will continue to be) disruptive. So disruptive that, if they weren't willing to change policy and set up the separate test community infrastructure, they should have simply refused. They could have told Poettering and his friends, "No. We are not going to sponsor your shiny new vision of an OS, nor your plan to push a shim on top of Linux so you can slide Linux out and slip your re-implementation of VMS or Multics underneath."

You know, that's really not a bad idea, developing a way to get other OSses from a bit outside the Unix traditions underneath the userland applications that have sprung up around Linux. But proper engineering methods should be used in the process.

I personally think the burden should have been on the shoulders of the freedesktop.org group to set up their own distribution. And they should have accepted that burden and not been so anxious to obtain involuntary testers. And they should have been willing to listen to constructive criticism about the design of their software, even if it hurt a bit.

That was the sane way, and the friendly way. Lots of people have set up their own private spins of Linux. It's not that hard, just takes a weekend or two. There's even the Linux-from-Scratch group that will walk beginners through the steps, although we should assume Lennart Poettering, Kay Sievers, and their friends (Did they really call themselves a cabal or were they just being sarcastic?) would not need to be walked through it.

And this is really the core of the argument:

If Red Hat had been willing to do this up-front and set up a parallel distribution (maybe call it Fedora-d) where the engineering questions could have been worked out, and where the shim they are calling an init could have been refactored more freely when the inevitable design errors were discovered, there would have been grumbling and even a few flame wars. But there would have been nothing like the push-back we have now.

And debian would have been more at liberty to set up a debian-d similar to the way kfreebsd is set up.

And the engineering problems could have been solved with engineering solutions, not with politics.

So, why should I even want to test systemd, even though someone is attempting to push me onto that train?

If I just booted Debian Jessie, I would be testing systemd. Sure.

But I don't really know if I can boot it.

Available hardware? I have one notebook that I'm using for experimenting: an old ThinkPad, 256MB total RAM, 32-bit processor, 20GB HD. It runs OpenBSD (XFCE) okay, although I'm still working out how to get one of the available IMEs to work. A five-year-old Vine Linux install was a little heavy, but it did work. It is sitting on the borderline of the hardware level recommended for Debian's current stable release, Wheezy.

Is it going to be worth wasting the time to see if Jessie runs at all on that? Maybe, but I really don't want to wipe OpenBSD off of it to do so. I've got things I need to do with that machine.

Well, sure, it would be pushing systemd beyond certain limits, into functional regions where some of the bugs I expect will definitely show up, but would my bug reports get anything more than a "fix by upgrading hardware" response? Not from what I've seen so far.

One other option: I could load a stripped-down Jessie on a 4G SD card and see how it runs on the old netbook with the broken screen, an almost reasonable processor, and 1G of RAM. But, again, wouldn't bug reports tend to get a "fix hardware" response?

It has been asserted time and time again on the debian list that, if I am not finding and reporting bugs, I am at fault for the bugs that don't get fixed before the release.


If I add up the time it has taken me to try to "comment" intelligently, and the time it will most likely continue to take, it would in fact be easier and quicker to just borrow the money for a new netbook and start digging out the more serious flaws in systemd in my spare time. I could report them to the systemd community, and post them elsewhere, where they will be seen, for the cases where the systemd community is too arrogant to deal with them (as they have been known to be in the past; see the systemd mailing list).

But why, exactly, is it supposed to be my responsibility to contribute, when I fundamentally disagree with the direction being taken?

There are some who say that, if systemd is as bad as I and others are guessing, the defects will be made obvious through testing in as many configurations as possible. And if the design itself is fatally flawed, the bugs that are found will not be trivial to fix.

(Many of the bugs so far have not been trivial. No surprise there.)

But, remember, that's what many of us said or thought about MS-DOS, and then about MS-Windows. That is precisely the attitude that allowed Microsoft to get its de-facto monopoly.

I am not saying testing is unnecessary to prove that systemd is fatally flawed. What I am saying is that, if you are not willing to look at and deal with the flaws, testing will not help.

Okay, for those of you who think you are not technically inclined, but who have gotten to this point anyway: how do you learn to find the flaws?

Learn how to program. 

Don't learn from Microsoft. They'll just teach you how to line up orders that you give to machines and people, as if giving orders and getting your way were all there is to programming.

(One of these days I'll get my Programming Is Fun sites up -- at present, on sourceforge.jp, and blogspot -- but right now there isn't much there.)

One of the primary things anyone should try to understand before trying to become involved with programming on a professional basis is that there are classes of problems for which programmed solutions are not appropriate, for both technical and ethical reasons, if not for reasons involving how we define ourselves as humans.

Monday, October 13, 2014

Thinking about an Ideal Init System for an Operating System

[JMR20170414: Posted the meat of this in my defining computers blog, here: http://defining-computers.blogspot.com/2017/04/how-proper-init-process-should-work.html.]

Just daydreaming, here. The systemd brouhaha is not dying down, of course. But proponents keep saying, "If you don't like it, fork it."

I don't particularly want to fork systemd. The so-called sysv-init was good enough until Red Hat decided that Linux had to be "management friendly" or some such really strange goal, and decided to abandon proper engineering practices to achieve their goals.

Proper engineering practice for bringing a change as drastic as systemd into an OS is to split development. (In the open source world, that's called a "fork".) Develop the stable, existing OS and the new one in parallel; let users experiment with the new one and use the stable, existing one for real work.

And keeping the stable one means you have something reasonable to compare the new one against when looking for the root causes of difficult bugs in the new system.

No, Red Hat Enterprise is not "good enough" as a stable OS. That's their product. They needed a stable Fedora and a systemd Fedora, separate from their product. Product should not participate at that level of development. (And Red Hat really hasn't used it as such.)

But that's not what this rant is about. systemd is a bad design for any and all of the functions it takes on. One of the biggest problems is that it tries to do way too much at process id 1.

What does systemd do? It is not just a replacement for the venerable sysv-init. In addition, it tries to enforce all the things that are "supposed to be happening" in a Linux OS. What the kernel does is apparently not enough. What the various processes in a traditional Linux OS do is apparently not enough.

Well, okay, management does not want to be entirely at the mercy of their IT department, and that's a reasonable desire.

If they were willing to learn enough information science to do their job of managing people correctly, they'd know enough to not be at the mercy of their IT departments, too. But in the real world, too many managers are scared of actually knowing what's happening. So they need bells and whistles on their computers so they can think they know what's happening.

There is a right way to do this, even though it's not the best goal. And it fits within the traditional way of doing things in Linux or Unix.

The right way to do it is to develop machine functions to support the desired management and reporting functions. In the *nix world, these support functions are called daemons.

Not demons, daemons.

Daemons in mythology are unseen servants, doing small tasks behind the scenes to keep the world running. (Okay, that's an oversimplification, but it's the meaning that was intended when early computer scientists decided to use the word.)

In computers, we could have called them "angels", I suppose, but the word "angel" focuses on the role of messenger more than general laborer. The word "service" is also not quite right because what the daemons do is the work to make it possible to provide the service. And the word "servant" would also cause false expectations, I think. Servants are human, and too functional.

In engineering, there is a principle of simplicity. Make your machines as simple as possible. Complexities invite error and malfunction. Too much complexity and your machine will fail, usually at very inopportune moments.

Computers are often seen as a magic way to overcome this requirement of simplicity, but the way they work is to manage the complexity in small, simple chunks. Inside a well-designed operating system, complex tasks are handled by dividing them into small pieces and using simple programs to deal with the pieces, and simple programs to manage the processes. This is the reason computer scientists wanted to use a word that could, in English, be used to evoke simple processes.

This is the right way to do things in computer operating systems. Break the problem into simple pieces, use simple programs to work on the simple pieces, and use simple programs to manage the processes.

One place where the advantage of this divide-and-solve approach shows clearly is the traditional Unix and Linux "init" process. This process starts other processes and then basically does little other than make sure the other processes are kept running until their jobs are done.
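
In code, the traditional pattern looks roughly like the following. This is a toy model only, not the source of any actual init; the "/sbin/getty" path is just a placeholder for a configured job.

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Toy model of a traditional init: start a child for each configured
       job, then loop, reaping dead children and restarting their jobs. */

    static pid_t spawn(const char *path)
    {
        pid_t p = fork();
        if (p == 0) {                      /* child: become the job */
            execl(path, path, (char *) 0);
            _exit(127);                    /* exec failed */
        }
        return p;
    }

    int main(void)
    {
        const char *job = "/sbin/getty";   /* placeholder job */
        pid_t child = spawn(job);

        for (;;) {
            pid_t dead = wait(NULL);       /* also reaps orphans re-parented to pid 1 */
            if (dead == child)
                child = spawn(job);        /* keep the job running */
        }
    }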

So, when we think we are solving hard and complicated problems with computers, we should understand that the computer, when programmed correctly, is usually seeing those problems as a simple, organized set of simple problems and using simple techniques to solve them one at a time, very quickly.

(There are some exceptions to this rule, problems that programmers have not figured out how to divide up into simpler problems, but when we fail to use the divide-and-solve approach in our programs, our programs tend to fail, and fail at very inopportune moments.)

Now, if I were designing an ideal process structure for an operating system, here's what I would do:

Process id 1, the parent to the whole thing outside the kernel, would

  • Call the watchdog process startup routine.
  • Call the interrupt manager process startup routine.
  • Call the process resource recycling process startup routine.
  • Call the general process manager process startup routine.
  • Enter a loop in which it 
    • monitors these four processes and keeps them running (who watches the watchdog? pid 1 does, using status and maintenance routines for each);
    • collects orphaned processes and passes their process records to the resource recycler process;
    • and checks whether the system is being taken down, exiting the loop if it is.
  • Call the general process manager process shutdown routine.
  • Call the process resource recycling process shutdown routine.
  • Call the interrupt manager process shutdown routine.
  • Call the watchdog process shutdown routine.
All other processes, including daemons, would be managed by the process manager process.
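
Here is a minimal sketch of that structure in C, under the assumption that the startup, shutdown, and hand-off routines are hypothetical stand-ins (the daemon bodies below do nothing but wait). The point is the shape of the loop, not the contents of the daemons:

    #include <signal.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static volatile sig_atomic_t shutting_down = 0;
    static void request_shutdown(int sig) { (void) sig; shutting_down = 1; }

    /* Placeholder body; a real watchdog, interrupt manager, resource
       recycler, or process manager would do actual work here. */
    static void daemon_body(void) { for (;;) pause(); }

    static pid_t start(void (*body)(void))  /* hypothetical startup routine */
    {
        pid_t p = fork();
        if (p == 0) {
            signal(SIGTERM, SIG_DFL);      /* children die on SIGTERM */
            body();
            _exit(0);
        }
        return p;
    }

    static void stop(pid_t p)              /* hypothetical shutdown routine */
    {
        kill(p, SIGTERM);
        waitpid(p, NULL, 0);
    }

    /* Hypothetical: pass an orphan's process record to the recycler. */
    static void hand_to_recycler(pid_t orphan, int status)
    { (void) orphan; (void) status; }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = request_shutdown;  /* "system going down" signal; */
        sigaction(SIGTERM, &sa, NULL);     /* no SA_RESTART, so wait() returns */

        pid_t watchdog = start(daemon_body);
        pid_t intr_mgr = start(daemon_body);
        pid_t recycler = start(daemon_body);
        pid_t proc_mgr = start(daemon_body);

        while (!shutting_down) {
            int status;
            pid_t dead = wait(&status);
            if (dead < 0)
                continue;                  /* interrupted by a signal */
            if      (dead == watchdog) watchdog = start(daemon_body);
            else if (dead == intr_mgr) intr_mgr = start(daemon_body);
            else if (dead == recycler) recycler = start(daemon_body);
            else if (dead == proc_mgr) proc_mgr = start(daemon_body);
            else hand_to_recycler(dead, status);   /* collect orphans */
        }

        stop(proc_mgr);                    /* shut down in reverse order */
        stop(recycler);
        stop(intr_mgr);
        stop(watchdog);
        return 0;
    }

Notice that pid 1 only ever touches these four processes and the orphans the kernel hands it; everything else would be the process manager's problem.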

systemd and traditional init systems manage ordinary processes directly. I'm pretty sure it's more robust to have them managed separately from pid 1. Thus the separate process manager process.

There are a few more possible candidates for being managed directly by pid 1, but you really don't want anything managed directly by pid 1 that doesn't absolutely have to be. Possible candidates that I need to look at more closely:
  • A process/daemon I call the rollcall daemon, which tracks daemon states and dependencies at runtime (see the sketch after these lists).
  • A process/daemon to manage signals, semaphores, and counting monitors. But non-scalar interprocess communication managers definitely should not be handled by pid 1.
  • A special support process for certain kinds of device managers that have to work closely with hardware.
  • A special support process for maintaining system resource checksums.
The processes I consider ordinary, to be managed by the general process manager instead of pid 1, are basically everything else, including
  • Socket, pipe, and other non-scalar interprocess communication managers,
  • Error logging system managers,
  • Authentication and login managers,
  • Most device manager processes (including the worker processes supported by the special support process I might have the pid 1 process manage),
  • Processes checking and maintaining system resource checksums,
  • Etc.
Definitely, code to deal with SELinux, Access Control Lists, cgroups, and that kind of fluff, should be managed by ordinary processes managed by the general process manager.
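
To make the rollcall daemon idea a little more concrete, here is a small hypothetical sketch of the kind of state table it might keep, with a check that a daemon's dependencies are all up before it is allowed to report ready. The daemon names, states, and table layout are all invented for illustration:

    #include <stdbool.h>
    #include <stdio.h>

    enum state { DOWN, STARTING, UP };

    struct entry {
        const char *name;
        enum state  state;
        int         deps[4];    /* indexes into the table; -1 ends the list */
    };

    /* Invented example daemons; a real table would be built at runtime as
       daemons report in ("roll call") and declare their dependencies. */
    static struct entry table[] = {
        { "netconfig", UP,       { -1 } },
        { "logger",    UP,       { -1 } },
        { "mailer",    STARTING, { 0, 1, -1 } },  /* needs netconfig, logger */
    };

    /* A daemon may report ready only when all its dependencies are up. */
    static bool deps_ready(int who)
    {
        for (int i = 0; table[who].deps[i] >= 0; i++)
            if (table[table[who].deps[i]].state != UP)
                return false;
        return true;
    }

    int main(void)
    {
        int mailer = 2;
        if (deps_ready(mailer)) {
            table[mailer].state = UP;
            printf("%s: dependencies satisfied, reporting up\n",
                   table[mailer].name);
        }
        return 0;
    }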

Well, this is daydreaming. I am not in a position where I have the time or resources to code it up.

So, to you who tell me to shut up about systemd and make my own Linux distribution, I'd love to. You wanna pay my wages and buy me the hardware so I can?

Sunday, October 12, 2014

Two Posts on the /usr Merge Relative to systemd That Were Filtered on debian-user

The debian-user list is not moderated, but sometimes they shut down threads that get out of hand.

I responded to two posts on a thread on the topic of the /usr merge, but my posts never made it to the list. I'm feeling stubborn, so I'm posting my posts here. But I don't want to mess with copyright issues, so I'll paraphrase what Hans and Jonathan said instead of quoting them.

I include links to their posts in the archives so you can decide if my interpretation of their posts is accurate.

[Now that I have the thing posted here and look back over it, I don't think I've interpreted Jonathan's post correctly, and I think I've dragged my responses out waaaaaay too long. (Leaving this here in case my posts ever make it through whatever tarpit the debian list server sent them to.)

Short version: So what if the recent /usr merge being pushed by the guys who bring us systemd is just another in a long line of attempts at removing a poorly understood organizational division? So what if we have big hard drives? 

The solution is not to abandon efforts to organize things. Rather, we should take a cue from such projects as Gobolinux (they aren't the only ones) and go in the other direction -- even more separation.

You should only read the following if you have time to waste.]

================================


(1) systemd and initrd and /usr [Questions by Hans.]

--------------------------------
Warning -- I do not use systemd.

> [Asking how to mount /usr and /usr/local to initrd]

I'll leave that for someone else.

> [The problem involves a mapped, encrypted /usr partition. There is a question as to why systemd requires /usr and /usr/local to be the same directory, flying in the face of traditional unix practices of putting important directories in their own partitions, with the comment that the tradition is as old as SCSI disk drive technology. Concern that systemd is intended to sweep away the traditional principle of partition separation.]

Partially [I mean, about changing the partitioning practices]. These might help:

https://news.ycombinator.com/item?id=3519952
http://wiki.gentoo.org/wiki//usr_move

You can find more by searching the web for things like "/bin /usr/bin merge" or "/usr merge".

(For me, it's a very depressing tale, the lame leading the blind, and all of us being dragged into the ditch, so to speak, but that's supposedly my personal opinion.)

> [And what about /var and /home ?]

They weren't stupid enough to require /home in the same partition.
/var, I'm not sure about. [After checking, yes, /var can also remain separate, with some caveats.] Some of what I read indicates that when they were slapped in the face with their idiocy by the problems of logging when /var couldn't be mounted, they chose to ignore reality. Which is just as well, since they now aren't trying too hard to force everyone into the world of one big partition.

> [Request for explanation of reasons and methods.]

I'll leave that to someone who agrees with new policy.

==========================


(2) systemd and initrd and /usr [Response by Jonathan.]

--------------------------------
My reply to Hans didn't make it to the list. Wonder why. Wonder if my comments on Jonathan's response will make it. [Neither have made it as I post this.]

>> [Question about /usr and /usr/local and primary directories being mounted  separately, see above.]

> [Assertion that something about the portrayal of separation as traditional was wrong.]

I'd say partially wrong. Or partially right concerning some, less right concerning others. At least, I think Hans is talking indirectly about the sizes of hard drives.

Anyway, there were many people involved and they were not all doing the same things at the same time.

> [Assertion that the /usr split was derived from diskspace limitations.]

Or something. People like to organize things for some reason, and sometimes they use incidental divisions as their organizational dividers.

Consider the tendency during the '80s to try to use the accidental segmentation of the 8086 to organize software in a sort of modularization. Of course, if you didn't happen to be in the management sessions where this was promoted as a "feature" of the 8086, and/or the later sessions where the use of long pointers had to be explained, and/or the debugging sessions where coders tried to explain to managers that you basically had to load both parts of the segmented pointer on every use to avoid obscure bugs, you may not be familiar with the intended example.

But I've sure seen it a lot. Come to think of it, when inheriting a desk, I often used to leave the dividers the previous user of the desk left as they were, thinking it might save me the time of setting up my own dividers.

> [Implied suggestion that we have been stuck with the split since then ...]

I wonder why it would seem to have stuck, considering that (as you point out) merges have been attempted so many times.

It hasn't really "stuck", of course, except as one of several default practices.

> [... implied assertion that the purpose of /usr was for holding login user account home directories that there would otherwise not be room enough for, pointing to the name as evidence ...]

Uh, no. There were many different people doing many different things, remember.

The history you are familiar with seems to be one thing some people were doing.

I'm familiar with a different history. Others are familiar with other histories.

However, the point you make further down, that the current use is full of historical and functional inconsistencies, is well made. That you seem to suggest that the merge is a good thing is something I'll take exception to.

> [..., assertion that there is no more need ...]

I think the real reasons have not vanished at all; they have only become more pronounced.

> [... because disks are big enough now, and login directories are now pretty much moved to /home or /user or /u , and the split occurred way before SASI/SCSI anyway.]

You've lost me a little on your time line. Mixing the time line and (probably unintentionally) cherry-picking examples may be what leads you to the conflation of /usr and /Users.

> [Lamenting that some people don't understand history.]

Sad? Maybe.

Little to no awareness? Sure. I don't know anyone who knows the whole history.

There was lots of history, lots of different shops doing lots of different things.

In the early days, we didn't have the internet, or even its predecessor, connecting most of the people who were using unix, the way we did in the 1980s and later. People got their information from the vendor or support group, from conferences (once a year, maybe), from magazines, and from private discussions and correspondence.

> [A reference to Stephen Coffin, in UNIX System V Release 4: The Complete Reference (1987) observing that /bin has become a link to /usr/bin , and /lib a link to /usr/lib in Unix System 5 Release 4, and that the change was part of changing the file system structure in Release 4.]
 
Which was a bit of a controversial release, and the book was also a bit controversial. Not even everyone who used release 4 thought that book was worth the paper it was printed on. Others thought it was wonderful, IIRC.
But such is the substance of history books (as opposed to the substance of history, itself).

> [Lamenting that some people think the idea of merging is of recent invention.]
 
Sad? I don't know. Not unusual, sure. Not exactly optimal, under some definitions of optimal, yeah.

Avoidable confusion? No.

Preventable confusion? No.

Should we all learn more from history? Absolutely.

> [Discussion of history of merges, The 2010 merge in Solaris 11 being pointed to as precedent to the recent /usr merge in the Linux community, but Solaris 11 being preceded by at least 20 years.]

Evidence that people who don't learn from history tend to repeat it, I guess?

> [Assertion that the motivation for the merge is that /usr is no longer on a separate volume, ...]
 
You mean, because we no longer need to keep our volumes small, I think.

> [... or at least will be mounted early enough that programs, libraries, and other resources in /usr will be available in time for the boot process to use them, ...]

Which early mounting is, as we all know, difficult to achieve in practice, because of the differences in vendor/distribution (both hardware and software) implementations.

> [... and then pointing out that the boot process in the initramfs must indeed explicitly mount /usr if it is in a separate partition.]

Because mount order and timing is so hard to enforce across all hardware/software combinations.

> [Declining to further explain, deferring to other's explanations:
 * https://fedoraproject.org/wiki/Features/UsrMove 
 * http://www.freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge/
]

Which is an approach to the problem taken by people with a specific goal, and ignores a lot of issues they consider side-issues. (If they acknowledge the issues at all.)

But I think they (and you?) are focusing on the size limitation motivation for the separation and ignoring the need for structure and organization.

> [Pointing out that the opinion was expressed as early as the early 1980s, that dividing things between the root file system and the /usr file system was a mess resulting from the historical accident ...]

Would it be wrong to say, rather, "... some people were under the impression that /usr was a mess caused by historical accident"?

> [Reference to a 1986 Unix system administration book by David Fielder ...]

David Fiedler, for those who want to look him up. (My fingers get crossed when I'm typing fast, too. :-/)

> [... and Bruce Hunter, who gave up on trying to explain the division, calling it a "hodge-podge" "overflow" "grab-bag". Also invoking a computer historian, Rob Landley, recently agreeing with Fiedler and Hunter:
* http://lists.busybox.net/pipermail/busybox/2010-December/074114.html
]
 
The nice thing about experts is you can usually find one to support the point of view you have. :-/

I look and find others who describe the organization as somewhat arbitrary, accidental, and so forth, but who stop short of calling it miscellany.

> [Assertion that the /usr merge is not just Lennart Poettering's baby, neither merely a feature of systemd.]

You mean that it is not particularly original with the systemd project or with that project's founder, Lennart Poettering. And you're right about that. But it is one of the things that exists within the broader agenda of the loose group of projects that includes systemd.

> [Pointing out that people are on record as thinking it inconvenient and messy the year Poettering was born, and that failing to fix it is tantamount to failing to learn from history.]

Some people see a mess where others see incomplete structure. Or historical organization, as the case may be.

My timeline is a bit different from yours, by the way. From memory, and not properly consulting my records/sources:

** late 1960s ~ early 1970s:

In early unix systems, some integrators and end-users were separating binaries into /bin and /sbin by whether they were statically linked (and thus could be used early in the boot process) or not. /sbin was mounted earlier than /bin in many cases, but the two were on the same disk or partition in other cases. Also, if independent drives could be read concurrently, it was convenient to have executables and libraries spread across more than one drive.

It was said (by many) to be more convenient to always separate the binaries so you could mount them separately when you needed to, which, given the disk sizes of those days, was not too rare.

[This is not a problem of size, but of organization, although the problem of organization can currently be ignored somewhat if the boot volume is large enough to contain both the root file system, with /bin and /lib et al., and /usr and all that is under it.]

(Note that dynamic linking was not available for the earliest unix systems. Also note that "vendor" in this context most often means a department within Bell, rather than Bell itself.)

(Also note that many of the patterns and practices used in Unix were inherited from extant non-Unix OSses that we don't hear of much anymore, so what I'm describing here is naturally mixing part of unix history with the history of those other OSses.)

/usr was not present on all unix systems during much of this time.

Where it was present, it was, in fact, sometimes used as the home directory for user accounts, when the system integrator or whoever was setting it up wanted to collect all login directories under a single tree.

But it was more often intended as a place to localize all the customizations for a customer ("user", in the broader senses of end-user, most often a department within Bell). (If you're saying, "Wait! Wasn't that /opt or /usr/local?", slow down. You're jumping the gun. That was in a later iteration.)

(Sometimes, people who put login directories directly under /usr were chided for confusing "usr" for "user", in a possibly misguided effort to get them to understand the difference between "end-user" stuff and "user account" stuff.)

IIRC, there were floppy-only systems that had root, including /sbin, on one floppy, and /bin on another. And they might have /usr on a third.

** mid ~ late 1970s:

We actually had people talking about "mature unix systems" already. Heh.

Much of the root tree structure was becoming common, if not exactly standard -- /bin , /sbin , /lib , /include (!!), /etc , and so forth. Vendors had their own quirks, and directory names were not exactly reliable indications of what they contained.

But /usr , as a place for customization, was possibly closer to a standard practice than /sbin for your start-up binaries. At some point during this period, /include tended to get moved to /usr/include , and parallels of /bin , /sbin , and /lib under /usr , for less general, more end-user oriented stuff, became fairly common.

Many hard disk manufacturers got their products stabilized, so systems running on floppy became somewhat rare. Disk size was becoming a non-issue, but the organizational structure was still useful.

/sbin , in particular, became viewed more as an "important system binaries" directory, rather than a "statically linked binaries" directory, but it was still common to arrange for /sbin to be on the first volume to be mounted, whether /bin was or not.

** late 1970s ~ mid 1980s:

This was where Shugart's SASI interface became something of a standard, and where IBM and others refined it to SCSI.

But we should note that neither of these interfaces was original in providing the ability to connect multiple drives. We should also note that the size of a system was increasing, so even with "huge" 20M hard drives, it was useful to physically split the file system across volumes, and the organizational divisions provided natural boundaries to split the volumes on. Thus, /usr as a place for integrators to use in customizing an installation became somewhat standard.

The content of /usr/bin , /usr/sbin , and /usr/lib became fairly standardized as well, partly because of the BSD work making once optional utility and application programs easier to get and install.

So now it became desirable to have another place to put the end-user customizations. /usr/local and /opt were among several directories that came to be commonly used for that purpose around this time. /home for the user account directories was not yet as common.

Remember that there were no hard-set rules about the structure of the file system tree. In particular, unix-like OSses (OS-9 is one example that comes quickly to mind) would tend to vary significantly from the mainstream practices. Also, many integrators and end-users would re-organize the tree as they saw fit, generally seeking more meaningful names.

Various efforts at merging the binary and library directories were tried, as well. Putting everything executable in /usr/bin and putting links in /bin and /sbin was just one of many attempts.

** mid 1980s ~ mid-first-decade of 21st century:

/bin , /sbin , /lib , and other parts directly under root (including some of /etc) have been the focus of unix industry standardization efforts, also in Linux.

Stuff under /usr has been the focus of distribution- or vendor-specific standardization, in both unix and Linux. And /home has become something close to a standard place for user-level login account home directories.

And various efforts to re-structure the tree continued to be tried, including merging.

Note that there is a natural organization seen here:

-- core stuff, things that all players in the industry want to rely on;

-- value-added stuff, things that individual vendors want to use to differentiate their offerings;

-- customization stuff, the applications that are the real reason for buying the computer.

Note also that Solaris, the captive OS of Sun and now Oracle, does not need the first two levels. They are vendor and integrator. Apple's Mac OS and iOS, likewise. Google's Android, likewise.

** mid 1990s ~ present:

Linux-based systems generally started a step or so behind mainstream unix in standardization efforts and such, but ultimately took the lead in general-purpose systems [in many ways, not all]. The kernel developers borrowed lots of ideas from embedded systems and non-unix unix-like systems, including alternate file system structures.

The BSD derivatives have not [been] as popular as the Linux-based systems, but there is a lot of sharing, so there is a bit of convergence, a lot of similar experiments.

The three pseudo-layers of the file system tree that [I] specified above have continued to be convenient for the various distributions (as virtual vendors) to use to show their stuff in. They have traditionally been reflected in the division of / , /usr , and /usr/local , or the several variations thereof.

Merging /bin with /usr/bin seems to me to be nothing less than pushing one set of integrations into the implied standard.

Unfortunately, in spite of the borrowings from embedded systems, one desktop-oriented file system layout now gets a lot of attention, and by the middle of the first [second, I meant] decade of the new century, has come to be seen as the one-and-only goal by a small, vocal, and inordinately influential group.

We missed the news. Linux on the desktop was Mac OS X. The fact that MkLinux was dropped for FreeBSD is only incidental. Linux and BSD are twins. We "won" the desktop. But there seem to be lots of engineers who wanted to be leading the charge, and they seem to want a re-match.

Okay, the last two paragraphs are somewhat speculative on my part.

The point I'm trying to make is that the /usr merge seems to me to be motivated by the assumption that the integrations thought to be appropriate for a certain desktop environment are necessarily appropriate and correct for all environments that use the Linux kernel. It's one of several such changes that are being pushed by a group of projects which includes systemd.

And, while I have seen noise about how the merge is more convenient to certain developers who take a certain class of approaches to desktop systems, for my purposes, I'd far rather see the approach taken by the GoboLinux project become the standard, if we have to have a single standard layout.

==========================

Not exactly a satisfactory exposition of the questions. It leaves dangling the question of whether there is a better organization, and I think there is.

Gobolinux is heading in the right direction, but common practices in Linux application packages interfere with taking the next step. And completely effacing all organization is just going to make those bad programming habits worse.