In economics, we speak of the principle of diminishing returns. At some point, putting more resources into something no longer produces a proportionate payoff. As a thing becomes more complex, it gets more difficult to move forward without making major mistakes.
Part of my argument about the problems of rolling release is this concept applied to software. Is it possible to produce software which satisfies most needs, is almost totally bug free, and simply needs no improvement? I realize the question is hard to separate from the issue of user comfort. Yes, some people are still in love with WordPerfect 5 for DOS, and still pay money to get a copy. It's impossible to say whether it is genuinely superior to later versions, or simply the principle of diminishing returns at work for that individual: the subjective sense that they can't be more productive, and don't want to invest more time in learning something newer. It seems nobody listens to them. Marketing drones are entirely too concerned with some indefinable image consciousness, and seldom in step with actual consumers, yet it can be profitable to keep an old product alive as long as somebody is buying it.
Take away the idiots who only think they know what's going on, and you find most software packages pass through a cycle: a brilliant concept not yet fully implemented, then a series of very popular releases, then irrelevance because the most recent managers lost the vision. Had the product been managed well, they would have known when to stop and simply support a good thing until it no longer sold. Yes, we have to contend with the tendency of corporate buyers to believe any marketing baloney that strikes some unreasoning chord, but there remains a certain core of buyers who feel cheated when you stop supporting something they truly value. This explains why the likes of Enable O/A still have a user base who struggle to keep it running on differing implementations of the necessary underlying DOS environment. That ancient office suite was nearly the pinnacle of what folks actually need to get business done. Simply adding a GUI would have kept it ahead of the market, but some corporate droid decided to sell it to MS, who buried it so they could promote a clearly inferior office suite.
We already have a vast supply of products for the people who can't wait to buy the next new hardware gimmick, and the software to exploit it. What so few in the Open Source community realize is that sort of user is in the minority. Sure, worldwide they number in the millions, but the world's population is measured in billions. By now, the majority of that population has encountered a computer. There might be a period of fascination, but once they have assimilated the thing into their lives, for most of them it is relegated to the status of a tool, not a god. So keeping development going to make use of newer hardware technologies is almost a necessity, since most computer hardware tends to die all too soon, and the newer stuff is so very different. However, there is a growing number of computer users who just won't notice much of that, and aren't likely to care about even that little. Keep the hobby alive, because something truly essential will inevitably arise from all the feverish activity.
For the vast majority of computer users, though, we've already passed the point of diminishing returns. Feature saturation has set in, and it's time to consider keeping alive a branch of those projects which have already produced something good enough for most of the world. When you have a winner, stick with it. Fix the bugs; make it work better. Tighten the code; remove unnecessary layers of abstraction. This goes for the underlying libraries as well. I can't recall the article I read long ago, but someone explained it in terms I understood: the main cause of bloat was often not new features in the userland software, but new features in the libraries used to build it. Is there a release point where we have good-enough libraries, too? Keep that product alive, independently if necessary.
If computers and their software don't serve the needs of humans, the industry needs to die. If we can't develop software without presuming to define for users what they need, as if we were somehow a superior class justly dictating what users will do, we have no moral ground to stand on.
The basic purpose of a word processor is to format text for printing. If you aren't going to put it on paper, you really have no need for a word processor. However, I find a huge portion of the computer-using population doesn't make a distinction between documents and webpages; that is, not consciously. They know instinctively that if they want the contents of a webpage to print the way they want it to look, they'll have to copy from the page, paste into a word processor, format, then print. They focus on the presentation, and treat the information as a separate issue. Indeed, the former often takes precedence.
The problem here is that such users seldom have much more than a semi-conscious awareness of their own preferences, even when they focus on presentation. They have no idea what makes presentation effective for their audience; they simply assume people are affected as they themselves were at that particular moment. Tomorrow the same printed page becomes trash because it no longer speaks the same way, and no one seems to notice. People who have studied how the average brain processes information are more likely to reduce all the trashy, flashy extras in presentation, and stick with the essentials. They know that all caps, all bold, all italics, all underlined, all brightly colored tells the average brain the message must be unimportant, because it requires such a hideous paint job to convey the importance the sender attaches to it. And we all know most people are more impressed with themselves than other people are. Save the theatrics for live presentations; in print there are precious few with a talent for it.
Unload all the silliness, and you arrive at the information itself. Adding attributes to plain text has a well-established meaning. The framework was established long ago, and various attempts were made to produce software which approximates it. Some details shifted once standards were established for text displayed on a computer screen. What we used to do in print won't exactly work on computers, and for most uses, print has shifted to match what we do on computers. For example, it's no longer standard to indent the first line to mark the beginning of a paragraph; now we use vertical space between paragraphs, and leave the first lines unindented in the main text. We still set off large blocks of quoted material by indenting the whole paragraph, so there is some overlap. Whole books have been written on the details. The point has always been: what's the best way to inform the widest audience?
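The paragraph conventions described above translate directly into stylesheet rules. As a minimal sketch (the specific measurements are my own illustrative assumptions, not prescribed values), the modern styling looks like this in CSS:

```css
/* Paragraphs: vertical space between them, no first-line indent */
p {
  margin: 0 0 1em 0;   /* one line of space below each paragraph */
  text-indent: 0;      /* no indent on the first line */
}

/* Large quoted blocks: indent the whole paragraph to set it off */
blockquote {
  margin: 1em 2.5em;   /* space above and below, indented on both sides */
}
```

Relative units like em scale with whatever font size the reader has chosen, rather than pinning the layout to one fixed measurement.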
In the end, we have web publishing standards as reflected in the system called HTML (HyperText Markup Language). That framework continues to develop, but it's now the reflexive standard in the minds of most people who can read. Over the years, I have found myself moving away from the word processing model altogether. While I still have a printer attached to my computer, I seldom use word processing software. For things which must be printed, I prefer a typesetter (LyX), but for most things I prefer either plain text or HTML. All the more so since, even for printing, HTML processed by your browser seems a better way to go.
That is, it's possible to write formatting instructions (CSS, or Cascading Style Sheets) for inclusion with a webpage which are tailored just for printing. Even then, the whole idea is to avoid making choices for the reader, and to give them maximum freedom to alter the presentation. Of course, that means they'll need some awareness of what they can change and how, but for those who don't know, the basic standards are probably fine. I use minimal formatting, with one CSS file for browser display and another for printing. This lends itself to a certain global accessibility, since neither directs the result to fit a particular size of paper. My printer CSS allows the user to print on any paper, because paper size is solely an issue of browser and printer controls. All I do is provide relative guidelines, and those who really must change them can do so by telling their browser to prefer their own CSS over mine. I construct my CSS files specifically to be advisory, and easily overridden by anyone who has the savvy to set their own user CSS.
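One common way to arrange the two stylesheets (a sketch; the filenames here are hypothetical) is to link both from the page and let the media attribute select between them. The browser applies the first only on screen and the second only when printing, and the CSS cascade allows a reader's own user stylesheet, with declarations marked !important, to override either one.

```html
<!-- In the document head: one advisory stylesheet per medium -->
<link rel="stylesheet" href="screen.css" media="screen">
<link rel="stylesheet" href="print.css" media="print">
```

Inside the print stylesheet, sticking to relative units (em, percentages) rather than fixed page dimensions is what keeps the printed result independent of any particular paper size.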
On the one hand, this forces me to focus on the content; it's much the same with typesetting software. In my best academic style, I try to limit the use of italics and boldface, and reserve underlining for marking hyperlinks. If you really need white-on-black, you can tell your browser to make it so, and it should work fine. If you really like blinking text, you are free to learn how to make it happen and apply it to my webpages. You can print in any color you wish, because I only suggest black-on-white as the most common style. Depending on how your browser and printer cooperate, printing my webpages should work about the same as with a word processor.
I used to have a huge cache of word processor documents, and always worried about the format and whether anyone else could open and print them. Over the past few years, I have changed them all to HTML. It has become the most widely accepted format for just about any purpose.