Friday, January 8, 2010

If a god were almighty...

Most religions put some limitation on the capabilities of gods, e.g. their "power" and mental faculties. Some gods are human-like or personified (human believers projected their views onto the gods, no doubt), and have many faults in their characters and behaviors: a lot like a super-powerful human being.

Christians, Jews and Muslims, however, say that their god (the three faiths share the same root, so it is one and the same) is the only god and that it is almighty.

More or less every religion tends to assert that its god(s) is (are) superior to the ones that others believe in. How can a god be superior to other gods? Typically one claims that one's god is more "powerful" than the others. If you follow this logic, of course, the most powerful god can do anything the other gods can do, or can do it better, or can do things the other gods can't. It must then be a logical conclusion that an ultimate god must be almighty. Of course, this does not prove whether the god in question actually is almighty; nonetheless it seems this has been claimed without any good proof.

I think the people who started this religion needed to claim that their god was almighty in order to declare its supremacy at some point in history.

However, I was thinking: if any being were almighty, it would not have to do anything with events or humans at all. It would have no reason to do anything. It would not have had to create a world, or to force humans to labour for anything.

I won't go into a detailed argument, but it is a fallacy that a god could be almighty and at the same time have any human-like motivations, or any concern with humans or worldly matters.

Along the same line of thought, I was thinking about the idea of parallel worlds: that the current point in time is the starting point of infinite possibilities that lead to different time lines. This is probably a fallacy, too. If infinite possibilities existed at every point of time and space, the most "adjacent" time line would differ by only one "event". But since each time line consists of an infinite number of "events", the probability that any two time lines differ in only one event is zero in the limit (1 over infinity). Is this correct math??? Does anyone have any thoughts? Please mind, this is not a serious mathematical or philosophical conjecture; it is just an idea about a commonly used science fiction device, and the argument is at that level. As to the reality of the time line, I think that actually only one "time line" happens, and all the other "possibilities" (presuming they existed, which I doubt) become naught as soon as they fail to happen.
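Out of curiosity, here is a quick numerical check of that "1 over infinity" intuition, under a toy assumption of my own (not part of the original idea): each time line is a sequence of n independent binary "events", each a fair coin flip. The chance that two independently drawn time lines differ in exactly one event is then n / 2^n, which vanishes as n grows:

```python
from math import comb

def prob_differ_in_exactly_one(n):
    # Each of the n event pairs differs with probability 1/2,
    # independently; exactly one of the n positions must differ.
    # C(n, 1) * (1/2)^1 * (1/2)^(n-1) = n / 2^n
    return comb(n, 1) * (0.5 ** n)

for n in (10, 100, 1000):
    print(n, prob_differ_in_exactly_one(n))
```

With infinitely many events the probability goes to zero, which at least agrees with the hand-waving above; whether the coin-flip model is a fair stand-in for "infinite possibilities" is another question.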

Another commonly used idea is space travel: warp, jump, or faster-than-light travel. I think these are also fallacies. Would the recent SF idea of Stargate travel, using wormholes, ever be seriously thinkable at all? Anyone? Personally, I think human bodies would be disintegrated even if it were ever possible :-)

This is not to deny, however, that some old SF ideas, e.g. airplanes, rockets, TV, cell phones and computers, are reality now. I think, though, there is a line between what may become possible and what will never be possible.

Tuesday, January 5, 2010

Idiotic use of the current calendar system

The current Western calendar system we use is a mess, and I think that its international adoption was one of the worst decisions in human history.

For international use, we should have adopted something that makes much more sense and is more logically useful. Personally, I think we should have kept the Chinese or Iranian lunar calendar, which is more logical and astronomically better designed, and which also noted the precise days of solar-cycle events, at any rate for agricultural purposes.

The problem was, of course, that the lunar events (the lunar day and month) and the solar events (the solar day and year) do not match in their timings, e.g. their "starts" and "ends".

The current Western calendar uses the month, which was originally based on the phases of the Moon. Since the calendar is no longer based on the Moon's phases, the word "month" should have been abandoned.

Some of the month names are based on Roman numbering: September (7), October (8), November (9) and December (10). Since these names are no longer meaningful, we should have abandoned a practice that is meaningless in modern days.

Since the time of day is a solar event, noon is meant to be the time when the Sun is at its highest point in the sky, ignoring the current practice of regional "standardized" time. The Moon, however, cannot always be precisely full at midnight of the full-moon day.

Also, the weeks are based on neither solar nor lunar events. For international use, I think they should not have been used at all. Old lunar calendars marked holidays either by lunar events (the first day of the month, the first quarter, the full moon, the last quarter, and the last day of the month) or by decimal counting (the 10th day of the month, the 20th day of the month). Holidays based on 10-day cycles make more sense than the weekly cycle, except that full-moon nights seem more suitable for some nocturnal celebrations.

The lunar calendars noted solar events such as equinoxes, solstices and seasons. These do not fall on fixed dates of the lunar calendar every year, but that is hardly a problem, since the exact dates can be calculated precisely each year. From the way the old Roman calendars were made, it sounds as though the early Romans did not possess the precise astronomical knowledge to make this work well.

Instead of having the shortest month (February), leap years, and so on, the lunar calendar occasionally has to have a 13th month, but I think it is still better to have the 15th day of the month coincide with the full moon.
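The arithmetic behind that occasional 13th month is worth a quick sketch. Twelve lunar months fall about 11 days short of a solar year, and the classical fix (the Metonic cycle) inserts 7 leap months every 19 years, since 235 lunar months and 19 solar years are almost exactly the same length. A minimal check, using standard mean values for the year and month:

```python
# Mean astronomical values, in days.
TROPICAL_YEAR = 365.2422   # mean solar year
SYNODIC_MONTH = 29.5306    # mean lunar month (new moon to new moon)

# 12 lunar months fall short of a solar year by about 11 days.
shortfall = TROPICAL_YEAR - 12 * SYNODIC_MONTH
print(f"12 lunar months fall short of a year by {shortfall:.2f} days")

# Metonic cycle: 235 months = 19*12 regular months + 7 leap months.
years_19 = 19 * TROPICAL_YEAR
months_235 = 235 * SYNODIC_MONTH
print(f"19 solar years   = {years_19:.2f} days")
print(f"235 lunar months = {months_235:.2f} days")
print(f"mismatch over 19 years: {months_235 - years_19:.2f} days")
```

The 19-year mismatch is under a tenth of a day, which is why a lunisolar calendar with a well-scheduled leap month can keep the 15th of the month on the full moon while staying in step with the seasons.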

Also, Christians apparently argue over whether their "holy" day should be Sunday or Saturday. This argument is ridiculous. Which day is the seventh day is an entirely man-made affair. Neither day has any divine or astronomical significance, since the calendar system we use was not sanctioned by their god in any way. This argument is one of the idiocies of Christianity.

Monday, November 30, 2009

Just reflecting on Web technology

In the 1980s, computers still meant large mainframes and hobby computers. There wasn't much consumer computing outside large companies or universities. Email, Gopher, FTP and Telnet were the main software we dealt with in those days. "Network" often meant a modem connection to a particular host site, as the Internet was not yet accessible to the masses.

As I recall, this landscape changed dramatically once the WWW became available. It made the Internet available to more people at universities and eventually to the public. Telecommunications companies probably played a large role in this technology shift, because they saw a profitable new market in the Internet.

Now our computing resources typically sit on host computers on the network, and we communicate with them from our desktop or laptop computers. Under the client-server model, in the old days, we would use either generic "terminal" software (e.g. a serial terminal emulator, an X server, etc.) or specialized client software (e.g. GenBank clients or database clients: communication clients specific to one service) on the local user machines. These days, using a Web client is commonplace. Alternatively, we can use a P2P model, if the application does not require central host machines to provide the storage for data and logic.

I started wondering how I could classify these systems and communication mechanisms.

Although the generic "terminal" software paradigm is different from using custom-built client software, I guess it can still be classified under the client-server design pattern. There are hosts and clients, and the end users sit at local clients to communicate with central hosts. The clients (the GUI) are either provided by the servers or must be installed on the local machines a priori.

Web browsers follow the same "generic" (or flexible?) client principle. A browser is like an extension of the old generic terminal paradigm, in that it gets its GUI from the server. But unlike something such as an X application, which presents a more rigid GUI on the "terminal", the browser uses the content itself as the GUI: the organization of the static or dynamic content acts both as the GUI and as the information on which the end users must act. Judging from the explosion of the Web, this seems to have been the big success factor. Evidently, making information readily available (rather than engaging in the time-consuming work of creating customized client applications) was more beneficial and important.

I have, however, always wondered about the validity of Web browser technology, as it became increasingly popular as though it were the only valid technological choice in distributed computing. Despite the advantages of Web technology, the traditional client-server approach seems more concrete in terms of functionality.

The advantages of using Web browsers may be:
1) Web applications become available on every platform where a Web browser is installed.
2) There is no need to design, create, and install specialized client software on individual machines, as long as Web browsers are pre-installed on those machines.
3) Web applications may be delivered more quickly than through the traditional client-server development cycle, particularly if they are made of only static content.

On the other hand, the weaknesses were said to be:
1) The UI is better suited to presentation-oriented Web applications.
2) It may not necessarily be suited to data-centric applications.
3) It is not suited to building a "rich client" that behaves more like a real desktop application.

Today, when they are needed, "rich" clients are typically built with the Flash plugin. The Flash plugin takes over part of the Web browser page and provides its own UI capabilities and scriptability (ActionScript), whereas the Web browsers themselves are restricted to HTML, DHTML, Java applets and JavaScript (except that IE also allows proprietary ActiveX).

Java applets were a similar technology, but they did not take off the way Flash did. In my opinion this was mainly because they were too unstable across the different versions of Java in use and across the different browsers that implemented them. Also, their GUI libraries (AWT and Swing) were not as easy to program as Flash.

DHTML had great promise, but it too was an unstable and unreliable technology. Perhaps many browsers simply did not implement it well. Or, although DHTML was "doable" engineering, perhaps it was an inadequate technology. If that is the case, I hope that one day someone can prove it theoretically, as an important lesson.

I think the moral of the story is: 1) what works wins, and 2) the technology that is easy to use wins.

If a Web application does not use HTML but is made of Flash content only, the popularity of Flash seems to indicate that we could replace Web browsers with a standalone Flash application.

The success of the Web seems to indicate that a flexible client-hosting application is preferred over custom-made, application-specific clients.

The Flash code sent from the server describes to the client side which GUI elements it uses and what actions they may perform. Although it is proprietary code, i.e. understood only by the Flash client, in principle it is a remote UI definition language.

There are other remote GUI definition protocols and languages, such as the X Window protocol and its window-manager variants, VNC, OpenXUP, XUI, and JSF. From these, I think we could define a platform-neutral model or meta-language for GUIs. Such a model or language could be useful for the higher-level design of GUIs and for the automated generation of executable GUI code.
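To make the idea concrete, here is a minimal sketch of what such a platform-neutral GUI meta-language might look like: the GUI is plain data sent from a server, and each platform supplies its own renderer. Everything here (the widget-type names, the `render_text` fallback renderer) is hypothetical illustration, not any real protocol:

```python
# A GUI described as data: a tree of widget nodes. A server could send
# this to any client that understands the (hypothetical) vocabulary.
login_form = {
    "type": "window", "title": "Login",
    "children": [
        {"type": "label", "text": "User name"},
        {"type": "input", "id": "user"},
        {"type": "label", "text": "Password"},
        {"type": "input", "id": "pass", "secret": True},
        {"type": "button", "text": "Sign in", "action": "submit"},
    ],
}

def render_text(node, depth=0):
    """A trivial 'renderer': walks the widget tree and returns an
    indented textual outline. A real client would instead map each
    node onto native widgets (HTML, Swing, GTK, ...)."""
    pad = "  " * depth
    line = pad + node["type"]
    if "title" in node:
        line += f" [{node['title']}]"
    if "text" in node:
        line += f": {node['text']}"
    lines = [line]
    for child in node.get("children", []):
        lines.extend(render_text(child, depth + 1))
    return lines

print("\n".join(render_text(login_form)))
```

The point of the sketch is the separation: the tree is the platform-neutral model, and the renderer is the only platform-specific part, which is roughly what X, VNC, XUL-like languages and JSF each do in their own way.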

On the other hand, I have some reservations about whether such a high-level GUI definition language would be useful or beneficial for designing special-purpose clients. JSF proved the point, since it was often found too generic or lacking specific widgets. The fact that so many variants of X window managers and similar protocols had to be created in the past may be further evidence.

The recent trend in programmable "intelligent" mobile devices is the opposite: they want the least network traffic possible. This favours client software pre-installed on the device with a pre-defined GUI. The client can show dynamic content, but the base GUI "screens" are defined by the installed code. The network traffic is meant to carry only the content (data), not GUI definitions or code. Much effort has gone into making client software easy to discover and install.