Web Systems Models

Historically, the World Wide Web has functioned simply as an "information delivery system." People have used it to gather information on a vast range of topics, with millions of Web sites serving as portals to that information. More recently, though, the Web has become something more than an electronic library of information. It has become a communication, information, and transactional platform through which many economic, social, political, educational, and cultural activities are conducted.

Information Delivery Model

When the Web functions as an information delivery system, Web development activities are fairly simple and straightforward. First, information content is entered into a document that will eventually become a Web page. This content is surrounded by special layout and formatting codes, written in the HyperText Markup Language (HTML), to control its structure and appearance in a Web browser. Then, the document is saved to a Web server computer to await public access. Users access the document by entering its Web address into their browsers. This address, known as the URL, or Uniform Resource Locator, specifies the site where the page is stored and its directory location on the Web server. The server, in turn, retrieves the page and sends it to the browser, which interprets the HTML code and renders the document on the computer screen.
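The parts of a URL described above can be pulled apart with Python's standard library; this is a minimal sketch, and the example address and directory path are hypothetical.

```python
from urllib.parse import urlparse

# A hypothetical URL naming a page stored on a Web server.
url = "http://www.example.com/catalog/products.html"
parts = urlparse(url)

print(parts.scheme)  # the protocol used to fetch the page: http
print(parts.netloc)  # the site where the page is stored: www.example.com
print(parts.path)    # the document's directory location: /catalog/products.html
```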

There are certain consequences of basing Web access on an information delivery model and in following the traditional Web development process. In the first place, the information content of the Web page gets "fixed" or "frozen" into place. It becomes embedded in and closely linked with the HTML formatting codes that surround it. Thus, it becomes difficult to make changes to the content of the page without rewriting and re-editing its presentation formats. So, it becomes troublesome to keep pages up to date, especially if the content changes routinely.

At the same time, Web page authors often need to be familiar with HTML coding. Even when using visual tools such as Expression Web or Dreamweaver, the author may need to tinker with the code to get the page to look as it should. Also, it often becomes necessary for a Web "expert" to work closely with the content provider, offering the technical skills to maintain the pages.

There are also limited ways for the user to interact with traditional Web pages. The user often serves as a passive reader of the content, with the Web server acting as a simple electronic "page turner" for the reader. Thus, Web sites built around the information delivery model risk becoming static, passive repositories of outdated information. Web pages risk becoming historical archives rather than timely, responsive sources of accurate, up-to-the-minute information.

Information Processing Model

To overcome this static, passive use of the Web, we need to begin viewing the Web not just as a simple information delivery system but as a full-featured information processing system. This means that the Web itself, and its constituent sites and pages, must be viewed as mechanisms for performing the full complement of input, processing, output, and storage activities required to deliver dynamic, active content -- in short, for providing the basic functionalities of an information processing system.

Figure 1-2. Functions of an information processing system.

In an information processing model the four basic input, processing, output, and storage functions have specific meanings:

  • The input function permits users to interact with the system, requesting processing options, controlling information access, and specifying delivery methods. In addition, the user can become the source of the data that the system processes and maintains in its repositories of stored information.
  • The processing function refers to the data manipulation activities and the processing logic needed to carry out the work of the system. The term implies that the system can be "programmed" to perform the arithmetic and logical operations necessary to manipulate input data and to produce output information.
  • The output function delivers the results of processing to the user in a correct, timely, and appropriately formatted fashion.
  • The storage function ensures the currency and integrity of processed information, maintaining it over the long term and permitting it to be added to, changed, or deleted in a systematic fashion. In the final analysis, stored information becomes the primary content of Web pages, reflecting the most current and accurate information appearing on those pages.
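The four functions above can be sketched as a toy processing cycle; the function names and the in-memory dictionary standing in for long-term storage are illustrative, not part of any real system.

```python
# A toy information processing cycle: input -> processing -> storage -> output.
storage = {}  # stands in for the system's repository of stored information

def input_function(raw):
    # Accept and clean up data supplied by the user.
    return raw.strip()

def processing_function(data):
    # Apply the system's processing logic (here, trivial counting arithmetic).
    prior = storage.get(data, {}).get("count", 0)
    return {"item": data, "count": prior + 1}

def storage_function(record):
    # Maintain the processed information so it stays current.
    storage[record["item"]] = record

def output_function(item):
    # Deliver stored results to the user in a formatted fashion.
    record = storage[item]
    return f"{record['item']}: seen {record['count']} time(s)"

storage_function(processing_function(input_function("  widget ")))
print(output_function("widget"))  # -> widget: seen 1 time(s)
```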

From an information processing perspective, the Web functions as -- and in fact is -- a giant, expansive computer system. Information processing activities take place across its separate hardware and software components, whether they are located in the same room or scattered halfway around the world.

By adopting an information processing model, we can begin applying Web technology to create Web sites that are truly dynamic, interactive, and up to date -- sites whose content is always up-to-the-minute current, is personalized to the needs of the user, changes automatically in response to user requests and to changes in the underlying information, and can be added to or changed by the user interacting with the page. An added benefit is that such sites can be created without the recurring need to rewrite or reformat Web pages. The pages themselves change dynamically to reflect changing information or changing user preferences.
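The separation of content from presentation described above can be sketched as a page that is assembled on demand from stored data; the headline data and the rendering function are hypothetical.

```python
# Content lives apart from its presentation. The page is generated at
# delivery time, so updating the stored data automatically updates the page
# without anyone rewriting or reformatting HTML by hand.
headlines = ["Server upgraded", "New catalog online"]  # stored content

def render_page(items):
    # Wrap the current content in HTML only when the page is requested.
    rows = "\n".join(f"  <li>{item}</li>" for item in items)
    return f"<ul>\n{rows}\n</ul>"

print(render_page(headlines))
```

Changing the `headlines` list changes the delivered page on the next request; the markup itself never has to be edited.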

Assigning Functions to Components

Returning to the notion of a three-tier, client/server system, take a look at how the hardware components, software components, and processing functions relate to one another from an information processing perspective. Recall that the hardware components in a three-tier environment consist of a Web client computer (the desktop PC), the Web server computer, and the database server. Each of these three hardware components runs corresponding software. Now, we can map the information processing functions across these three hardware/software tiers.

Figure 1-3. Functions of an information processing system mapped to a three-tier client/server system.

Input and output functions fall mainly to the Web client machine. User interface activities such as data entry, data validation, processing control, and output formatting are performed on the Web client.

The main information processing activities fall to the Web server, sometimes called the Web application server. Here is where the arithmetic and logical routines are programmed to carry out the business processing tasks of the system.

Finally, data storage and access functions take place on the database server. This component handles information storage and retrieval functions necessary for business processing to take place, and it permits long-term maintenance of stored information through file and database additions, changes, and deletions.
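The division of labor across the three tiers can be sketched in miniature; here each tier is modeled as a plain Python function, and an in-memory SQLite database stands in for the database server. The tier names and the order-tracking example are illustrative assumptions, not part of any real system.

```python
import sqlite3

# Tier 3: database server -- storage and retrieval.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (item TEXT, qty INTEGER)")

def database_tier(item, qty):
    # Maintain stored information and retrieve the current total.
    db.execute("INSERT INTO orders VALUES (?, ?)", (item, qty))
    row = db.execute("SELECT SUM(qty) FROM orders WHERE item = ?", (item,))
    return row.fetchone()[0]

# Tier 2: Web application server -- business processing logic.
def application_tier(item, qty):
    total = database_tier(item, qty)
    return f"{item}: {total} on order"

# Tier 1: Web client -- input validation and output formatting.
def client_tier(item, qty):
    if qty <= 0:
        return "Quantity must be positive."
    return application_tier(item, qty)

print(client_tier("widget", 3))  # -> widget: 3 on order
print(client_tier("widget", 2))  # -> widget: 5 on order
```

Each tier does only the specialized work it is best suited for, yet the three cooperate to complete a single request, whether they run on one machine or three.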

As you can see in the illustration, system functions are "farmed out" to the various separate components, although they all work together in an integrated system of activities. The basic idea is that specialty components perform the specialized work for which they are best designed. It is also true that system functions are not "place bound." That is, input, processing, output, and storage activities can take place wherever the components are located. They can be encapsulated within a single machine on a single desktop, scattered across two or more machines within a department or company, or widely scattered to the far reaches of the globe. In all of these cases the technologies and techniques used are virtually identical, making it a fairly routine matter to develop Web applications for whatever physical or geographic environment the developer encounters.
