Telecommunications - a Q&A topical approach - Part III:
Network Technologies, Services, and Planning Strategies

Task 1

1. Define internetworking. Highlight the function of the following internetworking devices:

a. Bridges

b. Routers

c. Gateways

Answer:

a. A bridge is an internetworking device that provides a communication pathway between two or more network segments or subnetworks. A network segment or subnetwork has the same network address and the same type of networking technology. For example, a server with two network adapters can provide bridging between them. The bridge provides a way for a station on one network to broadcast messages to stations on the other network. It is therefore a two-port (or more) device that joins network segments. Conversely, a bridge can be used to split a busy network into two segments, thus reducing the amount of traffic on each and improving performance. Bridges can filter traffic, preventing broadcasts on one network from reaching another and allowing only essential internetworking traffic to cross the bridge. Other internetworking devices are repeaters, routers, and gateways.

Bridges are installed for the following reasons:

i. To extend the distance or number of nodes for the entire network.

ii. To reduce traffic bottlenecks caused by an excessive number of attached nodes.

iii. To link unlike networks such as Ethernet and token ring and forward packets between them, assuming they run the same network protocol.

A bridge is a stand-alone device or is created in a server by installing one or more network interface cards, assuming the server operating system supports bridging. Each local area network (LAN) segment connected by a bridge has a distinct network number. As an analogy, the network number is like a street name, and workstation numbers are like house numbers. A bridge forwards packets between attached network segments. Novell NetWare, Banyan VINES, and Microsoft networks provide server bridging. External bridging is required if bridging functions bog down a server. External bridges are manufactured by Cisco, 3COM, Cabletron, and many other vendors.

Bridges provide filtering functions by reading the address in the Ethernet or token ring frame to determine which LAN segment data packets belong to. However, bridges don't have access to Network layer protocol information, so they can't provide best-path routing. Routers can be programmed (or will learn) to route packets over specific paths to reduce costs or avoid traffic congestion, and multiprotocol routers can be used to handle network traffic that consists of multiple communications protocols.

As networks grow, the number of bridged connections grows, opening up the possibility that loops or inefficient paths will appear. Bridges also lack congestion management, a problem made worse by the many workstations that need to broadcast. In a bridged network, flow control is relegated to the end system. Bridges may actually add to congestion problems by transmitting excess packets in an attempt to recover from congestion.

There are generally two types of bridges: local and remote. A local bridge provides connection points for LANs and is used to interconnect LAN segments within the same building or area. Remote bridges have ports for analog or digital telecommunication links to connect networks at other locations. Connections between remote bridges are made over analog lines using modems, or over digital leased lines such as T1, which provides 1.544 Mbits/sec throughput.
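
The forwarding and filtering behavior described above can be modeled in a few lines. The following Python fragment is a minimal sketch of a learning bridge, with frames reduced to invented source/destination address pairs; a real bridge reads these addresses out of Ethernet or token ring frames in hardware.

    # Minimal sketch of a learning bridge (hypothetical frame format).
    class LearningBridge:
        def __init__(self, ports):
            self.ports = ports      # e.g. ["segment_A", "segment_B"]
            self.table = {}         # station address -> port it was seen on

        def receive(self, frame, in_port):
            # Learn: remember which port the source station lives on.
            self.table[frame["src"]] = in_port
            out_port = self.table.get(frame["dst"])
            if out_port == in_port:
                return None         # destination is local: filter the frame
            if out_port is not None:
                return out_port     # destination known: forward to one port
            # Destination unknown (or a broadcast): flood all other ports.
            return [p for p in self.ports if p != in_port]

    bridge = LearningBridge(["segment_A", "segment_B"])
    bridge.receive({"src": "00:01", "dst": "00:02"}, "segment_A")          # flooded
    print(bridge.receive({"src": "00:02", "dst": "00:01"}, "segment_B"))  # segment_A

Note that the bridge never inspects anything above the frame address, which is exactly why it cannot make best-path routing decisions.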

b. Routers, as mentioned above, are another device type for internetworking. Routers are packet switches (or Network layer relays) that operate in the Network layer of the Open Systems Interconnection (OSI) protocol model. Routers interconnect networks over local or wide areas and provide traffic control and filtering functions when more than one pathway exists between two end-points on the network. Routers are critical to large internetworks and wide area networks that use telecommunication links. They direct packets along the most efficient or economical path in mesh networks that consist of redundant paths to a destination.

A router examines address information in packets and sends the packet along a predetermined path to its destination. Routers maintain tables of adjacent routers and local area networks (LANs) on the network. When a router receives a packet, it looks at these tables to see if it can send the packet directly to the destination. If not, it determines the location of a router that can forward the packet to its destination. The forwarding process does require some processing. The router must fully receive a packet, view address information and then forward it. Therefore, network operating systems such as Novell NetWare support routing in the server. This is accomplished by installing two or more network interface cards. However, routing tasks can slow down a server. If so, external routers are necessary to free the server for file-related tasks only.

Routers handle either a single protocol such as TCP/IP, or multiple protocols, such as SPX/IPX and TCP/IP. However, unroutable protocols can be carried across internetworks using encapsulation techniques. Routers allow a network to be segmented into separately addressable networks. The segments are easier to manage. Each LAN segment has its specific LAN number, and each workstation on that segment has its own address. This is the information placed in packets by the Network layer protocols.

Routers forward packets between networks that have different network addresses. When a router receives a packet, it begins a procedure that unpacks the packet and determines where the packet should be sent. The procedure a router follows for each packet is:

i. The packet is error checked using the checksum value in the packet.

ii. The information added by the Physical and Data-Link level protocols of the sending device is stripped off.

iii. The information added by the Network protocols in the source computer is evaluated.

The Network layer protocol information contains the destination address and, in the case of source routing networks like TCP/IP, a list of hops that define a predetermined "best path" through the network. A network is usually built with fault tolerance in mind. Several paths are created among routers to provide a backup path in case a link fails. Some of these paths may use high-speed networks such as the Fiber Distributed Data Interface (FDDI) in the campus or metropolitan area or direct digital lines (T1) for wide area networks. Routers can send data over the best of these paths, depending on which is the least costly to use, the fastest, the most direct, or the one specified by an administrator.
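
The "best path" selection described above is, at bottom, a least-cost search over the router's map of links. The following Python sketch uses Dijkstra's shortest-path algorithm over an invented three-router mesh; real routers build and update such link tables with routing protocols rather than hard-coding them.

    import heapq

    def least_cost_path(links, src, dst):
        # links: {router: [(neighbor, cost), ...]}
        queue, seen = [(0, src, [src])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for neighbor, link_cost in links.get(node, []):
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
        return None

    links = {
        "R1": [("R2", 1), ("R3", 4)],   # costs are invented; they could
        "R2": [("R1", 1), ("R3", 1)],   # reflect tariffs, speed, or an
        "R3": [("R2", 1), ("R1", 4)],   # administrator's preference
    }
    print(least_cost_path(links, "R1", "R3"))   # (2, ['R1', 'R2', 'R3'])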

c. A gateway is a computer system or other device that acts as a translator between two systems that do not use the same communication protocols, data formatting structures, languages, and/or architecture. A gateway is unlike a bridge, which simply passes information between two systems without conversion. A gateway repackages information or changes its syntax to match the destination system.

Note that most gateways operate at the Application layer of the Open Systems Interconnection (OSI) protocol model, which is the top-most layer. Also note that because gateways perform protocol conversions, their performance is not spectacular.

An IBM host gateway connects local area network (LAN) workstations into systems that in the past did not recognize intelligent computers attached to LANs. With a gateway, workstations on the LAN appear as 3270 terminals to the IBM host. A PC's keyboard is mapped to the 3270 keyboard format. However, it's usually possible to switch between the 3270 session and a normal stand-alone computer session by pressing ALT-ESC or another appropriate key sequence. More sophisticated gateway functions allow PCs connected to the gateway to transfer files to and from the host, or to run client-server applications that let PCs access back-end database services on the host system. IBM's Advanced Peer-to-Peer Networking (APPN) provides peer-to-peer networking services in the IBM environment, so gateways are becoming less of an issue. In other words, the IBM host simply becomes part of the network.

Gateways into DEC systems are used for many of the same reasons that IBM host gateways are used. Digital PATHWORKS products provide access for a variety of personal computers into DEC's VMS (Virtual Memory System) systems.

A LAN gateway provides a pathway for data to flow from one LAN to another with an intermediate LAN serving as the interconnection method. This intermediate LAN typically uses a different protocol, so data is converted for transport over it. For example, many routers provide both Ethernet and FDDI connections. Packets moving from the Ethernet LAN to the FDDI LAN can be either translated (a gateway function) and delivered to a node on the FDDI LAN, or they can be routed to another Ethernet LAN attached to the FDDI LAN. This last option is a form of encapsulation, and the FDDI network serves as a backbone for the Ethernet LANs. There are also protocol gateways such as AppleTalk-to-TCP/IP, IPX-to-TCP/IP, and others.

Electronic mail gateways translate messages from one vendor's messaging application to another's so that users with different E-mail applications can share messages over a network. A typical E-mail gateway converts messages to the X.400 format for electronic mail messaging. X.400 is a common denominator among many E-mail systems. Most E-mail systems are able to convert their messages to X.400 and interpret X.400 messages, so the X.400 system can serve as an E-mail switching system.
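
The repackaging a gateway performs can be pictured as translation into and out of a common intermediate format, which is exactly the role X.400 plays among E-mail systems. The Python sketch below uses invented message fields purely to illustrate the shape of the conversion; real X.400 translation is far more involved.

    # Hypothetical vendor formats on either side of a common format.
    def vendor_a_to_common(msg):
        return {"to": msg["recipient"], "from": msg["sender"], "body": msg["text"]}

    def common_to_vendor_b(msg):
        return {"dest": msg["to"], "orig": msg["from"], "content": msg["body"]}

    mail = {"recipient": "kim", "sender": "lee", "text": "status report attached"}
    print(common_to_vendor_b(vendor_a_to_common(mail)))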

2. Explain the need for a communications architecture.

Answer:

Network architecture defines how computer equipment and other devices are linked together to form a communication system that allows users to share information and resources. There are proprietary network architectures, such as IBM's Systems Network Architecture (SNA) and DEC's Digital Network Architecture (DNA), and there are open architectures like the Open Systems Interconnection (OSI) model defined by the ISO. Network architectures are defined in layers. If the standard is open, it provides a way for vendors to design software and hardware that is interoperable with other vendors' products. However, the OSI model has remained a model, rather than a fully accepted international standard. Due to the wide variety of existing de facto standards, most vendors have simply decided to support the many different protocols that are used in the industry rather than conform to one standard.

Layering specifies different functions and services at levels in a "protocol stack." The protocols define how communication takes place, such as the flow of data between systems, error detection and correction, the formatting of data, the packaging of data, and other features.

Communication is the primary objective of any network architecture. In the past, a vendor was more concerned with making sure its own products could communicate, and while it might have opened the architecture up so other vendors could produce compatible products, making them compatible was often difficult. At any rate, protocols specify a set of rules and procedures that define how communication takes place at different levels of operation. The lowest layers define physical connections, such as the cable type, access method, and topology, and how data is sent over the network. Further up are protocols that establish connections and maintain communication sessions between systems, and still further up are protocols that access lower-level network communication functions and interoperate with applications in other systems attached to the network.

As mentioned, the OSI model has become the model to which all other network architectures and protocols are compared. The purpose of the OSI model is to coordinate communication standards among vendors. While many vendors have continued to pursue their own standards, some, such as DEC and IBM, have integrated OSI into their networking strategies along with Internet standards such as TCP/IP.

Interoperability is a major concern as LANs are connected into enterprise-wide networks. Various techniques are used to accomplish this, including the use of multiple protocols in a single system or techniques that hide protocols with a layer of "middleware." Middleware can also provide an interface that lets different applications on different platforms exchange information. Using these techniques, users can access a variety of multivendor products from their desktop applications.

3. Discuss the key attributes of the OSI Reference Model and its seven constituent layers. Why would you use OSI supported products instead of products based on a proprietary architecture in your setting?

Answer:

ISO (International Organization for Standardization) has the goal of promoting and developing standards for international exchange. ISO is responsible for the development and maintenance of the Open Systems Interconnection (OSI) reference model, which is described below. The OSI standards promote open networking environments that let multivendor computer systems communicate with one another using protocols that have been accepted internationally by the ISO members. However, the work of the ISO is much broader than the communication and networking standards described in the OSI model. ISO is involved in the international standardization of just about every service or manufactured product.

The OSI reference model defines communication protocols in seven layers. Each layer has well-defined functions, and these functions interrelate with the functions in adjoining layers. The lowest layers define the physical media, connectors, and components that provide network communication, while the highest layers define how applications access communication services. The OSI model was derived from IBM's proprietary Systems Network Architecture (SNA), which is an architectural description of the protocols, formats, and structures required to transmit packets of information in a networked environment.

Vendors use the OSI layers to design products that interoperate with other vendors' products; however, designing to the OSI model does not guarantee interoperability because there are variations in the standards. To resolve some of these problems, governments have issued OSI specifications called Government Open Systems Interconnection Profiles (GOSIPs). These profiles specify the level of OSI compatibility that hardware and software products must have. Vendors doing business with the government must provide products that comply with the profiles. In the United States, the National Institute of Standards and Technology (NIST) is responsible for GOSIP and for issuing yearly updated procurement standards, which are called FIPS 146.

The OSI model is built on the concept of a layered architecture. Layered architectures provide interoperability among multivendor systems. Without open, layered, and standardized protocols, buyers would need to purchase equipment from one vendor.

Layering specifies different functions and services at levels in a "protocol stack." Functions and standards defined for each layer are discussed below. Note that each communicating device has hardware and software that is designed around the stack. Keep in mind that the stack merely defines how to create hardware and software components that operate at each level of the stack. So if you wanted to create a network interface card that would interoperate with other vendors' cards, you would comply with protocols defined in the lower layers of the stack. Layers above the Physical layer specify the creation of software procedures, formats, and other aspects of communication. The higher you climb in the stack, the more sophisticated are the procedures.

The boundary between each layer is called the interface, and the layers are connected by service access points. If a process running in an upper layer requires service from a process running in a lower layer, it passes requests through a service access point associated with the application. Communication between two systems takes place by initiating requests down through the protocol stack on one system and transferring the request at the Physical layer to the other system. The other system passes the request up through its protocol stack and responds in like manner. Each layer provides information or service that prepares the message for transport to the other system. Examples include error checking, dividing and packaging the information, and keeping track of the session to make sure it stays alive long enough to get the message across. A summary of each layer follows.

APPLICATION LAYER. This layer defines how applications interface with the underlying communication system. Some of the included standards are: ISO 8649/8650/10035 (Association Control Service Protocol), ISO 8571 (File Transfer, Access, and Management), ISO 8831/8832 (Job Transfer and Manipulation), ISO 9040/9041 (Virtual Terminal Service), etc.

PRESENTATION LAYER. This layer provides translation functions for data formats and representation. Standards for this layer include: ISO 8822/8823 (Presentation Services) and ISO 8824 (Abstract Syntax Notation One).

SESSION LAYER. This layer allows dialog between stations in a connection-oriented session. Standards for this layer include: ISO 8326 (Session Service Definition) and ISO 8327 (Session Service Protocol).

TRANSPORT LAYER. This layer provides a communication channel in which end systems can acknowledge receipt of data or request retransmission, separate from similar functions handled by the network itself. This layer includes the OSI Transport Class 0, Class 1, and Class 4 protocols, which are similar to the Internet's TCP and Novell's SPX. Standards for this layer include: ISO 8072/8073 (Transport Service Definition, Connection-Oriented Transport Protocol, Connectionless-Mode Transport).

NETWORK LAYER. This layer sets up, monitors, and takes down network connections, and it provides routing functions. This layer supports OSI's Connectionless Network Service and Connection-Oriented Network Service, which provide services similar to the Internet Protocol (IP) and Novell's IPX. Standards for this layer include: ISO X.25 (Packet Level Protocol), ISO 8348 (Network Service), ISO 8880 (Protocols to Provide Network Service), ISO 9542 (Connectionless End System-to-Intermediate System (ES-IS)), ISO 10030 (Connection-Mode ES-IS).

DATA-LINK LAYER. This layer frames data for bit-stream transmission in the Physical layer and ensures reliable transmission between stations. This layer typically includes standards such as Ethernet Carrier Sense Multiple Access with Collision Detection, Token Bus, and Token Ring. It also includes FDDI. There are two sublayers: Media Access Control (MAC) and Logical Link Control (LLC). Standards for this layer include: ISO 4335 (High-level Data Link Control) and ISO 8802 (Local Area Networks).

PHYSICAL LAYER. This layer defines hardware standards such as connectors and the structure of the bit-stream that flows between devices. Standards for this layer are too numerous to list.
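
The trip down one stack and up the other, described above, can be sketched as successive encapsulation: each layer wraps the data from the layer above in its own header, and the receiving stack strips the headers in reverse order. The Python fragment below is a toy model with invented header contents, meant only to show the mechanism.

    LAYERS = ["Application", "Presentation", "Session",
              "Transport", "Network", "Data-Link", "Physical"]

    def send_down_stack(data):
        for layer in LAYERS:
            data = f"[{layer}-hdr|{data}]"      # each layer adds its header
        return data                             # what goes "on the wire"

    def receive_up_stack(data):
        for layer in reversed(LAYERS):          # outermost header first
            assert data.startswith(f"[{layer}-hdr|") and data.endswith("]")
            data = data[len(layer) + 6 : -1]    # strip this layer's header
        return data

    wire = send_down_stack("GET file")
    print(receive_up_stack(wire))               # GET file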

4. Briefly describe the distinguishing capabilities of the TCP/IP protocol suite.

Answer:

The development goals for the TCP/IP protocol suite were to allow communication among a variety of independent, multivendor systems. In 1983, TCP/IP protocols became the official transport mechanism for the DoD Internet, which evolved into a system of interconnected networks spanning the globe. It has strong internetworking capabilities and is undergoing a surge of popularity primarily because its development is open and supported by the U.S. government. The protocols are well tested and documented.

The original TCP protocol was developed as a way to interconnect networks using many different types of transmission methods. To accommodate these media, the concept of a gateway (later called a router) was created in which packets from one network were encapsulated into a package that contained the address of another gateway. The packet might be repackaged and addressed to several gateways before reaching its final destination. This encapsulation method was used for several reasons, but the most important was that the designers did not want the owners of the various networks to alter their intranetworking schemes to accommodate internetworking. It was assumed that every network would implement its own communication techniques.

The TCP protocol sets up a two-way (duplex) connection between two systems using the sockets interface. A socket is one end of a communication that specifies the address of the computer and a "port" within that computer that a running application is using to communicate. You might think of this arrangement as you would a telephone within a building: the building has an address, and the telephone extension is like a port within that building that connects you with a specific person. Likewise, a socket is a connection to an application or process running within a computer.
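
As a concrete illustration of the sockets interface, the short Python fragment below opens a TCP connection to a socket identified by an (address, port) pair. The address and port are placeholders (a documentation address and the echo port), not a real service; everything TCP promises below this interface (flow control, sequencing, retransmission) happens without the application's involvement.

    import socket

    # Placeholder endpoint; substitute the address and port of a real service.
    with socket.create_connection(("192.0.2.10", 7), timeout=5) as sock:
        sock.sendall(b"hello")      # write to the duplex byte stream
        reply = sock.recv(1024)     # read whatever the peer sends back
        print(reply)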

TCP communication sessions are connection-oriented and have the following features:

i. Flow control provides a way for two systems to actively cooperate in the transmission of packets to prevent overflows and lost packets.

ii. Acknowledgment of packet receipt lets the sender know the recipient has received packets.

iii. End-to-end sequencing ensures that packets arrive in order so the destination doesn't need to organize them.

iv. A checksumming feature is used to ensure the integrity of packets.

v. Retransmission of corrupt or lost packets can be handled in a timely and efficient manner.

Connection-oriented sessions require a setup phase, a take-down phase, and a lot of monitoring and possibly more excess traffic from overhead than is necessary for some data transmissions.

IP (Internet Protocol) is a connectionless communication protocol that by itself provides a datagram service. Datagrams are self-contained packets of information that are forwarded by routers based on their address and the routing table information contained in the routers. Datagrams can be addressed to single nodes or multiple nodes. There is no flow control, acknowledgment of receipt, error checking, or sequencing. Datagrams may traverse different paths to the destination and thus arrive out of sequence. The receiving station is responsible for resequencing and determining if packets are lost. IP handles congestion by simply discarding packets. Resequencing and error handling are taken care of by upper layer protocols, not by IP. Thus, IP is fast and efficient and well suited to modern networks and telecommunication systems that already provide relatively reliable service.
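
The datagram model is easiest to see through UDP, which adds little more than port numbers on top of IP. The Python sketch below sends a single connectionless datagram to a placeholder address; there is no setup, acknowledgment, or sequencing, so any recovery is left to the upper layers or the application.

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"reading=42", ("192.0.2.10", 9999))   # fire and forget
    sock.close()
    # If this datagram is dropped or arrives out of order, IP does not
    # care; noticing and recovering is the upper layers' job.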

IP works on a number of local and wide area networks. When IP runs in the LAN environment on an Ethernet network, for example, the data field in the Ethernet frame holds the IP packet, and a specific field in the frame indicates that IP information is enclosed. IP uses an addressing scheme that works independently of the network hardware address. IP does not use the hardware address, but instead uses an assigned address for each node.

Every node on a TCP/IP network requires a 4-byte (32-bit) numeric address that identifies both a network and a local host or node on the network. This address is written as four numbers separated by dots, for example, 191.31.140.115. In most cases, the network administrator sets up these addresses when installing new workstations; however, in some cases it is possible for a workstation to query a server for a dynamically assigned address when it boots up. The assignment of addresses is arbitrary within a company or organization, but if the company plans to connect with the Internet anytime in the near future, it is necessary to obtain registered addresses from the Defense Data Network. There are three classes of Internet addresses:

i. Class A - supports 16 million hosts (attached computers), but only 127 assigned network numbers exist;

ii. Class B - supports 65,000 hosts and 16,000 networks;

iii. Class C - supports 254 hosts and 2 million network numbers.

Because the Internet address is a combination of network and host numbers, multiple hosts can share the network portion of the address, but each host has its own unique host number. For example, in class C addresses, the first three sets of digits are the network number and the last set is the host number.
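
The network/host split follows directly from the first octet of the address, so it is easy to compute. The Python sketch below classifies a (pre-CIDR) address such as the 191.31.140.115 example above; it deliberately ignores special cases such as the loopback network.

    def classify(address):
        octets = address.split(".")
        first = int(octets[0])
        if first < 128:   # Class A: 1 network octet, 3 host octets
            return "A", octets[0], ".".join(octets[1:])
        if first < 192:   # Class B: 2 network octets, 2 host octets
            return "B", ".".join(octets[:2]), ".".join(octets[2:])
        if first < 224:   # Class C: 3 network octets, 1 host octet
            return "C", ".".join(octets[:3]), octets[3]
        return "D/E", address, ""

    print(classify("191.31.140.115"))   # ('B', '191.31', '140.115')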

Task 2

1. Describe major trends that are driving the development of high speed ATM-based networks.

Answer:

ATM (Asynchronous Transfer Mode) is a data transmission technology that has the potential to revolutionize the way computer networks are built. Viable for both local and wide area networks, this technology provides high-speed data transmission rates and supports many types of traffic including voice, data, facsimile, real-time video, CD-quality audio, and imaging. Carriers such as AT&T and US Sprint are already deploying ATM over a wide area and offering multimegabit data transmission services to customers. Over the 1994 to 1995 time frame, ATM products will emerge from almost every hardware vendor to provide:

• ATM routers and ATM switches that connect to carrier ATM services for building enterprise-wide global networks.

• ATM devices for building internal private backbone networks that interconnect all the local area networks (LANs) within organizations.

• ATM adapters and workgroup switches for bringing high-speed ATM connections to desktop computers that run emerging multimedia applications.

ATM takes advantage of the high data throughput rates possible on fiber-optic cables. In the carrier systems, high-speed ATM implementations (155 Mbits/sec to 622 Mbits/sec) use the Synchronous Optical Network (SONET), which is implemented on optical cable and provides a common global telecommunication standard. While fiber networks implementing ATM are built for the public telecommunication systems, ATM is also considered an appropriate technology for private internal switching networks that reach all the way to the desktop. As ATM becomes more established and competition for customers increases, it is probable that 155 Mbits/sec ATM boards will be common in desktop multimedia computers by the middle of the decade. With the number of vendors getting into ATM, competition will surely be fierce.

Current LAN technology does not provide enough bandwidth for the enterprise-wide use of emerging applications such as multimedia and real-time video. The latter requires data transmission capabilities in which a certain amount of bandwidth must be guaranteed to prevent dropouts that appear as jittery images. Shared LAN media like Ethernet can quickly become saturated with traffic loads that prevent real-time applications. ATM overcomes these problems because of its high bandwidth, its ability to dedicate a certain bandwidth to an application, and its fixed-size packets (called cells).

ATM has the potential to become the standard data transmission method that replaces most of today's voice and communications devices with ATM switching devices. It is interesting to note that during early standardizations, many assumed ATM would not be widely implemented until the next century. However, the need for high-bandwidth services in the carrier networks and in LAN environments has driven vendors to produce products well ahead of schedule.

2. Discuss a key advantage of ATM over STM (Synchronous Transfer Mode)

Answer:

In synchronous transmission, information is transferred in blocks (frames) of bits that are synchronized with clock signals. Special characters are used to begin the synchronization and periodically check its accuracy.

In asynchronous transmission, information is sent one character at a time as a set of bits. Each character is framed by a "start-of-character" bit and a "stop" bit. A parity bit is used for error detection and correction.

ATM is a broadband technology for transmitting voice, video, and data over LANs or WANs. It is a cell relay technology, meaning that the data packets have a fixed size. You can think of a cell as a sort of vehicle that transports blocks of data from one device to another across an ATM switching device. All the cells are the same size, unlike Frame Relay and LAN systems in which packets can vary in size. Using same-size cells provides a way of predicting and guaranteeing bandwidth for applications that need it. Variable-length packets can cause traffic delays at switches, in the same way that cars must wait for long trucks to make turns at busy intersections.

The switching device is the important component in ATM. It can serve as a hub within an organization that quickly relays packets from one node to another, or it can serve as a wide area communication device, transmitting ATM cells between remote LANs at high speeds. Conventional LANs like Ethernet, Fiber Distributed Data Interface (FDDI), and token ring use shared media in which only one node can transmit at any one time. ATM, on the other hand, provides any-to-any connections, and nodes can transmit simultaneously. Information from many nodes is multiplexed as a stream of cells. In this system, the ATM switch may be owned by a public service provider or be part of an organization's internal network.

ATM is a cell relay service. A cell is a fixed-size container of information that is relayed between switches in the network at the Data-Link layer. The ATM network does not provide error detection services except to throw out corrupted packets. Like Frame Relay, it relies on the end nodes to detect problems and request retransmissions. ATM is purely a data transmission service, and end nodes do not rely on it for any error correction services. Because of its fixed cell size and the elimination of error detection, ATM can operate in the gigabit-per-second range. Fixed-size cells are easily handled at switching nodes, unlike variable-length frames or packets, which can cause unpredictable delays. ATM is ideal for real-time applications like video-conferencing and voice transmission, as well as data transmission. ATM is currently being implemented by the carriers and is making its way into enterprise networks.
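
The fixed-size cell is what makes ATM's switching predictable, and the segmentation step is simple enough to sketch. The Python fragment below chops an arbitrary payload into standard 53-byte cells (5-byte header plus 48-byte payload); the header is reduced to a stand-in channel identifier rather than the real field layout.

    CELL_PAYLOAD = 48   # bytes of data per cell; the header adds 5 more

    def segment(data, channel_id):
        cells = []
        for i in range(0, len(data), CELL_PAYLOAD):
            chunk = data[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
            header = channel_id.to_bytes(5, "big")   # simplified header
            cells.append(header + chunk)             # always 53 bytes
        return cells

    cells = segment(b"x" * 100, channel_id=42)
    print(len(cells), [len(c) for c in cells])       # 3 [53, 53, 53]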

3. Briefly highlight the major features of FDDI and FDDI-II.

Answer:

FDDI is a fiber-optic cable standard developed by the American National Standards Institute (ANSI) X3T9.5 committee. It operates at 100 Mbits/sec and uses a dual-ring topology that supports 500 nodes over a maximum distance of 100 kilometers (60 miles). Copper-wire connections are supported, but distances are greatly reduced. The dual counter-rotating rings offer redundancy (fault tolerance). If a link fails or the cable is cut, the ring reconfigures itself so it can continue transmitting network traffic.

FDDI is an excellent medium for building backbones. Local area network (LAN) segments attach to the backbone, along with minicomputers, mainframes, and other systems. Small networks that consist of a few LAN segments will probably benefit more from a coaxial Ethernet backbone. Large networks with many LAN segments and heavy traffic produced by high-performance workstations, graphics file transfers, or other internetwork traffic will benefit from FDDI.

Stations attached directly to the FDDI cable have two point-to-point connections with adjacent stations. In the dual-ring configuration, one channel is used for transmission while the other is used as a backup. Some stations, called dual-attached stations (DASs), are attached to both of these rings. Single-attached stations (SASs) are connected through a concentrator that provides connections for multiple SASs. One of the advantages of this configuration is that a failed SAS cannot disrupt the ring. Also, most SASs are user workstations that are shut down often, which would disrupt the ring if they were directly attached to it.

Emerging multimedia and real-time applications have special transmission requirements, based on their time-sensitive nature. Delays in the delivery of packets in a real-time video transmission can make the video appear jerky to the viewer. When some packets are delayed and others arrive on time, the delayed packets are simply dropped. The token-passing nature of FDDI and its variable-length packet structure do not provide the uniform data stream required by live video. These problems are solved through various methods. FDDI now has three transmission modes. The first two modes, asynchronous and synchronous, are available in the original FDDI standard. The third mode, circuit-based, can provide dedicated circuits. This mode is available in the new FDDI-II standard, which requires new adapter cards.

The FDDI-II standard is designed for networks that need to transport real-time video or other information that cannot tolerate delays. FDDI-II requires that all nodes on the FDDI-II network use FDDI-II; otherwise, the network reverts to FDDI. Existing FDDI stations should be attached to their own networks.

FDDI-II uses multiplexing techniques to divide the bandwidth into dedicated circuits that can guarantee the delivery of multimedia traffic. It can create up to 16 separate channels that operate at 6.144 Mbits/sec each, up to a maximum of 99.072 Mbits/sec. The reason for this variation is that bandwidth is allocated to whatever station needs it. Each of these channels can be subdivided further to produce 96 separate 64-Kbits/sec circuits.
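
The channel arithmetic above can be checked directly: 96 circuits of 64 Kbits/sec account exactly for one 6.144 Mbits/sec wideband channel.

    circuits_per_channel = 96
    circuit_kbps = 64
    channel_mbps = circuits_per_channel * circuit_kbps / 1000
    print(channel_mbps)    # 6.144 Mbits/sec per wideband channel
    print(16 * 96)         # 1536 64-Kbits/sec circuits across all 16 channels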

These channels can support asynchronous or isochronous traffic. Regular, timed slots on the ring are allocated for the transmission of data. Prioritized stations use the number of slots they need to deliver their data on time. If slots go unused, they are reallocated immediately to other stations that can use them.

FDDI-II is an emerging standard, but it may not become a widespread networking technology. One reason is that it is incompatible with the existing FDDI design. Another reason is that emerging Asynchronous Transfer Mode (ATM) equipment is more appealing to some as a networking technology for high-traffic and time-dependent traffic loads.

4. Indicate advantages and limitations of the frame relay approach from the perspective of your workplace. Identify some concepts found in frame relay that are also included in ATM.

Answer:

Frame Relay is a packet-oriented communication method for connecting computer systems. It is primarily used for local area network (LAN) interconnection and wide area network (WAN) connections over public or private networks. Most of the public carriers are offering Frame Relay services as a way to set up virtual wide area connections that offer relatively high performance. Frame Relay is a user interface into a wide-area, packet-switching network that typically provides bandwidth in the range of 56 Kbits/sec to 1.544 Mbits/sec, although higher rates are emerging. Frame Relay grew out of the Integrated Services Digital Network (ISDN) interfaces and was proposed as a standard to the Consultative Committee for International Telegraph and Telephone (CCITT) in 1984. The American National Standards Institute (ANSI)-accredited T1S1 standards committee in the United States also did some of the preliminary work on Frame Relay.

Most of the major carriers such as AT&T, MCI, US Sprint, and the Regional Bell Operating Companies (RBOCs) are offering Frame Relay. Connections into a Frame Relay network require a router and a line from the customer site to a carrier's Frame Relay port of entry. This line is often a leased digital line like T1, but the bandwidth required depends on the traffic.

There are two wide area connection methods employed:

a. Private network method, in which each site needs a dedicated (leased) line and an associated router for every other site. With four sites, for example, each site needs three lines, for a total of six dedicated lines and 12 routers (the line-count arithmetic is sketched after this list).

b. Frame Relay method, which is implemented in my workplace, too. In this public method, each site requires only one dedicated (leased) line and associated router into the Frame Relay network. Switching among the other networks is then handled within the Frame Relay network. Packets from multiple users are multiplexed over the line to the Frame Relay network where they are sent to one or more destinations.
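
The line-count arithmetic behind the two methods is worth making explicit. The Python sketch below compares a full private mesh, which needs a leased line for every pair of sites, against Frame Relay's one access line per site; the four-site case reproduces the figures given above.

    def private_mesh_lines(sites):
        return sites * (sites - 1) // 2   # one leased line per pair of sites

    def frame_relay_lines(sites):
        return sites                      # one access line into the network

    for n in (4, 10):
        print(n, "sites:", private_mesh_lines(n), "mesh lines vs.",
              frame_relay_lines(n), "Frame Relay lines")
    # 4 sites: 6 mesh lines vs. 4; 10 sites: 45 mesh lines vs. 10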

A permanent virtual circuit (PVC) is a predefined path through the Frame Relay network that connects two end points. The Frame Relay service provider allocates PVCs as specified by customers between designated sites. These channels remain continuously active and are guaranteed to provide a specified level of service that is negotiated with the customer. Switched virtual circuits were added to the Frame Relay standard in late 1993. Thus, Frame Relay has become a true "fast packet" switching network.

The following management features and services are available on Frame Relay networks:

• Virtual Circuit Status Messages. This service provides communication between the network and the customer. It ensures that the PVCs exist and reports on deleted PVCs.

• Multicasting. This optional service lets one user send frames to multiple destinations.

• Global Addressing. This optional service gives the Frame Relay network LAN-like abilities.

• Simple Flow Control. This optional service provides the XON/XOFF flow control mechanism for devices that require flow control.

5. What is SMDS? How has SMDS affected the development of ATM?

Answer:

Switched Multimegabit Data Service (SMDS) is a local exchange carrier (LEC) service that provides a way to extend local networks over a city-wide area. SMDS was developed by Bellcore and is offered as a service by LECs in many metropolitan areas. Note that SMDS uses the same fixed-size cell relay technology as Asynchronous Transfer Mode (ATM), and carriers are offering SMDS as a service that runs on top of their ATM networks.

As a switching technology, SMDS has advantages over building private networks with dedicated digital lines such as T1. Customers set up one line (of appropriate bandwidth) into the LEC's SMDS network, rather than setting up lines between all the sites that need interconnection. It is a connectionless, cell-based transport service that can provide any-to-any connections between a variety of sites without a call setup or tear-down procedure. This provides the ability to extend LAN-type communication techniques over metropolitan areas. Once information reaches the SMDS switching network, it is directed to any number of sites.

The switching technology of SMDS can provide customers with connection options that accommodate changing business needs. The cost of the service is typically based on a flat monthly fee, and it is ideal for customers that need to switch connections among many sites. Because the local exchange carriers are the primary providers of SMDS, there is often no competition for the service in a given area. However, customers might want to weigh the use of SMDS against other switching services such as Frame Relay.

SMDS is one of the "fast-packet" technologies that leaves error-checking and flow control procedures up to the end nodes. If a packet is missing, the receiving node requests a retransmission. The network itself is not burdened with this type of error checking. While this places more work on end systems, it takes advantage of the fact that modern transmission facilities have few errors.

Like ATM and Frame Relay, SMDS is a "fast packet" technology, and its development is therefore strongly interrelated with the development of the other two.

6. Briefly explain the concept of congestion in relation to ATM. What is the purpose of congestion management?

Answer:

Logical connections in ATM are referred to as virtual channel connections (VCCs). A VCC is analogous to a virtual circuit in X.25 or a Frame Relay logical connection. It is the basic unit of switching in B-ISDN. A VCC is set up between two end users through the network, and a variable-rate, full-duplex flow of fixed-size cells is exchanged over the connection. VCCs are also used for user-network exchange (control signaling) and network-network exchange (network management and routing).

For ATM, a second sublayer of processing has been introduced that deals with the concept of virtual path. A virtual path connection (VPC) is a bundle of VCCs that have the same endpoints. Thus all of the cells flowing over all of the VCCs in a single VPC are switched together.

When an ATM network becomes congested, cells may be arbitrarily discarded (the end nodes are responsible for retransmitting them), or discarded based on customer preference. For example, customers can designate traffic that is not critical to the operation of the business as discard-eligible (DE). Flagging cells with DE is done either by a router or by the (ATM) cell switch. Using DE provides a way to ensure that the most important information makes it through, while less important information is retransmitted when the network is not so busy.
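
The discard-eligible mechanism amounts to a queueing policy: when a switch queue overflows, DE-flagged cells are sacrificed first. The Python fragment below is an invented, simplified model of that policy; real switches implement it in hardware with far more elaborate buffer management.

    def enqueue(queue, cell, limit):
        if len(queue) < limit:
            queue.append(cell)
            return True
        # Congested: evict a discard-eligible cell to make room, if any.
        for i, queued in enumerate(queue):
            if queued["de"]:
                if cell["de"]:
                    return False      # arriving DE cell is dropped instead
                queue[i] = cell       # non-DE arrival displaces a DE cell
                return True
        return False                  # queue full of non-DE cells: drop

    queue = []
    enqueue(queue, {"id": 1, "de": True}, limit=2)
    enqueue(queue, {"id": 2, "de": False}, limit=2)
    print(enqueue(queue, {"id": 3, "de": False}, limit=2))   # True
    print([c["id"] for c in queue])                          # [3, 2]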

The virtual path concept was developed in response to a trend in high-speed networking in which the control cost of the network is becoming an increasingly higher proportion of the overall network cost. The virtual path technique helps contain this cost by grouping connections that share common paths through the network into a single unit. Network management actions can then be applied to a small number of groups of connections instead of a large number of individual connections.

7. Describe criteria you would use to determine the suitability of ATM for accommodating application performance requirements in your workplace.

Answer:

There are a number of different ways to define application performance in my workplace. First of all, JPL - the organization I belong to - is a large one, comprised of numerous groups and projects, each of which can be considered an "application." Each of these applications has its own set of specific performance requirements in terms of computer networking needs. The sought solution is Asynchronous Transfer Mode (ATM) or, more generally, a high-speed, high-capacity, multiprotocol network able to carry many types of data (text, voice, sound, video, graphics, facsimile). The following list is generic enough to encompass most of the existing and envisioned "applications" (projects).

Performance requirements are defined in three areas: delivered capability, the inherent capacity of the installed media (which may be exploited in the future), and capacity planning.

a. Delivered Capability

ATM (or equivalent network technology) shall provide the following initial bandwidth and performance capabilities at the time the system is operational:

i. Support existing 10 Mbits/sec Ethernet and 100 Mbits/sec FDDI networked devices;

ii. Provide dedicated 10 Mbits/sec Ethernet connectivity to all network devices that can utilize a majority of the delivered bandwidth;

iii. Provide dedicated 100 Mbits/sec Ethernet connectivity to network servers and high-speed workstations that can utilize a majority of the delivered bandwidth.

b. Inherent Capacity

ATM (or equivalent network technology) shall provide a network infrastructure with the following capacities that will allow for future growth:

i. A network cable system capable of gigabit delivery to each networked structure at JPL and capable of a minimum of 155 Mbits/sec delivery within each building;

ii. Provide a network architecture capable of supporting Asynchronous Transfer Mode (ATM) switched networks.

c. Capacity Planning

ATM (or equivalent network technology) shall provide a capacity planning system that includes the following:

i. Collect and maintain network usage metrics;

ii. Produce utilization reports;

iii. Provide analysis tools to determine potential network capacity problems;

iv. Provide a capability to model any proposed changes to the system, identify bottlenecks and potential bottlenecks, and forecast the need for network changes.

Other (performance) requirements should include:

a. Ease of setup and installation;

b. Speed (day vs. night);

c. Manageability;

d. Troubleshooting and error recovery;

e. Price.

Task 3

Write a 7 to 8 page report detailing procedures and techniques for implementing a wireless LAN in your workplace. Be sure to include a discussion of the merits and drawbacks of wireless LAN use and specific examples of wireless LAN applications that will be supported. A brief list of references should be included as well.

Answer:

Definitions, Procedures, Techniques

What is a wireless LAN? At its most basic level, it is a collection of transmitting and receiving hardware devices, enabling drivers, and network operating system software. But many see it as an unfamiliar infrastructure that requires new training. In actuality, the details of wireless local-area networking are conceptually simple. And fortunately, few components are needed to implement a working LAN. For the most part, the equipment installation is painless.

Generally, a wireless LAN can be characterized as a data communications system implemented as an extension to or an alternative for a wired LAN. An access point is a transceiver-type device, connecting to the wired network from a fixed location using standard Ethernet cable. An access point can support a small group of users and can operate efficiently within a range of 100 to 800 feet, sometimes more, depending on the system and the environment.

Access to the wireless LAN is through wireless LAN adapters, which are implemented in a PC-card format for laptops or an ISA format for desktop computers. Or they can be fully integrated within handheld computing devices such as bar-code scanners.

Wireless radio-frequency LAN systems come in two popular offerings in the United States: 2.4 GHz and 800 MHz spread-spectrum technologies. According to GIGA, a market research firm, the total installed base of 2.4 GHz and 800 MHz cards was 332,000 in 1995. That number is expected to grow to 2.865 million by 1999. The industry momentum has been toward frequency hopping, spread-spectrum systems for 2.4 GHz wireless LANs.

Radio frequency in-building wireless LAN products are designed around radio technology and protocols using spread-spectrum modulation techniques (direct sequence or frequency hopping). Spread-spectrum transmission technology is used for two reasons: it is largely immune to interference and, in some cases, it allows many users to broadcast on the same frequency.

Spread-spectrum wireless LANs apply either a direct-sequence or frequency-hopping approach to transmission. Frequency hopping continually changes the frequency used to transmit data among separate channels - the transmitter stays on a single frequency for about one-tenth of a second. Direct sequence uses modulation codes to spread the signal over the available spectrum. (A sketch of the frequency-hopping idea follows the component list below.) The main components of the wireless local-area network include:

a. The wireless LAN interface card or adapter. The wireless LAN interface card fits into the expansion slot of your microcomputer, while wireless LAN adapters today typically come in the PCMCIA form factor and fit into the PC card slot of a laptop or personal digital assistant.

b. The access point. The access point is the central element of the wireless LAN, and when employed in a 10Base-T Ethernet network, it is typically seen as just another hub. You will need ODI and NDIS drivers to operate with a network operating system like NetWare or Windows for Workgroups, and NDIS 3.1 for Windows 95. Card and socket services make it simpler to identify and configure PC cards (PCMCIA cards).

c. System-level software. Other software included in the package may be site survey tools or roaming software. There is another level of software on top of the driver software that controls the behavior of the product in areas of roaming and load balancing, but in general roaming is a characteristic of the access point.
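
As promised above, here is a minimal sketch of the frequency-hopping idea: transmitter and receiver derive the same pseudo-random hop sequence from a shared seed, so both change channels in lockstep. The channel count and sequence length are illustrative, not taken from any particular product.

    import random

    def hop_sequence(seed, channels=79, hops=8):
        rng = random.Random(seed)     # shared seed -> identical sequence
        return [rng.randrange(channels) for _ in range(hops)]

    transmitter = hop_sequence(seed=1234)
    receiver = hop_sequence(seed=1234)
    print(transmitter)                # eight channel numbers
    print(transmitter == receiver)    # True: both ends stay in sync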

Products, Markets and Vendors

Today, wireless access points and network interface cards are primarily sold by the same vendor. If you want to mix PC cards from one vendor with access points from another, you might be hard pressed to find a product. The equipment does not interoperate unless you are dealing with a vendor such as Proxim that has a substantial base of OEM versions in the field. But some unique features may have been added to an OEM version, so they are not all 100 percent identical either. Several alliances to develop interoperable and higher-performing products are expected throughout the networking industry before standards stabilize, perhaps by the end of this year.

There is very little difference between managing a small- or medium-sized wireless network and managing its wired equivalent. Although network managers may think that managing disconnected nodes is more difficult, it is actually very similar to handling connected wired nodes. Many of the wireless LAN systems on the market today interface with popular network management systems such as Hewlett-Packard's HP OpenView. With almost any network management system, you can gain the same types of utilization statistics on a wireless node that you would get on a wired node. Data throughput statistics, utilization, and capacity or related capacity measures can be obtained.

Products that do not have a standard SNMP agent can still be monitored and interpreted by a network engineer. An experienced network engineer who has been exposed to a variety of network management software packages and monitoring tools should have no problem.

Site planning is an arduous task by itself, let alone when coping with a new technology with which you are not familiar. Site planning for a wireless LAN involves getting outfitted with site preparation worksheets and site survey software tools that help you assess where wireless coverage is required. Coverage can be needed across one floor of an office building, or three floors, or down only one corridor.

In the case of some equipment, planning is imperative, because it will help determine which antennas to mount and where to mount them. If you find throughput fading, an extra access point or repeater is recommended. The choice of a configuration depends on your environment and application. Some systems come with utilities that determine signal strength and aid in the placement of access points, situating users a certain distance from each other (in the case of two wireless desktops communicating to each other). For a large installation involving over 100 users, a vendor representative will typically offer to do a site survey. But since it is not a highly scientific task, you might take it on yourself and save your company thousands of dollars.

Minimal support is required after the sale of a wireless LAN. But many vendors provide technical support by phone. Some of the persistent problems and issues usually revolve around:

a. Installing PCMCIA card-socket services software. Sometimes the software is installed, but the laptop still won’t detect the card. This happens because some laptops require a customer to manually provide an Interrupt setting for a PCMCIA controller.

b. Network driver configuration problems.

c. Optimum placement of the access points and wireless station adapters.

d. How your specific roaming implementation works.

Applications

Wireless LANs extend computing where wired systems never tread. With its mobility and flexibility, wireless is today's solution for stretching your network into previously "hostile" environments. Indeed, the advances made in wireless technology have overcome all kinds of obstacles - whether they are on the manufacturing floor, trading floor, or office floor. By incorporating wireless PCs and modems into existing wired networks, LANs can reach just about everywhere.

An increasing number of sites with environments hostile to traditional wired networks are now enjoying the advantages of close-at-hand computing power, thanks to the strides made in wireless technology. Factory facilities have been particularly hard to reach with the office LAN. Until recently, it has been impractical to bring computing directly onto large production-line floors because of traffic, floor size, safety concerns, physical obstacles, and environmental conditions - not to mention the overwhelming cost of dealing with these problems. But wireless LANs are changing this. Wireless computing offers manufacturers an economical means of extending their LAN's reach onto the production-line floor. The mobility, the comparatively low cost and quick installation versus cable, and the ability to sidestep many physical limitations and safety issues make wireless technology today's choice in many manufacturing environments.

A number of progressive manufacturers are already reaping the benefits of wireless LANs. Process-control information available immediately at the production line is ensuring that specification tolerances are being met. By gathering statistical operational data, monitoring the manufacturing processes, and initiating machine adjustments on the spot, quality control inspectors are upholding quality standards.

Wireless devices can operate in the most demanding quarters and can work where wired computers can't. With them, we are able to extend the LAN's reach right into the hands of the people operating the production equipment. Using the wireless approach, it is not uncommon to bring the LAN down, so to speak, to the production lines in the manufacturing environment, alleviating the continual running back and forth between the production floor and the supervisors' offices to check drawings, specs, and other job-related data. With the LAN reaching the factory floor, production-line personnel are able to check manufacturing specifications, enter calibration measurements, and view other job data without leaving the shop area.

Wireless Bridges

But going further than across the hall or beyond the manufacturing floor requires an extension to the classical wireless LAN. The solution is a category of products, wireless data bridges, that allow information exchange between LANs approximately one-quarter mile to 25 miles apart (depending on the terrain). With many bridges, it's easy to add sites or capacity, since full-bandwidth links can coexist in the same geographical area. These products are subject to different obstacles than their in-building counterparts, and there are a variety of products to choose from, depending on your specific obstacles. Thousands of U.S. companies are using wireless solutions as an alternative to a T1 link. If you lease a T1 link, it is probably a heavy recurring charge against your budget. Another solution is to purchase a wireless bridging system and treat it as a capital expense, depreciating your investment over time.

In wireless bridging there is a very predictable point-to-point or point-to-multipoint link. It is fairly static: bridge A talks to bridge B. Filtering takes place at both ends, because unnecessary traffic running over precious radio bandwidth would slow the link. Wireless bridging is less difficult than wireless LAN networking, because wireless LAN networking involves intra-building communication and has more variables. You may, at any time, be talking to no users, then to about 50 users at the same time. And these users may move in or out of the coverage area of an access point. You also have to deal with mobile clients that go into sleep mode or are handed over to another access point in another part of the network.

While some companies are using point-to-point links for campus-area networks or metropolitan-area networks, other companies feel more comfortable using them as a backup system. If you are just shooting across the street and don't need the ultrahigh speed that a microwave system would provide, a 1-to-2 Mbps product may be the right choice. Systems integrators have seen wireless bridging installations more than double in the last couple of years. Most of the business has been in placing a wireless bridging product as a primary replacement for a 56 Kbps leased line or a T1.

Market Trends

Wireless local-area networking (LAN) is a burgeoning market. Between 1993 and 1995, wireless LAN industry revenues rose from $66.4 million to $157 million, according to a study by the Yankee Group. However, to date, wireless LAN growth has been largely limited to vertical markets, primarily the retail, financial, education, and healthcare sectors. In these environments, hand-held information-processing equipment, used in tandem with wireless LANs, provides users with real-time access to their networks.

Wireless LANs, traditionally relegated to vertical niche industries, are becoming more flexible and less expensive. The 802.11 standard is almost ready for prime time. Interoperability testing is on the upswing. Will it be enough to allow wireless to make a move into the corporate mainstream?

Slowly but surely, wireless is expanding into new niches. Analysts are forecasting a higher penetration of wireless LANs into the office environment, for example, and continued growth in related markets such as wireless data collection, telemetry and other limited-function wireless LAN environments.

A critical factor driving vertical markets up to this point is cost reduction - both hard-dollar and soft-dollar savings - resulting from the installation of wireless technology. Instead of having to run wire through the building or walls, moving to a wireless LAN approach has made a lot of sense for many companies. Another critical driver is the continued movement toward the mobile computer rather than the desktop computer: network managers are putting in microcellular wireless LANs to allow their staff to move around inside a building, and across a campus in some cases, staying connected to the network while they are moving.

Bibliography

Davis, Peter & McGuffin, Craig. (1995). Wireless Local Area Networks. New York, NY: McGraw-Hill Publishing Company.

Day, Michael. (1992). Enterprise Series: Downsizing to NetWare. Carmel, IN: New Riders Publishing.

Stallings, William. (1990). The Business Guide to Local Area Networks. Carmel, IN: Howard W. Sams & Company.

Task 4

Write a 7- to 8-page report on strategies for implementing the client/server computing paradigm in your work environment. Be sure to include a rationale for client/server deployment and a discussion of projected guidelines. A brief list of references should be included as well.

Answer:

Strategies

One of the major efforts under way in my workplace (JPL) is the so-called "Enterprise Information System." This effort consists of defining an architecture that will provide a unified framework allowing numerous organizations, groups, and projects to share a common development environment for their application needs. One of the many "layers" of technology to be deployed is client/server.

The purpose of this architecture is to recommend an Enterprise Information System for JPL. The heart of this architecture is a set of infrastructure services that enable enterprise-wide, distributed client/server computing. By formally structuring and stating a framework for enterprise systems, it is intended that the critical information components of JPL's core business processes will evolve into easier-to-use, more functional, and more cost-effective systems.

The need for such an architecture statement is apparent on two fronts:

i. A growing awareness of poor interoperability and duplicative efforts on the part of the program offices that fund information systems development, and

ii. The independent conclusions of the major reengineering activities under way at JPL.

Information systems and technology will play critically important roles as new business models and processes emerge at JPL. Shorter life cycles, lower costs, and greater cooperation with industry and academia will require new flexibility, responsiveness, and capabilities from the laboratory's support systems, especially information systems.

A diverse, research- and development-centered environment presents special challenges for the information system architecture. No single-vendor, integrated vertical product suite is likely to meet all mission-critical requirements, even though such a suite would offer simplified deployment, operations, and training. The challenge for JPL is to manage diversity, striking a balance between the richer capabilities available in a heterogeneous environment and the attendant complexity, cost, and opportunities for incompatibility.

The key concept in striking the balance is standards. By understanding and selecting the appropriate open industry specifications, JPL can define a framework within which domain experts can choose superior solutions, while still maintaining a degree of vendor independence and laboratory-wide compatibility. If the standards chosen are not only technically applicable, but also products of an industry consensus process, then the laboratory will ally itself with other major consumers of information technology. This will position JPL within a market vastly larger than its own information technology budget, ensuring greater vendor stability and smoother transitions to new technology.

It is not sufficient, however, to simply anoint standards. The laboratory must enact its recommended practices in information services: institutionally-provided capabilities, chosen for broad applicability, defined by standards, and delivered with a level of commitment that leads to user confidence. This confidence, in turn, will encourage users to rely upon, build upon, and exploit the services. When service exploitation is widespread, costs can be lowered through economies of scale. Of vastly greater benefit, however, is the emergence of common solutions to common problems laboratory-wide, allowing groups to work together in ways not previously imagined.

The Enterprise Information System Architecture recommends that JPL invest in a set of critical infrastructure information system services. These services have been selected by a variety of criteria: applicability to JPL's environment, endorsement by a broad consensus of industry (both vendors and consumers), and maturity of a governing open specification. In some cases (for example, data interchange service), the service may consist simply of commitment, expertise, and software to encourage preferred ways of doing business. In other cases (for example, network service), there is an operational component. That is, the service is actually delivered by the institution to its customers, involving hardware, software, procedures, and staffing.

The recommended infrastructure services are:

a. Network Service. The basic communication service upon which other distributed services are built.

b. Remote Procedure Call Service. Recommended practices, software, and expertise for low-level program-to-program communication (see the sketch following this list).

c. Directory Service. The service by which information system resources are located at runtime. Resources may include servers, databases, printers, people, files, etc.

d. Time Synchronization Service. A largely transparent but necessary method for ensuring that the internal clocks of computers on the network remain synchronized.

e. Security Service. A general mechanism to provide proof of identity for both people and servers, to provide authorization based upon identity for remote operations, and key management services for private- and public-key encryption schemes.

f. Systems Management Service. Technology, policies, procedures, and workforce to provide more cost-effective installation, configuration, operation, and management of distributed systems.

g. File Service. Laboratory-wide file sharing service.

h. Messaging Service. Electronic mail, bulletin board, and real-time notification services.

i. Data Access Service. Institutionally-operated high-quality database services, available for storage and query of institutional data.

j. Windowing Service. Recommended practices, software, and expertise for developing client/server applications in the remote presentation model.

k. Data Interchange Service. Recommended practices, software, and expertise for exchange of mission-critical institutional data, including electronic and mechanical computer-aided design files, documents, and images.

l. Software Development Framework. Recommended practices, tools, libraries, training, and expertise to encourage development of a high-quality information system that conforms to architectural recommendations.
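As one concrete illustration of the Remote Procedure Call Service concept (item b above), the following minimal sketch uses Python's standard-library XML-RPC modules to let a client invoke a server function as if it were local. This is illustrative only; it is not the RPC technology actually selected for the JPL architecture, and the function, host, and port are hypothetical.

    # Server side: expose a trivial "business function" to remote callers.
    from xmlrpc.server import SimpleXMLRPCServer

    def km_to_miles(km):
        # Hypothetical example function offered over RPC.
        return km * 0.621371

    server = SimpleXMLRPCServer(("localhost", 8000))
    server.register_function(km_to_miles, "km_to_miles")
    server.serve_forever()  # blocks, handling remote calls

    # Client side (run in a separate process):
    #   import xmlrpc.client
    #   proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
    #   print(proxy.km_to_miles(42.0))   # -> 26.097...

The point of a directory service (item c) in such a scheme is precisely to spare the client from hard-coding the host and port shown here.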

Client/server is a three-layer computing and architectural paradigm comprising the Presentation, Functionality (Logic/Business), and Data layers. These layers can reside, in any combination of one, two, or all three, on a variety of machines, from a PC to a mainframe. The modern approach is to place the Presentation layer on a less powerful "front-end" machine such as an end user's PC or Macintosh, or even a "dumb terminal"; the Logic/Functionality layer on a "middleware" server-class machine; and the Data layer on a "back-end" mainframe.
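The division of labor among the three layers can be sketched as follows. For illustration the layers are collapsed into a single Python program; in a real deployment each function would live on a different machine, with RPC or SQL traffic between them. All names and data below are hypothetical.

    # Data layer: would normally reside on a back-end database host.
    STATUS_DB = {"craft-1": "cruise", "craft-2": "orbit"}  # stand-in for a DBMS

    def fetch_status(craft_id):
        return STATUS_DB.get(craft_id, "unknown")

    # Logic/business layer: would normally run on a middleware server.
    def status_report(craft_id):
        status = fetch_status(craft_id)
        return f"Spacecraft {craft_id} is in {status} phase."

    # Presentation layer: would normally run on the end user's PC or terminal.
    if __name__ == "__main__":
        print(status_report("craft-1"))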

Benefits

Specifically, the client/server component of this overall proposed architecture will provide a scalable development and operational framework that responds to top-level requirements and offers benefits such as the following:

a. Provide flexible application topologies.

b. Provide portable applications.

c. Facilitate version control.

d. Provide load balancing and cross-over facilities.

e. Create compiled code on the server.

f. Work with a variety of resource managers.

g. Work with a variety of middleware.

h. Support team development and a group repository approach.

Cost-effective distributed computing requires a base of enabling infrastructure services. Implementing infrastructure services is not in itself a new concept for JPL, but the scope of the proposed services and the degree of interdependence among them will dictate new approaches to implementation, and new priorities about how precious resources are allocated. JPL's future depends on meeting the challenge.

The challenges facing enterprise architects, systems designers, project managers and Information Systems (IS) executives as they move ahead into enterprise client/server are far-reaching, dynamic and formidable.

One of the key requirements for an enterprise client/server architecture deployment is to make a case for an integrated Third Generation Language/Fourth Generation Language (3GL/4GL) environment. Most of the existing applications are written in 3GLs and are being labeled "legacy" applications. The 4GLs are maturing rapidly, with quite a bit of R&D money being thrown at them. New 4GL functionality and capabilities, such as compiled (vs. interpreted) executables, are beginning to blur the distinction between 4GL and 3GL. Object orientation (OO) looms large in the next version of ANSI COBOL, for instance. However, debate over OO's proper use, syntax, and semantics has delayed an industry-wide ANSI-standard OO COBOL dialect for years. The other major 3GL in wide use here at JPL is FORTRAN, and the issues mentioned above for COBOL are relevant to FORTRAN as well. FORTRAN 90, the successor to FORTRAN 77 (the last major industry standard), incorporates some of the capabilities found in OO languages such as C++ or Smalltalk. However, the maturity and proper use of these capabilities are the real issues at stake.

Some people like change and equate new with improved. Others distrust change and believe the new is merely new (as in trendy), while better (improved) is held to a higher technical and financial (measurable cost/benefit) standard. The degree to which individuals and organizations are ready to embrace technological change is personal. Since 1985, change has dominated the IS marketplace, and the rate of change is accelerating toward warp speed in the current IS market.

One of the challenges posed by a new architecture's requirements is the evaluation, selection, and deployment of the needed client/server tools. Here at JPL, this effort is no exception. The following is a list of associated issues uncovered during this effort:

a. New technologies force companies to modify existing application development methodologies.

b. New methodologies force companies to purchase new products, which force companies to add new operating systems and hardware platforms.

c. New GUI and visual development tools are introduced, and the role of the GUI (Graphical User Interface) products in use changes.

d. New GUI and DBMS (Database Management Systems) packages force data conversion and migrations.

e. Database sizes grow, forcing customers to scale up to different DBMSs, or different operating system platforms.

f. The number of concurrent users for an application grows, forcing application processing to scale (inevitably up), sometimes forcing rewrites in faster, more efficient languages.

g. Application functionality grows in size, scope, and complexity, causing replatforming and redeployment from desktop systems to network-hosted, and from network-hosted to mainframe-hosted, in order to meet minimum performance levels.

h. Many applications created using yesterday's proprietary software solutions get abandoned, rewritten, rehosted, converted, or relegated to production-support-only status.

i. New languages and methodologies force rewrites and retraining.

j. New products - even product upgrades - dictate the choice of operating systems and platforms.

And all of this is change for technology's sake. Lost somewhere in the above list is change due to the fundamental business revolution that has been taking place for the last 10 years, which includes globalization, personnel rightsizing, and increased competition. The rate of change within a shop's Information Technology (IT) architecture, including communications, tools, languages, and platforms, has accelerated: from years to months, and from months to weeks. Systems integration is no longer a one-time job for outside consultants and vendors. When attempted at the enterprise level, systems integration is an ongoing and critical component of client/server success, and must be picked up internally by technicians intimately familiar with the company's network, communications, hardware, and software IT infrastructure.

And yet in spite of this epidemic of change and commercial IS revolution, less is fundamentally different than might appear from the headlines in the trade journals. Much to everyone's surprise, independent industry analysts, based on Fortune 500 surveys taken in mid-1995, report that over 75 percent of new applications are still being developed (not maintained...developed!) in 3GLs - COBOL at 63 percent and C at 14 percent, for business applications only; for engineering applications the overwhelming proportion is probably held by FORTRAN (Gartner Group, Application Development and Management Survey, August 25, 1995). The Gartner Group estimates that over 80 percent of production data is still in non-relational DBMSs. Surprisingly, this comes as no revelation to many IS technical personnel (programmers, DBAs, etc.). The people most often shocked by this information are CIOs, poll takers, and vested industry spokespersons who use statistics like these to further admonish and intimidate IS to "hurry up," to "stop missing the boat." The question of which boat is being missed leaves little to the imagination.

The department I work in at JPL provides a wide range of navigation-related products and services to flight projects and TMOD (the Telecommunications and Mission Operations Directorate). Its primary responsibilities include:

1) performance of navigation operations for unmanned spacecraft,

2) new mission analysis,

3) solar system modeling, and

4) navigation-related research and development.

All work is done using a variety of mainly home-grown computer applications and is supported by a large, dynamic multi-platform computer network.

Central to the department's mission is navigation operations. The process begins with the conditioning and packaging of the real-time data that flows from the spacecraft through the Deep Space Network (DSN) to the local network of workstations. Navigators then use the data to perform maneuver analysis, trajectory analysis, and orbit determination for the spacecraft. This analysis, along with computer data files, is delivered electronically to external flight project and TMOD customers as well as internal customers. Both inputs and outputs for the process are highly standardized. Operations are strictly scheduled and controlled, with personnel frequently under severe time constraints. In recent years, the work has become increasingly automated.
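The flow just described - condition the incoming tracking data, analyze it, deliver the products - is essentially a pipeline, and a minimal sketch of its shape is given below. The stage names, data values, and customer name are entirely hypothetical; the real navigation software is, as noted below, a large body of FORTRAN 77 code.

    # Illustrative pipeline only: condition -> analyze -> deliver.
    def condition(raw_samples):
        # Clean and package the real-time tracking data from the DSN.
        return [s for s in raw_samples if s is not None]

    def orbit_determination(samples):
        # Stand-in for the real estimation process: a trivial average.
        return sum(samples) / len(samples)

    def deliver(product, customer):
        # Electronic delivery to a flight project or TMOD customer.
        print(f"Delivering solution {product:.3f} to {customer}")

    raw = [101.2, None, 99.8, 100.5]   # hypothetical samples
    deliver(orbit_determination(condition(raw)), "flight project X")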

The work of the department is supported by a complex software infrastructure. The infrastructure is comprised of the operational software set, mission analysis software set, solar system modeling software, software tools to facilitate software development, and proof of concept software that supports research activities. It consists primarily of FORTRAN 77 code.

In recent years, this infrastructure has come under attack. Although portable and functionally correct, the aging software is for the most part brittle, inflexible, and user-unfriendly. Its effectiveness is hindered by a high level of functional redundancy and the lack of coordination among the groups developing software. Further, the unfriendliness of the software actively decreases the reliability of the system as well as substantially increasing the costs associated with it. Operators are more prone to errors, and errors, when they occur, are harder and more costly to detect. Recently, a software architect was hired to evaluate the system and coordinate the development of effective, long-term solutions to these problems.

Guidelines

Client/server is a key component of the overall architecture being developed. A number of top-level requirements were developed for evaluating the proposed client/server architecture tools:

a. How does the tool support interapplication communication?

b. How object-oriented is the tool and does it matter?

c. How are RAD (Rapid Application Development) and formal analysis and design supported?

d. What degree of support is there for version control, configuration management, workgroup development, and software distribution?

e. What execution architectures are supported?

f. How is business/engineering logic specified and generated?

g. How are the Internet and mainframe processing supported?

So, in general and in my organization per se, the challenges facing client/server architects, systems designers, project managers, and IS executives - as they attempt to sort out their own unique business or scientific application paradigm from the vast amount of technology running in their organizations, and to discriminate among the chorus wanting to run their organizations - are at unprecedented levels. There are two camps in most shops these days: the client/server militia, consisting of C/C++/4GL programmers, and the mainframe militia, consisting of COBOL and FORTRAN programmers. Polarization and intolerance paralyze this re-architecting effort to move toward a client/server computing environment.

Bibliography

Sayles, J. (1996). "Client/Server Architecture". Data Management Review.

Kara, D. (1996). "Enterprise Application Development Tool Selection - A Review". Software Productivity Group.

Orfali, R., Harkey, D., & Edwards, J. (1994). Essential Client/Server Survival Guide. New York, NY: John Wiley & Sons, Inc.