Telecommunications - a Q&A topical approach - Part IV:
Networking in Action

Task 1

1. Highlight the purpose of NP (Network Performance) and Quality of Service (QoS) measurement in relation to ATM.

Answer:

Network Performance (NP) and Quality of Service (QoS) are part of a larger "umbrella" called Network Management (NM). Network performance is tightly related to Network Traffic Control. As the load on a network increases, a region of mild congestion is reached, where the queuing delays at the nodes result in increased end-to-end delay and reduced capability to provide desired throughput. When a point of severe congestion is reached, the classic queuing response results in dramatic growth in delays and a collapse in throughput. It is clear that these catastrophic events must be avoided, which is the task of congestion control. The object of all congestion control techniques is to limit queue lengths at the frame handlers so as to avoid throughput collapse.

There are three general traffic control mechanisms: Flow Control, Congestion Control, and Deadlock Avoidance. Flow Control is concerned with the regulation of the rate of data transmission between two points. The basic purpose of flow control is to enable the receiver to control the rate at which it receives data, so that it is not overwhelmed. Typically, flow control is exercised with some sort of sliding-window technique.
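To make the sliding-window idea concrete, here is a minimal Python sketch (illustrative only; the window size, class name, and cumulative-acknowledgment scheme are assumptions, not part of any particular protocol). The receiver's acknowledgments open the window, so the sender can never have more unacknowledged frames outstanding than the receiver has agreed to absorb.

```python
# Minimal sliding-window flow control sketch (illustrative, not a real protocol stack).
# The receiver grants a window of `window` frames; the sender may have at most that
# many unacknowledged frames outstanding, which caps the rate the receiver must absorb.

class SlidingWindowSender:
    def __init__(self, window=4):
        self.window = window          # credits granted by the receiver
        self.next_seq = 0             # next sequence number to send
        self.base = 0                 # oldest unacknowledged sequence number

    def can_send(self):
        return self.next_seq - self.base < self.window

    def send(self):
        if not self.can_send():
            raise RuntimeError("window closed - wait for acknowledgments")
        seq = self.next_seq
        self.next_seq += 1
        return seq                    # frame with this sequence number goes on the wire

    def ack(self, ack_seq):
        # Cumulative acknowledgment: everything up to ack_seq is confirmed,
        # which slides the window forward and lets new frames be sent.
        self.base = max(self.base, ack_seq + 1)


sender = SlidingWindowSender(window=4)
for _ in range(4):
    sender.send()                     # window is now full
print(sender.can_send())              # False - the receiver controls the pace
sender.ack(1)                         # frames 0 and 1 acknowledged
print(sender.can_send())              # True - two new credits available
```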

A quite different type of traffic control is referred to as Congestion Control. The objective here is to maintain the number of packets within the network below the level at which performance falls off dramatically. A problem equally serious to that of congestion is Deadlock, a condition in which a set of nodes are unable to forward packets because no buffers are available. This condition can occur even without a heavy load. Deadlock avoidance techniques are used to design the network in such a way that deadlock cannot occur.

The ATM standard promises to offer quality of service (QoS) connections from end to end. Although many ATM product vendors promise this remarkable feature, few have delivered on their promises. Recently, there has been much talk about similar protocols over Ethernet, which may have some people wondering why they should bother to invest in ATM. The Resource Reservation Protocol (RSVP) and Real-time Transport Protocol (RTP) let applications request an amount of bandwidth to be set aside for a given application. As traffic starts to flow on a frame-based network, data is queued in a first-in, first-out (FIFO) buffer. Since the frame size of Ethernet can range from 64 bytes to about 1,500 bytes, there is no guarantee that the rate of delivery will be consistent. If two bandwidth-reserved frames come into an Ethernet switch at the same time, the one entering the buffer first will be transmitted first. Because the whole frame must be transmitted, the second stream has to be buffered behind it.

Contrast this technology with a cell-based switch. With 53-byte ATM cells, multiple priority queues can be set up to allow different qualities of traffic to be balanced equally while maintaining a constant rate of flow. Because the queues can be serviced independently, individual cells can be serviced alternately, allowing the ATM switch to maintain a true QoS to both destinations.

With Ethernet, the entire frame must be sent before the second queue can be serviced. Thus, though bandwidth has been reserved, true quality of service is lost because there is no fixed frame size for a given packet.
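A rough calculation makes the contrast concrete. The sketch below uses assumed figures (a 155 Mbps link, a 1,500-byte maximum Ethernet frame, 53-byte cells) to compare how long a second reserved stream waits behind the first one in a single FIFO frame queue versus per-connection cell queues served in round-robin fashion.

```python
# Illustrative comparison (not a product simulation): serialization delay seen by a
# second reserved stream when it arrives just behind the first one.
LINK_BPS = 155_000_000        # assumed 155 Mbps link
FRAME_BYTES = 1_500           # a maximum-size Ethernet frame
CELL_BYTES = 53               # a fixed-size ATM cell

def tx_time(nbytes, bps=LINK_BPS):
    return nbytes * 8 / bps   # seconds needed to serialize nbytes onto the link

# FIFO frame queue: the whole 1,500-byte frame of stream A goes first,
# so stream B waits for the entire frame.
fifo_wait = tx_time(FRAME_BYTES)

# Per-connection cell queues served round-robin: stream B only waits for
# one 53-byte cell of stream A before its own first cell is sent.
cell_wait = tx_time(CELL_BYTES)

print(f"FIFO frame queue : stream B waits {fifo_wait * 1e6:6.1f} microseconds")
print(f"Cell round-robin : stream B waits {cell_wait * 1e6:6.1f} microseconds")
```

Even in this toy model the second stream's wait drops by more than an order of magnitude, which is the essence of the QoS argument above.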

Quality of Service over ATM is broken up into several different classes. Rather than attempting to grab a certain amount of bandwidth, ATM devices can negotiate for the best-available rate. As network dynamics change, the device can renegotiate its connection for more or less bandwidth, as necessary. These two factors give ATM a clear edge over Ethernet QoS protocols.

2. Discuss the key factors that contribute to the difficulty of achieving effective network management in your work environment. What is a possible solution?

Answer:

Network management can mean different things to different users, or to users who need to manage multiple aspects of their networks. These days network management is accomplished through network management packages. On one hand, a network management package can mean a product that lets you view and control the most intimate details of your network operating system. On the other hand, it could refer to controlling the hardware and software suite on the workstations attached to your network, letting you check for unauthorized software and viruses, and even distribute new software.

Clearly, for a large enterprise, both functions are important. Managers need to keep minute-by-minute tabs on their servers to avoid conditions that could lead to mission-critical resources becoming unavailable. Of course, managers also need to know what users are up to, so they can make sure users keep working with the latest software while avoiding the wrong software and the kinds of mistakes that could lead to loss of mission-critical data.

It’s a complex job to manage complex networks, and that’s why it’s important to have products that do a lot to simplify things. Adding to the complication for network managers, network management packages/products are, in general, unnecessarily difficult to install, in many cases for the same common reasons. Of course, it’s not surprising that network management software is more complex to install and run than some other types of products. After all, these packages are tightly integrated into the network operating system, they must accurately report a number of important and sometimes critical pieces of information, and they must do so in a way that lets one person monitor many clients and servers. Because you only perform it once, a complex install is not the definitive measure of these products; accuracy and usefulness are. Incorrect information and inaccurate documentation should be avoided.
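As a very small illustration of that monitoring role, the sketch below polls a list of hosts for reachability. The host names and ports are hypothetical, and a real network management package would rely on SNMP agents and an event console rather than bare TCP probes.

```python
# Minimal availability check across many hosts (sketch only; real network
# management products use SNMP, agents, and consoles rather than this).
import socket

# Hypothetical hosts and service ports to watch.
TARGETS = [("server1.example.com", 445), ("server2.example.com", 80)]

def is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in TARGETS:
    state = "up" if is_reachable(host, port) else "DOWN - investigate"
    print(f"{host}:{port} is {state}")
```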

3. Indicate steps for instituting a disaster management plan in your workplace.

Answer:

What is a disaster? According to industry standards, a disaster is defined as: "Any unplanned, extended loss of critical business applications due to lack of computer processing capabilities for more than a 48-hour period."

However, the above definition needs some fine-tuning. How many businesses today suffer severe losses well before 48 hours have passed without computer processing capabilities? Many companies who, a few short years ago, operated on the premise that recovery within 48 hours was okay have since narrowed that window to 24 hours, or to 12 hours, or, in some cases, to 12 minutes.

Today, the price of losing computing or telecommunications capabilities is so high that disaster for some users is measured in minutes. Take the case of the brokerage firm that was highly dependent on its automated voice equipment to channel trades and other customer services, but had no contingency plans in place should this automated voice system ever fail. Some rough calculations revealed the cost of losing that system:

• A net loss of up to $4 million a day;

• A loss of 40% of its customer base within 24 hours; and

• A loss of 80% of its customer base within 5 days.

In other words, this brokerage would have been forced to close its doors permanently after losing its voice capabilities for only a relatively short time.

It's probably tempting to think that damage on that scale could be caused only by some cataclysmic event. Unfortunately, disasters come in some pretty ordinary shapes and sizes. On the bright side, however, options for disaster recovery are as varied as the disasters themselves.

Here is a list of possible disasters: acts of God, air conditioning failure, arson, blackouts, blizzards, boiler explosions, bomb threats, bridge collapse, brownouts, brush fires, chemical accidents, civil disobedience, communications failure, computer crime, corrosive materials, disgruntled employees, dust storms, earthquakes, embezzlement, explosions, extortion, falling objects, fires, floods, hardware crash, high crime area, high winds, heating or cooling failure, hostage situations, human error, hurricanes, ice storms, interruption of building services, kidnapping, labor disputes, lightning strikes, malicious destruction, military operations, mismanagement, mud slides, personnel non-availability, plane crashes, power outage, public demonstration, quirky software, radiological accidents, railroad accidents, sabotage, same office building accidents, sewage backups, snow storms, software failure, sprinkler failure, telephone failure, theft of data or computer time, thunderstorms, tornadoes, transportation problems, vandalism, viruses, water damage, xenon gas leaks, yellow fever outbreak, Zombie, attack of the..., etc.

Most of us have probably been around the industry long enough to have seen a list of disasters that can strike computer operations. For years, such lists were confined to disasters that might strike the traditional glass house, the home of the mainframe, which, of course, was the heart of each large computing environment.

Mainframes still are the heart of nearly all computing environments, but critical computing and telecommunications operations, along with the list of potential disasters, have expanded well beyond the glass house. Now, let's consider a few of the changing faces of disaster you might not have thought of:

1. The Technology Assimilation Gap: This is a concept developed by Andersen Consulting which identifies the problems created by technologies that evolve faster than we can assimilate the changes.

2. Multiple Platforms: As recently as a few years ago, it would have been sufficient in the event of a disaster to fly your tapes to a hotsite, commandeer a 600J and restart your operations. You probably can't do that anymore. Today, operations typically have several platforms to recover—everything from mainframes to distributed processors like AS/400, Sun, HPs, to servers, to PCs, to telecommunications systems. It's harder than ever to keep all the balls in the air.

3. Security Breaches are Escalating: Now that most companies are increasing their exposure to security risks through such windows as distributed computing and Internet use, the number and cost of security breaches (already more than $300 million by 1992) is likely to continue to spiral upward.

Guarding against these and others, along with the more traditional forms of disaster, presents a formidable challenge. But it's a challenge you can't afford to ignore. In fact, the price of losing computing capabilities is so high that disaster is measured in minutes.

The cost of disaster recovery planning is a key element to consider when planning for this important activity. This is something like: "Can't Live With It, Can't Live Without It." What would the loss of your computing and/or telecommunications capabilities cost your organization? What are your organization's key business functions and the computer systems that support them (e.g., order entry, accounts receivable, payroll, accounts payable, inventory)? Next, assume that you are no longer able to use those systems because of an unplanned event (e.g., fire, water damage, power outage). Then, estimate the financial impact this loss would have on your business based on how long your systems would be out—one hour, 12 hours, one day, two days, and so on.

Most people tend to regard the cost of a disaster as equal to only the cost of replacing the damaged or malfunctioning equipment. Thus, if a PC disk crashes, the cost of the "disaster" is merely the cost of replacing the defective drive—probably no more than a few hundred dollars. But there are collateral costs: if a disk crashes, you may also lose the data; you will certainly lose the user's productivity for whatever time it takes to replace the disk; you may lose revenues; and you may hurt your internal or, worse, your external customer satisfaction.

Those are the consequences of a "disaster" at a single workstation, so you can begin to see how system-wide computer or telecommunications downtime can snowball into tens of thousands of dollars or more in a single hour. Losses of that magnitude add up quickly, and even very large businesses that lose their computer capabilities for more than a week may go out of business altogether.
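A back-of-the-envelope model of these costs can be sketched as follows; every figure and parameter name is a placeholder to be replaced with your own estimates from the impact analysis.

```python
# Rough downtime-cost sketch: direct replacement cost plus the collateral costs
# described above (lost productivity, lost revenue, customer-satisfaction impact).
# All figures are placeholders, not data from the text.

def downtime_cost(hours_down,
                  replacement_cost=300.0,        # e.g., a new disk drive
                  users_affected=1,
                  hourly_cost_per_user=50.0,     # lost productivity per user-hour
                  hourly_revenue_at_risk=0.0,    # revenue that stops while systems are down
                  goodwill_penalty=0.0):         # hard-to-quantify customer impact
    collateral = hours_down * (users_affected * hourly_cost_per_user
                               + hourly_revenue_at_risk)
    return replacement_cost + collateral + goodwill_penalty

# A single workstation disk crash vs. a system-wide, one-hour outage.
print(f"Single PC, 4 hours: ${downtime_cost(4):,.0f}")
print(f"Whole site, 1 hour: ${downtime_cost(1, replacement_cost=0, users_affected=500, hourly_revenue_at_risk=20_000):,.0f}")
```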

So, this disaster preparedness self-evaluation may be a fairly sobering exercise, perhaps even more so if you consider such key risk areas as lost sales, lost revenues from fees and collections, loss of customers or future business; increased expenses from lost discounts, additional personnel, temporary employees, overtime, and data and records recreation; fines and penalties from regulatory agencies, unions, and vendor contract obligations; and adverse publicity. Obviously, there's quite a bit at stake.

In general, companies can expect to spend roughly 2% of their IT budget on disaster recovery—auditing, planning, testing, etc. And while 2% of your budget doesn't seem like much to ensure the survival of your entire business, there's no question that 2% is real money, no matter how big or small your budget may be. The expenditure is big enough on its own, but in recent years it has been magnified by the trend in corporations to downsize, to focus on core competencies, and to trace everything to the bottom line. In fact, companies that use Shareholder Value Added (the practice that tests every dollar spent for maximum return) have been given higher earnings multiples by investors. In this environment, with its intense focus on the bottom line, the Chief Financial Officer and Chief Information Officer can be your biggest allies. And that is because, better than anyone else, they understand the full ramifications of the decision: Should we spend money on disaster recovery or leave our business exposed?

Years ago, the entire disaster recovery procedure was reduced to just a backup procedure. Today, the story is much different. There are a number of backup options to choose from—some vendors are getting much more responsive to the market's complex, customized, and rapidly changing needs. Today, customers can choose from a variety of arrangements, among them:

1. Internal reciprocal agreements between departments or data centers

2. External reciprocal backup agreements

3. Facilities purchase at time of disaster

4. Service bureaus

5. Redundant standby facilities

6. Commercial hotsite, warmsite, or coldsite

7. "Quick shipment" of critical equipment

8. Mobile site

9. Replacement equipment.

Top commercial disaster recovery vendors will offer a variety of options. In a "hotsite" environment, for example, you get pre-installed computers, raised flooring, air-conditioning, telecommunications equipment, networking equipment, technical support, and uninterruptible power supplies. These vendors provide a total solution—one-stop shopping, so to speak. As your needs grow and change, these vendors will be prepared to grow and change to meet those needs.

If you prefer to handle the hardware arrangements, you can also reserve "coldsite" space—computer-ready rooms that come with pre-installed wiring and raised floors. A few vendors also offer mobile recovery units, which get their name because they are ... mobile. These are hotsites on wheels and can drive to your site—or wherever you designate. And when they arrive, they're already set up with your custom, preconfigured computer system, an independent power source, office equipment, technical support, and telecommunications equipment.

Leading recovery providers also offer "business recovery centers"—ready-made workspace with telecommunications equipment, LANs, PCs, and terminals for work groups to connect to a distant hotsite. These can save transportation and lodging costs for large numbers of staff.

Today, you not only have significant choices for how you recover from a disaster, but also for how you prepare to recover from one: Specifically, how you back up your data. In the relatively short history of disaster recovery, users at one time had the single option of backing up their data to tapes and then shipping those tapes to an off-site storage company for safe-keeping. However, end-of-day backups are no longer sufficiently current for many users.

Several years ago, the recovery industry began offering electronic vaulting services—recording transactions at a remote site at intervals ranging from just a few seconds to a few hours. This technique offered backups that were closer to the point of failure and therefore offered faster access and faster recovery. But the technique was also relatively expensive, putting it out of the reach of most users. That, like many other things in the industry, is changing, particularly with the rise of the Internet, where companies doing business today use distributed computing resources and storage. There are several factors driving the increase in electronic vaulting: recovery windows are getting smaller, telecommunications costs are going down, and electronic data transfer technology is improving.

If you can't bear to lose even a few hours of transactions, if you need instant recovery, then you need what is called a "mirrored" database. Mirroring is another form of electronic vaulting, but in this case it provides virtually instant recovery because a system's CPU writes simultaneously to two databases. Until recently, mirroring required two CPUs that were both at the same site—or at least they had to be very close to each other. If one piece of hardware failed, the mirrored system would have your transactions right up to the point of failure; you simply switched from your primary CPU to your mirror CPU. This was ideal for hardware failures, but useless in the case of a regional disaster. Even mirroring wouldn't help in the case of, say, a hurricane that blocked access to an entire area. Both CPUs would be affected by the same disaster.
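The core of the mirroring idea can be sketched in a few lines. Here two in-memory dictionaries stand in for the primary and mirror databases, so this is purely illustrative; the class and key names are invented.

```python
# Sketch of mirroring: every write goes to both a primary and a mirror store, so if
# the primary is lost the mirror is current up to the point of failure. Note that
# co-located copies share the same regional-disaster exposure described above.

class MirroredStore:
    def __init__(self):
        self.primary = {}
        self.mirror = {}
        self.primary_available = True

    def write(self, key, value):
        # The CPU writes to both copies as part of the same operation.
        self.primary[key] = value
        self.mirror[key] = value

    def read(self, key):
        # If the primary has failed, switch to the mirror transparently.
        source = self.primary if self.primary_available else self.mirror
        return source[key]


store = MirroredStore()
store.write("trade-1001", {"symbol": "XYZ", "qty": 100})
store.primary_available = False          # simulate a primary hardware failure
print(store.read("trade-1001"))          # still recoverable from the mirror
```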

There are thousands of considerations, large and small, and the degree of thoroughness with which you address them will determine the ultimate success of your plan. In summary, here are the (major) steps to be taken:

Step #1: Where do you start?

You can start by taking the lay of the land. Conduct a business impact analysis; develop your "what if?" scenario. An in-depth business impact analysis will help you pinpoint the areas that would suffer the greatest financial or operational pain in the event of a disaster.

Step #2: Securing Senior Management Commitment

The reality is that until the person sitting in the corner office says "go," you're not going to go anywhere with your disaster recovery planning. In fact, the number one agenda item for disaster recovery coordinators is securing management's commitment to a robust and realistic business continuity plan.

Step #3: Define your MARC—Minimum Acceptable Recovery Configuration—which applies to computer equipment, communications support, furniture, fixtures, the whole shooting match. At the end of the day, your plan has to be just three things: Complete, Comprehensive, and Current.

Disaster recovery vendors are adapting to the needs of their customers. To relieve the cost burdens of travel and staff time, vendors are developing remote testing solutions for their customers. Today, testing options include:

1. Hotsite

2. Mobile Data Center

3. Remote Site

4. Remote Testing

5. Turnkey Testing.

4. Describe unique security risks associated with wireless LANs and some methods for effectively counteracting these vulnerabilities.

Answer:

A few years ago, companies could protect their sensitive data by simply posting a guard at the door of their computer room. Personal computers toppled this solution, however, when suddenly every employee had one on his or her desk. The next step came with LANs, linking all the PCs together, and client/server systems, linking PCs and LANs to information stored on corporate mainframes. For the most part, however, data didn't leave the building, and IS managers concerned with data security only needed to focus on vulnerabilities associated with dial-up connections.

Security headaches really began when corporations began connecting their systems to the Internet. Untraceable, and often undetectable, attacks and system penetrations were now possible from anywhere in the world. Many companies quickly implemented firewalls to control the data entering and exiting their systems.

Now wireless technologies, offering employees "anywhere, anytime" access to corporate systems, are upping the security ante. In some people's minds, security concerns are still holding them back from using wireless and mobile communications. With the right systems in place, however, wireless technologies can be as safe and secure as wireline.

The three pillars of information security are confidentiality, integrity and availability: confidentiality, in the sense of protecting sensitive, valuable and private data; integrity, in the sense of making sure that data isn't lost, corrupted or altered; and availability in the sense that data can be accessed when it is needed.

Confidentiality is probably what is compromised the most. That is, people see things they shouldn't. Much of the time, however, confidentiality is compromised for information that is only mildly sensitive, such as people finding out the salaries of their co-workers. This doesn't compare with breaches that are less frequent but much more devastating, such as corporate spies gaining access to a company's research and development information.

Fortunately, technologies such as encryption can assure confidentiality and integrity. Encryption not only enables companies to safeguard their data, but it also gives them the ability to verify the source of any data transmission and to verify that the data hasn't changed in transit.
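The source-verification and tamper-detection part of that claim can be illustrated with a keyed message authentication code from the Python standard library. This is a sketch only: the shared key is a placeholder, and confidentiality would additionally require encrypting the payload.

```python
# Sketch of integrity and source verification using a keyed MAC (HMAC-SHA256).
# Both ends hold SHARED_KEY; a valid tag shows the message came from a key holder
# and was not altered in transit. Confidentiality would need encryption on top.
import hmac
import hashlib

SHARED_KEY = b"replace-with-a-real-secret"   # distributed out of band to both ends

def protect(message: bytes) -> tuple[bytes, bytes]:
    """Return (message, tag); the tag proves origin and that the data is unmodified."""
    tag = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return message, tag

def verify(message: bytes, tag: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)   # constant-time comparison

msg, tag = protect(b"PO-4711: ship 200 units")
print(verify(msg, tag))                         # True - intact and from the key holder
print(verify(msg + b" (altered)", tag))         # False - tampering is detected
```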

Availability, however, especially where wireless is concerned, is a different story. As soon as we develop technology that's useful, we rely on it. We immediately start to rely on the information being there, and we need to be aware of the implications if, for some reason, it suddenly isn't there. If companies implement a wireless system and come to depend on wireless's ability to make data instantly accessible, then even a delay of ten minutes could cause problems.

Wireless data availability can be compromised when users wander out of coverage, or by interference such as weather, RF pollution, or the physical configuration of a workplace. Malicious jamming, or "denial of service," of wireless transmission is also a possible danger. To protect against problems like these, companies and their system integrators need to perform comprehensive site surveys before implementing wireless technology.

Wireless systems should be designed with a backup mode of data transfer. It might not be as convenient as a wireless connection, but if, for example, a sales representative needs to get a critical order to the home office right away, a backup dial-in connection is better than a broken airlink.

Also, applications running in a wireless environment should be "wireless aware"; that is, applications designed to work with relatively stable dial-up connections should be re-designed to work with less-stable wireless connections. There has to be a mechanism that will store the current state of a user's session and allow the user to resume from that state. Users shouldn't be forced to log in to a system and restart applications if the airlink happens to break.
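One way to picture such a mechanism is a simple session checkpoint that is saved as work progresses and reloaded after a dropped airlink. The state fields and file name below are illustrative assumptions, not part of any particular product.

```python
# Sketch of a "wireless aware" session: the application checkpoints its state so a
# dropped airlink resumes where it left off instead of forcing a fresh login.
import json
import pathlib

CHECKPOINT = pathlib.Path("session_state.json")

def save_state(state: dict) -> None:
    CHECKPOINT.write_text(json.dumps(state))      # persist before/while transmitting

def resume_state(default: dict) -> dict:
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text()) # pick up after a dropped connection
    return default

state = resume_state({"order_id": None, "items_sent": 0})
state.update(order_id="SO-1009", items_sent=3)
save_state(state)                                 # the airlink drops here...
print(resume_state({}))                           # ...and the session resumes at item 3
```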

Since information security is an ongoing concern for corporations, there is no such thing as a one-time security solution. The following suggestions should be kept in mind during all current and future rollouts of information technology.

1. Perform risk analysis. Continually take stock of your company's information, especially information that your employees can access wirelessly or via the Internet. What is it worth to your organization? What would happen if it were corrupted or lost? What would happen if it were unavailable for a period of time? What would happen if your competitors had this information? Balance the answers to these questions against the cost of protecting your data, and develop different levels of protection for different levels of data sensitivity (a rough way to frame this comparison is sketched after this list).

2. Make sure your systems are physically secure. You can put in place the strongest encryption technology and systems with built-in security features, but if machines with sensitive information can be simply stolen, such as laptops or unguarded desktops, or directly accessed by unauthorized users, they are not truly secure.

3. Write strong, enforceable security policies, and make sure users agree to them. Budget for security training. Users should know exactly what is and is not an acceptable use of company resources, and they should sign an agreement that states that they have read your company's security policies. For example, users should know that there is some information that should never be discussed over standard, easy to eavesdrop on, cellular phones.

4. Develop security systems and procedures that are as seamless as possible. Part of the reason companies put technology in place is to increase productivity and efficiency. If your security procedures are awkward and time consuming, users may simply ignore them.

5. Be aware of potential interference issues. When using wireless technology in fixed environments, perform a site survey to determine dead spots and possible causes of interference. In mobile environments, take into account the possibility of interference factors such as weather, buildings, RF pollution, etc.

6. Consider backup methods for transferring data. If, for whatever reason, users can't make a wireless connection to the corporate system, what other methods can they use to transfer data? Is a dial-up or some alternate connection method available?

7. Make sure your software is "wireless aware." In some situations, the airlink between a host and a mobile wireless device can drop much more often than a dial-up connection. Use software that both re-establishes dropped connections and returns the user to the same state they were in when the connection was dropped.
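Here is the rough comparison referred to in suggestion 1: the expected annual loss for each class of information weighed against the cost of protecting it. Every asset name and figure is a placeholder for your own analysis, and real inputs would come from the business impact work described earlier.

```python
# Sketch of balancing information value against protection cost. All figures and
# asset names are placeholders.

assets = [
    # (name, estimated loss per incident, incidents expected per year, yearly protection cost)
    ("R&D design data",       2_000_000, 0.05, 40_000),
    ("Employee salary data",     50_000, 0.50, 10_000),
    ("Public price lists",        1_000, 1.00,  5_000),
]

for name, loss, rate, protection in assets:
    expected_loss = loss * rate                  # annualized loss expectancy
    decision = "protect" if expected_loss > protection else "accept the risk"
    print(f"{name:22s} expected loss ${expected_loss:>9,.0f}  "
          f"protection ${protection:>7,.0f}  -> {decision}")
```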

Task 2

Write a 5 to 7 page report delineating strategies for developing a security program for safeguarding network services and applications in your workplace. Be sure to indicate a discussion of active and passive security threats that should be addressed and a brief list of references.

Answer:

Abstract: With the rapid growth in the use of computerized information systems, it has become evident that computerized information requires protection like all other organizational assets. The problems related to information security are difficult to resolve, since most information systems are normally built to facilitate data dissemination rather than to limit it. Nowadays more and more information systems are built as distributed information systems in distributed environments, so the security issue becomes important. In this paper we discuss security measures and the information system security plan. Finally, we give an example of how to plan for network security. This is only one part of the security planning phase; in this phase, the plan should also include physical security plans and disaster recovery plans. We do not discuss the security implementation phase and other issues.

What is a distributed system?

Historically, the relative balance between distributed and centralized information processing has depended partly on the goals of those needing the information and partly on the technology available to implement those goals. When technology is available to serve those needing centralization, the balance tends to shift in that direction. When technology becomes available to serve decentralized goals, the balance shifts in that direction. With current technologies, it makes good economic sense to distribute data collection, storage, and processing toward the end users. This trend also supports a growing demand by users for autonomy, creativity, and decision-making power. A distributed system is best viewed as a hybrid between centralized and decentralized systems.

A completely centralized information system handles all processing at a single computer site; maintains a single central data base; has centralized development of applications; has central provision of technical services; sets development priorities centrally; and allocates computer resources centrally. The system's remote users are served by transporting input and output data physically or electronically.

A completely decentralized system has no central control of system development; no communication links among autonomous computing units; and standalone processors and data bases in various sites. Each unit funds its own information processing activities and is totally responsible for all development and operation.

The common definition of a distributed system implies the spread of hardware and data to multiple sites around an organization, interconnected by a communication network. A wide variety of distributed systems vary along two dimensions: (1) How many activities are distributed? and (2) What is the degree of distribution for each distributed activity? (A completely centralized activity is one that is wholly undertaken by a central information system. A completely decentralized activity is one that is wholly undertaken by a user unit. A distributed activity is one that is somewhere on the spectrum between these two extremes.)

Causes of Risk

Risks to computer systems can stem from various sources, such as nature, human beings, or technology. We will now discuss some of the main sources of risk.

Natural disasters can happen anytime and almost anywhere. These include fire, flood, earthquake, and the like. Any of these might damage computer centers, communication lines, and data files, or they can hurt key persons without whom it is difficult to operate the systems.

Humans can also generate risks, inadvertently or deliberately. For instance, employees can go on strikes; programmers can overlook bugs; keying-in operators can falsify monetary transactions; competitors can hook up to a communication line.

Technology is a risk factor in itself. A new piece of hardware can be unreliable; a new software package can fail to work properly; communication lines can be of low quality.

Sometimes the organization that uses the systems generates risks by imposing incorrect procedures or by not imposing appropriate procedures.

Risks can be internal — posed by the organization itself (employees, buildings, procedures); or risks can be external — caused by outsiders (criminals, industrial espionage, computer hackers). Risks can be physical and tangible, such as damages to computer sites, or destruction of files. However, risks can be intangible, or at least hard to quantify, such as loss of goodwill, or loss of confidential data.

Where Do We Search for Risks

A comprehensive survey to identify potential risks is necessary. The survey should include the following areas:

1. Personnel. Procedures of hiring, screening, promotion, firing, and resignation; training requirement; quality and qualifications of employees.

2. Physical environment. Location of buildings, neighbors, topographical and weather conditions, power supply, access to the building and to the computer floor.

3. Hardware and software systems. Maintenance agreements, reliability, vendor support, documentation, and backup.

4. Communications. Reliability, backup, security, verification, and control procedures.

5. Operations. Backup procedures, data security, input/output controls, process controls, and administrative controls.

6. Contracts. With vendors as well as with customers.

7. Laws and regulations. Privacy acts, national security, etc.

Security Measures

Very similar to control measures, security measures are also composed of administrative, technical, and human elements. Prerequisites to security arrangements are the appointment of an information security officer and the preparation of a security plan.

Information Security Officer

Similar to an EDP auditor, the security officer is responsible for a number of issues: preparing an organizational security plan for information systems; implementing the plan; and controlling the daily activation of the security measures. We will turn now to the security plan and to some prevailing measures.

Information System Security Plan

Information System Security plan is an overall approach to security issues. The plan should comprise the following sections:

1. Objectives and goals

2. Identification of potential risks and assessment of their possible damages

3. Proposal of a set of measures and provisions to reduce or eliminate the risks

4. Economic justification of the proposed measures

5. Steps of implementation, including timetable and resources required

6. Provisions for maintaining the security measures after they have been established.

The plan is submitted to top management for approval. Management has to realize that approval of the plan entails allocation of resources for implementation and maintenance.

Resources include personnel, hardware and software features, training facilities, and the like.

It is recommended that, in addition to appointing a security officer, management form a permanent committee to govern security issues.

Information Security System

An information security system is a software package that deals with various security problems. It performs a number of functions; the major ones are described here.

1. Identification. The system should be able to identify each terminal by its location; it should be able to identify a communications line that transmits or receives data; it should be able to identify each user by matching codes to passwords.

2. Authorization. The system should determine whether a certain request is allowed when it comes from a certain terminal, on a certain line, by a certain user, giving a certain password, and referring to a certain file or data item.

3. Data security. The system should control the distribution of outputs and ensure that reports are not diverted to wrong locations. It should protect files, maintain catalogs, and delete obsolete data. Advanced systems are capable of performing ciphering and deciphering if so required.

4. Communications security. The system should be able to disconnect any line whenever it suspects that the line is being abused. It should be able to verify lines and terminals. Above all, it should log all the activities initiated by end users.

5. Assistance to the security officer. The system has to provide information to the security officer in terms of exceptional activities, reports on classified processes, statistics, and the like. Moreover, the system should activate alarm calls whenever there is a repetitive illegal attempt to intrude.

Security systems are available in the software market. They are very useful for computer centers based on large mainframes and widely used communications. They are less commonly used in the microcomputer environments.
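As a toy illustration of the identification and authorization functions above, the sketch below allows a request only when the terminal, line, user, password, and requested resource all match an access rule. The tables, names, and plain-text password store are invented for the example; a real package would use hashed credentials and audit logging.

```python
# Sketch of identification (user/password) plus authorization (terminal, line,
# user, resource) checks, as described in functions 1 and 2 above.

ACCESS_RULES = {
    # (terminal, line, user): set of resources that user may touch from that location
    ("TERM-07", "LINE-2", "jsmith"): {"payroll-report", "inventory-file"},
}
PASSWORDS = {"jsmith": "s3cret"}   # illustrative only; a real system stores hashes

def authorize(terminal, line, user, password, resource):
    if PASSWORDS.get(user) != password:                 # identification fails
        return False
    allowed = ACCESS_RULES.get((terminal, line, user), set())
    return resource in allowed                          # authorization decision

print(authorize("TERM-07", "LINE-2", "jsmith", "s3cret", "payroll-report"))  # True
print(authorize("TERM-99", "LINE-2", "jsmith", "s3cret", "payroll-report"))  # False
```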

Contingency Plan

A cornerstone in security arrangements is the contingency plan. A contingency plan deals with the following issues:

1. Emergency plan. The plan tells exactly what should be done in case of emergency, such as fire, flood, or terrorist activity. The plan should detail the files that are to be evacuated, the rooms and floors that should be secured, and who is to do what.

2. Backup plan. Unlike the emergency plan, which is rarely activated, the backup plan is continuously drilled. It deals with routine arrangements made for an emergency situation. These include file backups, computer backup, and training of backup personnel. If the security officer fails to monitor the daily execution of the backup plan, the other plans will not help when they are needed.

3. Recovery plan. The execution of the recovery plan follows the emergency plan. Once files have been saved and backup equipment is available, it is time to restructure the system. However, very often the backup installation lacks the required capacity in terms of computing power, communications lines, etc. The recovery plan sets priorities and determines the sequence of recovery activities.

Note that all the components of the contingency plan should be drilled from time to time, and they have to be modified in accordance with changes in the organization and the environment. If the plan is shelved and not tested periodically, it is most likely that it will not be adequate when needed.

Network Security

Hosts attached to a network, particularly the worldwide Internet, are exposed to a wider range of security threats than unconnected hosts. Network security reduces the risks of connecting to a network. But by their natures, network access and computer security work at cross purposes. A network is a data highway designed to increase access to computer systems, while security is designed to control access. Providing network security is a balancing act between open access and security.

The highway analogy is very appropriate. Like a highway, the network provides equal access for all — welcome visitors as well as unwelcome intruders. At home, you provide security for your possessions by locking your house, not by blocking the streets. Likewise, network security generally means providing adequate security on individual host computers, not providing security directly on the network.

In very small towns, where people know each other, doors are often left unlocked. But in big cities, doors have dead-bolts and chains. In the last decade, the Internet has grown from a small town of a few thousand users to a big city of millions of users. Just as the anonymity of a big city turns neighbors into strangers, the growth of the Internet has reduced the level of trust between network neighbors. The ever increasing need for computer security is an unfortunate side effect. Growth, however, is not all bad. In the same way that a big city offers more choices and more services, the expanded network provides increased services. For most of us, security consciousness is a small price to pay for network access.

Network break-ins have increased as the network has grown and become more impersonal, but it is easy to exaggerate the real extent of these break-ins. Overreacting to the threat of break-ins may hinder the way you use the network. Don't make the cure worse than the disease. The best advice about network security is to use common sense. (Common sense is the most appropriate tool that can be used to establish your security policy. Elaborate security schemes and mechanisms are impressive, and they do have their place, yet there is little point in investing money and time on an elaborate implementation scheme if the simple controls are forgotten.)

Network Security Planning

One of the most important network security tasks, and probably one of the least enjoyable, is developing a network security policy. Most computer people want a technical solution to every problem. We want to find a program that "fixes" the network security problem. Few of us want to write a paper on network security policies and procedures. However, a well thought-out security plan will help you decide what needs to be protected, how much you are willing to invest in protecting it, and who will be responsible for carrying out the steps to protect it.

Assessing the Threat

The first step toward developing an effective network security plan is to assess the threat that connection presents to your systems. There are three distinct types of security threats usually associated with network connectivity:

1. Unauthorized access — a break-in by an unauthorized person.

For some organizations, break-ins are an embarrassment that can undermine the confidence that others have in the organization, and intruders tend to target government and academic organizations that will be the most embarrassed by the break-in. But for most organizations, unauthorized access is not a major problem unless it involves one of the other threats: disclosure of information or denial of services.

2. Disclosure of information — any problem that causes the disclosure of valuable or sensitive information to people who should not have access to the information.

Assessing the threat of information disclosure depends on the type of information that could be compromised. No system with highly classified information should ever be directly connected to the Internet, but there are other types of sensitive information which do not necessarily prohibit connecting the system to a network. Personnel information, medical information, corporate plans, credit records — all of these things have a certain type of sensitivity and must be protected.

3. Denial of service — any problem that makes it difficult or impossible for the system to continue to perform productive work.

Denial of service can be a severe problem if it impacts many users or a major mission of your organization. Some systems can be connected to the network with little concern. The benefit of connecting individual workstations and small servers to the Internet generally outweighs the chance of having service interrupted for the individuals and small groups served by these systems. Other systems may be vital to the survival of your organization. The threat of losing the services of a mission-critical system must be evaluated seriously before connecting such a system to the network.

Network threats are not the only threats to computer security, or the only reasons for denial of service. Natural disasters and internal threats (threats from people who have legitimate access to the system) are also serious. More computer time has probably been lost because of fires than has ever been lost because of network security problems. Similarly, more data has probably been improperly disclosed by authorized users than by unauthorized break-ins. Here we emphasize network security, but network security is only part of a larger security plan that includes physical security and disaster recovery plans.

Distributed Control

One approach to network security is to distribute responsibility for, and control over, segments of a large network to small groups within the organization. This approach involves a large number of people in security. Distributing responsibility and control to small groups can create an environment of small networks composed of trusted hosts. Using the analogy of small towns and big cities, it is similar to creating neighborhoods that reduce risk by giving people a connection with their neighbors, mutual responsibility for one another, and control over their own fates.

Additionally, distributing security responsibilities formally recognizes one of the realities of network security — most security actions take place on individual systems. The managers of these systems must know that they are responsible for security, and that their contribution to network security is recognized and appreciated.

For example, we can use subnets to distribute control. Subnets are a powerful tool for distributing network control. A subnet administrator should be appointed when a subnet is created. The administrator is then responsible for the security of the network and is empowered to assign IP addresses for the devices connected to it. Assigning IP addresses gives the subnet administrator some control over who connects to the subnet. It also helps to ensure that the subnet administrator knows each system that is assigned an address and who is responsible for that system. When the subnet administrator assigns a system an IP address, he also assigns certain security responsibilities to the system's administrator. Likewise, when the system administrator grants a user an account, the user is assigned certain security responsibilities.
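A small sketch of this kind of delegation, using Python's standard ipaddress module; the corporate block, the per-department prefix length, and the owner names are all assumptions made for illustration.

```python
# Sketch of distributing control with subnets: the network administrator carves a
# corporate block into per-department subnets, and each subnet administrator assigns
# addresses (and, with them, security responsibility) within that subnet only.
import ipaddress

corporate = ipaddress.ip_network("10.10.0.0/16")          # assumed corporate block
subnets = list(corporate.subnets(new_prefix=24))          # one /24 per department

engineering = subnets[0]
assignments = {}                                          # host -> responsible party
for host, owner in zip(engineering.hosts(), ["alice-ws", "build-server"]):
    assignments[str(host)] = owner                        # record who answers for it

print(f"Engineering subnet: {engineering}")
print(assignments)
```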

The hierarchy of responsibility flows from the network administrator, to the subnet administrator, to the system administrator, and finally to the user. At each point in this hierarchy the individuals are given responsibilities and the power to carry them out. To support this structure, it is important for users to know what they are responsible for, and how to carry out that responsibility.

Writing a Security Policy

Security is largely a "people problem." People, not computers, are responsible for implementing security procedures, and people are responsible when security is breached.

Therefore, network security is ineffective unless people know their responsibilities. It is important to write a security policy that clearly states what is expected, and who it is expected from. A network security policy should define:

1. The network user's security responsibilities

The policy may require users to change their passwords at certain intervals, to use passwords that meet certain guidelines, or to perform certain checks to see if their accounts have been accessed by someone else. Whatever is expected from users, it is important that it be clearly defined (a simple example of checking a password guideline is sketched after this list).

2. The system administrator's security responsibilities

The policy may require that specific security measures, login banner messages, and monitoring and accounting procedures, be used on every host. It might list applications that should not be run on any host attached to the network.

3. The proper use of network resources

Define who can use network resources, what things they can do, and what things they should not do. If your organization takes the position that e-mail, files, and histories of computer activity are subject to security monitoring, tell the users very clearly that this is the policy.

4. The actions taken when a security problem is detected

What should be done when a security problem is detected? Who should be notified? It is easy to overlook things during a crisis, so you should have a detailed list of the exact steps that a system administrator, or user, should take when a security breach has been detected. This could be as simple as telling the users to "touch nothing, and call the network security officer." But even these simple actions should be in the policy so that they are readily available.
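The sketch below, referred to in item 1, shows one way a host could enforce a password guideline at password-change time. The specific rules are examples only, not guidelines taken from the text.

```python
# Sketch of a password-guideline check that every host could apply when a user
# changes a password. The rules (length, character classes) are illustrative.
import string

def meets_policy(password: str, min_length: int = 10) -> bool:
    """True if the password satisfies the illustrative guidelines below."""
    has_upper = any(c in string.ascii_uppercase for c in password)
    has_lower = any(c in string.ascii_lowercase for c in password)
    has_digit = any(c in string.digits for c in password)
    return len(password) >= min_length and has_upper and has_lower and has_digit

print(meets_policy("changeme"))        # False - too short, no digits or capitals
print(meets_policy("Blue7-Harbor42"))  # True under these example rules
```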

A great deal of thought is necessary to produce a complete network security policy. The outline shown above describes the contents of a network security policy document, but if you are personally responsible for writing a policy, you may want more detailed guidance.

Security planning (assessing the threat, assigning security responsibilities, and writing a security policy) is the basic building block of network security, but a plan must be implemented before it can have any effect. In this paper we discuss only the security planning phase, not the security implementation phase.

 

No system is completely secure. No matter what you do, you will still have problems. Realize this and prepare for it. Prepare a disaster recovery plan and do everything necessary, so that when the worst does happen, you can recover from it with the minimum possible disruption.

References:

1. Niv Ahituv, Seev Neumann, and H. Norton Riley. (1994). Principles of Information Systems for Management. 4th ed. Business and Educational Technologies

2. Craig Hunt, TCP/IP Network Administration, O'Reilly & Associates, Inc.

3. Simson Garfinkel and Gene Spafford, Practical UNIX Security, O'Reilly & Associates, Inc.

4. P. Holbrook and J. Reynolds, eds. (July 1991). RFC 1244, Site Security Handbook.

5. R. Pethia, S. Crocker and B. Fraser, (November 1991). RFC 1281, Guidelines for the Secure Operation of the Internet.

6. David A. Curry. (April 1990). Improving the Security of Your UNIX System, SRI International, Information and Telecommunications Science and Technology Division.

Task 3

Prepare a 15 to 18 page report describing procedures, techniques, strategies, and guidelines for introducing an ATM networking configuration into your organization. Be sure to include a list of references.

This report should address the following topic areas:

1. Introduction. The introduction indicates the scope of the project, a description of your organization's goals and objectives, and the demographics of the user community.

Answer:

A unique need exists in my company, JPL, for multimedia applications. Multimedia applications such as collaborative computing are driven by the desire for higher productivity and the need to consolidate and manage resources scattered among different locations. Videoconferencing is finally becoming a useful tool to tie together widely dispersed workgroups, provided the conference is conducted from workers' desktops rather than a large conference room. Companies are harnessing the power of multimedia to communicate with customers more effectively.

ATM will play a pivotal role as the enabling networking technology for those emerging multimedia applications. Existing Ethernet and token-ring networks are great for carrying large, bursty data files between servers and clients. But real-time multimedia applications require a continuous and predictable traffic flow across the network to ensure a smooth delivery of information. Time delay is critical for most video applications.

Although developments are underway to allow IP-based networks to reserve bandwidth more effectively to support delay-sensitive communications, ATM remains the only networking technology that can support real-time video and voice, delivering the required quality of service for each. ATM provides the added advantage of aggregating data traffic along with video and voice on one homogeneous, bandwidth-on-demand network, without degrading the smooth delivery of the video and voice.

Since 1991, technical enthusiasts have viewed ATM (asynchronous transfer mode) as a networking panacea. Vendors and service providers anxiously awaited the opportunity to profit from ATM mass-market adoption. What has happened in those five years, and where does ATM stand today? Were expectations too high? Are we on the verge of another technology disaster, or merely moving along in the natural evolution of a technology as it matures through its life cycle?

To get some perspective on this, let's compare the frame-relay adoption curve to ATM's. Technology enthusiasts have hyped ATM. Visionaries are now implementing ATM in key industry segments. The mainstream market is watching, although skeptically. Alternative technologies that protect legacy systems, such as gigabit Ethernet and IP switching, are receiving attention. Industry prognosticators are downsizing their forecasts. Given all of the above, it is time to reset expectations and renew the vision for ATM. Much of what's happening is the typical market acceptance process. But ATM is a bigger technological phenomenon and is more complex and potentially more pervasive than frame relay. Correspondingly, there are greater technical and business challenges to moving along the market adoption curve.

2. Needs assessment considerations. Needs assessment considerations can include internetworking requirements, applications requirements, user requirements, and budget.

Answer:

Some multimedia applications already have been adapted to an ATM network infrastructure. These include desktop-video collaboration, distance learning, news and entertainment video distribution (video-on-demand), multimedia kiosks, and medical imaging.

Desktop-video collaboration (DVC) is the fastest growing multimedia application. It includes application sharing and videoconferencing. DVC significantly improves worker productivity by streamlining the decision-making process, thereby reducing time to market for new products and services. Face-to-face meetings by key decision makers, which today often require transoceanic air travel and considerable downtime, are being replaced by high-quality videoconferences from desktop to desktop.

Delivering multimedia applications to employees' desktops, however, is a big networking challenge. Today's computer networks have been optimized for bursty data traffic. Multimedia applications, however, rely upon continuously flowing streams of compressed digital audio and video information. The streaming nature of multimedia applications is at odds with the contention schemes employed by today's computer networks. When the network becomes busy, everything slows down, creating bottlenecks for real-time audio and video.

Distance learning and remote classroom applications extend desktop videoconferencing over a metropolitan or wide-area network. An instructor in one location can teach classrooms of students in remote locations. Typically, the classrooms are equipped with two-way communications so the lessons can be interactive. High-quality video images, low-delay, and point-to-multipoint capabilities are required. In addition, a video archive is often required to replay recorded classes for students who were unable to attend the live class.

Broadcast-quality video delivery bridges the gap between content providers and consumers of professional video programming (such as television stations and cable operators). Video-content providers can upload their video programs directly into the video-information provider's (VIP) video archiver, opening up a vast market of prospective customers for the content provider. A VIP network can provide a single, convenient source of digital broadcast video to multi-service operators (MSOs), replacing a multitude of other sources of video information such as satellite and microwave feeds, couriers, and express mail delivery.

The VIP network can deliver broadcast-quality video. It gives television programmers and news organizations unprecedented control, speed, and economy in obtaining video programming such as late-breaking news stories or live news coverage from around the world. Producers and programmers can search the VIP's video archiver using key words and preview the video program before deciding to download it. Downloading video programming can even occur in real time, going directly into an MSO's head-end in the time slot allocated for the program.

What makes addressing the ATM question in monolithic terms so difficult is the fact that ATM has potential in so many different markets. These include carriers and CATV multiservice operators (MSOs), Internet service providers (ISPs), and enterprise networks. For the carrier/MSO market, the new rules of competition (i.e., global trends toward privatization/liberalization and telecom reform) are a potent business driver. For the ISP market, today's business driver is solving the bandwidth bottlenecks occurring in the WAN-backbone infrastructure. For enterprise network users, supporting the corporate, strategic competitive advantage is the business driver. Many companies, including the company I work for right now, JPL, have well-advertised success stories using ATM to support both multimedia (e.g., videoconferencing and CAD/CAM) and high-throughput data applications. These visionary companies are redefining their respective industries with business applications enabled by ATM. Other enterprise network users are being driven to ATM solely to solve the bandwidth bottleneck in their campus backbones.

ATM protagonists have been barraged by criticism of the technology since its beginning. The challenge is to address these criticisms, and provide effective solutions. Although ATM provides the best price/performance when looking at the cost per bit per second, product costs must be lowered. ATM adopters are making significant gains in cost reduction and, as a result, ATM desktop solutions are now approaching the costs of switched Ethernet.

There is still room for improvement, however. ATM services must have reasonable tariffing. Carriers are reluctant to commit to public pricing until ATM is functioning in their networks and they have gained experience with customers in the ATM environment. Mainstream users must be convinced that ATM produces cost and performance benefits. Users must be assured of guaranteed interoperability of ATM products and services and they must understand the migration path.

Underlying these business challenges are the technical challenges—interoperability, interworking with legacy systems, and backward compatibility. ATM Forum initiatives are making the public aware of ATM successes, ATM specifications are helping to guarantee interoperability and simplify migration paths, and ATM solutions are becoming more widespread and less expensive.

3. Network design. A discussion of ATM design can include network architecture, network elements, interconnection methods, internetworking devices, interfaces, performance issues, and public and private services that will be used to build the new network infrastructure.

Answer:

Implementation of video applications requires great networking speed. The network bandwidth requirements are an order of magnitude greater than for data alone. For example, a real-time, full-screen, full-motion color MPEG2 video stream requires from 4 to 60 Mbps of network bandwidth, depending on the required quality level. This clearly exceeds the available bandwidth on most existing shared-media networks. For non-real-time image applications, digital video files can be in the 10- to 20-Mbyte range, or as high as 100 Mbytes per minute of playback time, depending on the compression scheme, window and frame size, and bit depth.
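
To make these figures concrete, the short Python sketch below converts a video bit rate into megabytes of storage per minute of playback; the sample rates are illustrative values drawn from the ranges quoted above, and the comparison with 10 Mbps shared Ethernet is only a rough sanity check.

    # Rough sizing sketch for the figures quoted above (illustrative only).
    # Storage per minute of playback = bit rate (Mbps) * 60 s / 8 bits per byte.

    def mbytes_per_minute(bit_rate_mbps: float) -> float:
        """Megabytes consumed by one minute of video at the given bit rate."""
        return bit_rate_mbps * 60 / 8

    for rate in (1.5, 4.0, 15.0, 60.0):   # sample MPEG1/MPEG2 rates from the text
        print(f"{rate:5.1f} Mbps  ->  {mbytes_per_minute(rate):6.1f} Mbytes per minute")

    # A 15 Mbps stream alone exceeds the usable capacity of shared 10 Mbps Ethernet,
    # which is why shared-media LANs cannot carry full-motion MPEG2 video.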

One of the most important factors affecting the quality of video delivery is end-to-end or absolute delay. While delays greater than 200 milliseconds are annoying, a delay greater than 400 msec is intolerable. Since most video coders/decoders (codecs) require between 50 and 250 msec to perform compression and decompression, the wide-area connection must introduce less than a 50 msec end-to-end delay to be considered usable for real-time video applications.

In addition to absolute delay, delay variation or "jitter" is also important in determining reliability in delivering real-time video. Jitter not only interferes with visual quality, but can also contribute to an irritating lack of synchronization between audio and video streams. While delay variations in excess of 500 microsec are considered annoying, variations in excess of 650 microsec are intolerable. A "video-enabled" network must deliver a continuous stream of data that arrives at the destination at a fixed rate, even as the network becomes heavily loaded with multiple users and video streams. Currently, only ATM offers the bandwidth guarantee and quality of service (QoS) required for real-time video applications.
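
As a minimal illustration of how these thresholds combine into a budget, the sketch below checks a candidate path against the delay and jitter limits quoted above; the codec, WAN, and jitter figures in the example call are assumed values, not measurements.

    # Sketch of a real-time video delay/jitter budget check, using the thresholds
    # quoted in the text: end-to-end delay above 400 ms (or 200 ms for comfort)
    # and delay variation above 500-650 microseconds are problematic.

    def video_path_ok(codec_delay_ms: float, wan_delay_ms: float, jitter_us: float) -> bool:
        """Return True if the path is usable for real-time video under the stated limits."""
        end_to_end_ms = codec_delay_ms + wan_delay_ms
        if end_to_end_ms > 400:         # intolerable absolute delay
            return False
        if jitter_us > 650:             # intolerable delay variation
            return False
        if end_to_end_ms > 200 or jitter_us > 500:
            print("warning: delay or jitter in the 'annoying' range")
        return True

    # Illustrative check: a 250 ms codec plus a 45 ms WAN with 300 us of jitter
    # passes, although the 295 ms total already falls in the 'annoying' range.
    print(video_path_ok(codec_delay_ms=250, wan_delay_ms=45, jitter_us=300))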

ATM is emerging as the networking technology of choice for multimedia applications because it can support high-bandwidth communications with low latency. In addition, ATM offers the capacity to statistically multiplex available bandwidth from one stream to another and from one address to another. Although great progress has been made in recent years in the area of video compression standards, there are still many proprietary video-encoding techniques on the market. For example, many LAN-based videoconferencing applications use proprietary techniques that have been optimized for LAN operation and are not well suited to wide-area connections. MPEG2 is emerging as the leading standard for high-quality video applications such as video broadcasting and video entertainment distribution. The MPEG video/audio compression standards are defined by the ISO. While MPEG1 delivers VHS quality in 1.5 to 2.0 Mbps, MPEG2 can deliver up to theater quality using from 4 to 60 Mbps. The MPEG2 encoding process requires sophisticated and expensive technology (an encoder can cost $50,000) and introduces a delay of hundreds of milliseconds. As a result, it is unlikely that MPEG2 will be used for desktop videoconferencing in the near future.

To address the requirements of real-time video transmission over ATM, the ATM Forum has endorsed using the real-time AAL5 variable bit rate (VBR) class of service, even though an MPEG2 stream actually consists of fixed, 188-byte packets running at a constant bit rate (CBR). Real-time VBR was chosen over CBR because MPEG2 already carries its own time base, the Program Clock Reference, in its transport stream and does not need the time stamp provided by the AAL1 CBR class of service. Furthermore, AAL5 has lower overhead than AAL1: the full 48 bytes of payload in each cell are usable.
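
As a concrete illustration of that efficiency, the sketch below computes how many cells one AAL5 PDU occupies when it carries 188-byte MPEG2 transport packets, assuming an 8-byte AAL5 trailer and padding up to a 48-byte cell boundary; packing two transport packets per PDU, a common choice in video-over-ATM work, yields exactly eight cells with no padding.

    import math

    AAL5_TRAILER = 8      # bytes of AAL5 trailer appended to each PDU
    CELL_PAYLOAD = 48     # usable payload bytes in each ATM cell
    TS_PACKET = 188       # bytes in one MPEG2 transport-stream packet

    def aal5_cells(ts_packets_per_pdu: int):
        """Return (cells needed, padding bytes) for one AAL5 PDU carrying the
        given number of MPEG2 transport packets."""
        pdu = ts_packets_per_pdu * TS_PACKET + AAL5_TRAILER
        cells = math.ceil(pdu / CELL_PAYLOAD)
        padding = cells * CELL_PAYLOAD - pdu
        return cells, padding

    # Two 188-byte transport packets per PDU: 2*188 + 8 = 384 bytes, which fills
    # exactly 8 cells of 48-byte payload with zero padding.
    print(aal5_cells(2))   # -> (8, 0)
    print(aal5_cells(1))   # -> (5, 44)  one packet per PDU wastes 44 padding bytes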

Using real-time VBR network connections, the proper amount of ATM network bandwidth can be allocated using QoS parameters to ensure reliable transmission of each video connection. Using real-time VBR, jitter and latency can be explicitly specified per user contract for each video connection, along with the allowable cell-loss and cell-error rates. Although video is very sensitive to jitter and latency, it is more tolerant of cell loss and errors since the decoding process can easily compensate for lost picture frames.
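
A per-connection contract of this kind might be represented as in the sketch below; the field names and the example numbers are assumptions made for illustration, not parameters taken from an ATM signaling standard.

    from dataclasses import dataclass

    @dataclass
    class RtVbrContract:
        """Illustrative per-connection contract for a real-time VBR video stream.
        Field names are for this sketch only; they mirror the QoS parameters
        named in the text (bandwidth, latency, jitter, cell loss, cell errors)."""
        peak_cell_rate: int          # cells per second
        sustainable_cell_rate: int   # cells per second
        max_burst_size: int          # cells
        max_ctd_ms: float            # maximum cell transfer delay (latency bound)
        max_cdv_us: float            # cell delay variation bound (jitter)
        cell_loss_ratio: float       # acceptable fraction of lost cells
        cell_error_ratio: float      # acceptable fraction of errored cells

    # Illustrative contract for a 6 Mbps MPEG2 stream (6e6 / 8 / 48 ~ 15,625 cells/s).
    mpeg2_contract = RtVbrContract(
        peak_cell_rate=20_000, sustainable_cell_rate=15_625, max_burst_size=200,
        max_ctd_ms=40.0, max_cdv_us=500.0, cell_loss_ratio=1e-6, cell_error_ratio=1e-6)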

JPEG, a predecessor of MPEG, is an International Telecommunication Union (ITU) encoding standard for still pictures that was adapted to handle video, resulting in Motion-JPEG. A Motion-JPEG video stream consists of a sequence of JPEG pictures at a rate of 24 or 30 frames per second. At these rates any picture is very much like the one immediately preceding or following it, but because each frame is compressed independently this similarity goes unexploited, producing less than optimal compression; a Motion-JPEG video stream occupies from 10 to 20 Mbps. MPEG, on the other hand, performs interframe compression by computing some frames from other frames, reducing the amount of information that needs to be sent.
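
The following toy sketch illustrates the interframe idea (it is not MPEG itself): when consecutive frames are nearly identical, coding the difference from the previous frame leaves mostly zeros, which compress far better than a second full picture.

    # Toy illustration of interframe coding: consecutive frames are nearly
    # identical, so sending the difference from the previous frame (plus
    # occasional full "key" frames) carries far fewer non-zero values than
    # sending every frame in full, as Motion-JPEG does.

    frame_a = [10, 10, 10, 12, 200, 200, 11, 10]   # pixel row from frame N
    frame_b = [10, 10, 10, 12, 201, 200, 11, 10]   # same row in frame N+1

    difference = [b - a for a, b in zip(frame_a, frame_b)]
    print(difference)                                # [0, 0, 0, 0, 1, 0, 0, 0]
    print(sum(1 for d in difference if d != 0))      # only 1 value actually changed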

Today, the majority of desktops in large enterprises are equipped with two types of communications terminals: the PC and the telephone. They are supported by two entirely separate network connections, the LAN and the PBX, each providing a single service. The telephone network delivers bi-directional streams of data at 64 kbps (typically digitally encoded voice), while the LAN delivers packets of data on a best-effort basis. The technologies behind the telephone network and the LAN are very different. Each has evolved over the years, but both continue to provide essentially the same, single service they have always provided.

From the beginning, ATM was designed to integrate voice, video, and data communications over a single network. Since early 1996, ATM has been able to provide practical, standardized solutions for data networking, including transparent interoperability with Ethernet and token-ring LANs and support for existing LAN-based applications. Standards that allow voice and video applications to run over ATM are expected by the end of 1996. While it is clear that ATM is the best solution for integrating voice, video, and data throughout the enterprise, today's enterprise networks are based on older technologies such as Ethernet and token ring, typically linked by routers and running internetworking protocols such as the Internet Protocol (IP). IP originated as a pure data communications protocol and was never designed to handle real-time voice and video. Upgrading router-based internetworks to carry voice and video requires radical surgery, and even then the quality of communication would never match that of the public telephone network.

By contrast, ATM was designed from the outset to handle both real-time voice and video and bursty data traffic; all three are carried as payload directly in ATM cells. There is no need for IP to provide network-layer addressing, since ATM has its own addressing scheme with a far larger address space than that of IP. ATM's support for point-to-multipoint switched virtual circuits (SVCs) also handles the requirements for multicasting voice and video far more elegantly and completely than IP's multicast protocol. An ATM network user can elect to receive any number of multicast streams, and to receive only selected streams. ATM solves the multicast problem at Layer 2 and requires no Layer 3 protocol for multicasting voice and video.
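
To give a sense of that larger address space: private ATM networks use 20-byte NSAP-format end-system addresses, conventionally split into a 13-byte network prefix, a 6-byte end-system identifier, and a 1-byte selector, compared with IPv4's 4-byte addresses. The parsing sketch below is purely illustrative, and the example address bytes are made up.

    # Illustrative split of a 20-byte NSAP-format ATM end-system address into
    # its three parts: a 13-byte network prefix, a 6-byte end-system identifier
    # (ESI, often the adapter's MAC address), and a 1-byte selector. The example
    # bytes below are invented for this sketch.

    def split_atm_address(addr: bytes) -> dict:
        if len(addr) != 20:
            raise ValueError("NSAP-format ATM addresses are exactly 20 bytes")
        return {
            "prefix": addr[:13].hex(),    # assigned by the network/switch
            "esi": addr[13:19].hex(),     # identifies the end system
            "sel": addr[19:].hex(),       # distinguishes entities within the end system
        }

    example = bytes.fromhex("47000580ffe1000000f21a2d5d" "0020481a2d5d" "00")
    print(split_atm_address(example))
    # A 19-byte prefix+ESI space (2^152 combinations) dwarfs IPv4's 2^32 addresses.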

4. Strategies for implementation. Strategies for implementation can include the delineation of procedures for installation, operation, maintenance, training, management, and security.

Answer:

The current computer/desktop communications focus is on data, but emerging applications include a much wider range of requirements that current solutions cannot meet. Multimedia, multi-application requirements with guaranteed quality of service and bandwidth will drive product and service offerings toward ATM. Although equipment technical specifications have been developed to implement ATM on the desktop, the "convergence" software or application program interfaces (APIs) that make applications aware of ATM are lagging behind. The ATM Forum API group is working with other industry organizations to adapt their APIs to ATM based on the ATM Forum's API specification, the Native ATM Services Semantic Description.

Although detractors have said that there really aren't any applications driving ATM demand, applications that demand bandwidth and/or quality of service for a mix of real-time (delay-sensitive) and non-real-time traffic are emerging. These include medical imaging; CAD/CAM file transfer; interconnecting high-volume, geographically dispersed LANs; server farm support; ATM backbones to improve the performance of IP intra- and internetworks; and enterprise networks. The pace will really start to pick up with products and services based on the ATM Forum's Anchorage Accord.

As for private enterprise networks, this area has historically exhibited the most urgent need and has seen the most dramatic changes. Many LAN switch/hub vendors and players in the LAN and WAN markets have reported tremendous sales. More information about growing sales and applications is available from the ATM Forum.

In the area of public networks, although it may seem that the public carriers have been quiet on ATM, they have been planning and installing the network equipment needed before ATM services could be offered, while waiting for regulatory changes to occur around the world. ATM-based services are available today from 14 carriers in selected markets in the United States, and even more are becoming available globally. Services are primarily PVCs, with SVC services starting later this year. Carriers have not disclosed many details of their plans, for two reasons. First, they were waiting for closure on telecom reform. Second, a large number of key ATM Forum specifications have only recently been completed or will be completed shortly. Why is this important? There were many gaps in the standards and implementation agreements that had to be addressed before ATM network solutions could be made available, and the ATM Forum is working diligently to fill these gaps.

Early video-on-demand trials to bring ATM to the home have been completed. Trial costs were high and user interest was weaker than service providers would have liked. Since then, use of the Internet has grown significantly. World Wide Web sites have become increasingly rich in still images and video animation, and Internet users, increasingly frustrated with slow page downloads, need a bigger bandwidth pipe to the home. Enabling technologies like asymmetric digital subscriber line (ADSL) allow ATM to be brought to the home over the existing copper plant, thus breaking the bandwidth bottleneck for Internet access. These facilities can also support video-on-demand in the future. Meanwhile, technical work continues on defining the in-home ATM LAN (HAN).

ISPs are seeing significant growth in the numbers of residential and corporate customers, and many customers are demanding T1 and higher rate access to the Internet. ISPs are making significant investments to upgrade access capabilities and the aggregate backbone capabilities of their networks using ATM. For example, one leading provider just announced it will be upgrading its Internet backbone from 155 Mbps to 622 Mbps ATM over the next six months.

5. Description of the evaluation procedure and post-implementation review. Tactics for assessing the advantages, limitations, and constraints of the ATM networking configuration are noted.

Answer:

The performance, reliability, and productivity gains actually achieved by a company will determine the usefulness of the "tool" to network users. High quality is required before the tool becomes useful and produces the desired behavior change that improves employee productivity. Meanwhile, system reliability is critical: if video connections experience frequent dropouts during periods of high network usage, employees will be unlikely to depend on the tool for anything other than unimportant meetings. The underlying capabilities of the video transport and switching system must support these stringent requirements; this is key to the system's reliability. ATM can deliver the high bandwidth and low delay required by video collaboration, but it can only do so with the following capabilities:

Quality of Service Guarantees per Video Stream. The ATM wide-area switch should support traffic policing for individual video streams to ensure that each stream receives the required quality of service across the network. Jitter and latency should be explicitly specified per user contract for each video stream, along with the allowable cell-loss and cell-error rates. Per-stream traffic policing that relies upon a sophisticated queuing and buffer management architecture should be used.
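
ATM policing of this kind is usually described in terms of the Generic Cell Rate Algorithm (GCRA), a per-connection leaky-bucket test. The sketch below shows its virtual-scheduling form as a minimal illustration; the rate and tolerance values in the example are assumptions, not figures from the text.

    # Minimal sketch of the Generic Cell Rate Algorithm (GCRA), the leaky-bucket
    # rule used to police each stream against its traffic contract. T is the
    # nominal spacing between cells (1 / cell rate) and tau the tolerance (how
    # far ahead of schedule a cell may arrive). The values below are assumptions.

    class Gcra:
        def __init__(self, increment_t: float, limit_tau: float):
            self.t = increment_t       # expected time between conforming cells
            self.tau = limit_tau       # cell delay variation tolerance
            self.tat = None            # theoretical arrival time of the next cell

        def conforms(self, arrival: float) -> bool:
            if self.tat is None:
                self.tat = arrival
            if arrival < self.tat - self.tau:
                return False           # cell arrived too early: non-conforming, tag or drop
            self.tat = max(arrival, self.tat) + self.t
            return True                # conforming; schedule the next expected arrival

    policer = Gcra(increment_t=1.0, limit_tau=0.5)   # one cell per time unit, 0.5 slack
    for when in (0.0, 1.0, 1.6, 2.1, 2.2):
        print(when, policer.conforms(when))
    # The cells at 2.1 and 2.2 arrive ahead of schedule and are flagged non-conforming.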

Per-Video-Stream Buffering. The ATM wide-area switch should support per-video-stream buffering to ensure fairness to all video streams and to firewall one stream from another. Per-stream buffering allows the network to sustain extended bursts from one stream without affecting the level of performance delivered to other streams.

Multiple ATM Service Classes for Video. The ATM wide-area switch should support various service classes for video, from CBR with minimal cell delay variation to VBR with less stringent delay requirements. While the ATM Forum has endorsed the real-time AAL5 variable bit rate class of service for real-time video streams, non-real-time transfer of video files should use the available bit rate (ABR) service class so that network bandwidth is used efficiently. The ATM switch should allocate unused, spare bandwidth (bandwidth not needed by the VBR video streams) to ABR video file transfers. The switch should also maintain firewalls between service classes to ensure fairness and to prevent misbehavior in one class from affecting the performance of another.
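
One simplified way to picture independently serviced classes with firewalls between them is a set of per-class queues drained in strict priority order (CBR, then real-time VBR, then ABR taking only the leftover slots). The sketch below illustrates that idea only; real switches use considerably more sophisticated schedulers.

    from collections import deque
    from typing import Optional

    # Simplified picture of per-class cell queues served in strict priority order:
    # CBR first, then real-time VBR, then ABR, which only gets leftover slots.

    class ClassScheduler:
        PRIORITY = ("CBR", "rt-VBR", "ABR")

        def __init__(self):
            self.queues = {cls: deque() for cls in self.PRIORITY}

        def enqueue(self, cls: str, cell: str) -> None:
            self.queues[cls].append(cell)

        def next_cell(self) -> Optional[str]:
            """Transmit one cell per slot, always draining higher classes first."""
            for cls in self.PRIORITY:
                if self.queues[cls]:
                    return self.queues[cls].popleft()
            return None

    sched = ClassScheduler()
    sched.enqueue("ABR", "file-transfer cell")
    sched.enqueue("rt-VBR", "video cell")
    sched.enqueue("CBR", "voice cell")
    print([sched.next_cell() for _ in range(3)])
    # -> ['voice cell', 'video cell', 'file-transfer cell']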

Reliability and Serviceability. The ATM wide-area switch should be designed for maximum reliability through hardware redundancy. It should automatically reroute video streams around network failures to provide high network resiliency. All switch modules should be removable and re-insertable without powering down the switch and without affecting the operation of other modules. The switch should have a software-based architecture that allows feature enhancements and bug fixes to be downloaded from a central site as a background task and then activated when desired. The switch should also collect statistics for each video stream, provide open interfaces to this information, and continuously monitor resource performance.

The video content of communications will continue to increase as collaborative computing, videoconferencing, application sharing, and remote training become commonplace. Rather than increasing the speed of existing networks to accommodate video, companies are turning to ATM because it can support high-bandwidth, multimedia traffic with low latency. Thanks to the efforts of the ATM Forum, standards are emerging that address these requirements. Early use of multimedia ATM networks has demonstrated the benefits of ATM's efficiency in delivering a mixture of video, audio, and data over one homogeneous network.
