Telecommunications - a Q&A topical approach - Part I:
An Introduction to the Computer Communications Environment

Task 1

1. Delineate the capabilities, advantages, and limitations of two computer communications networks you have worked with.

Answer: I have worked with several computer communications networks over my 20-year career. Among the first was a network I used in the early 1980s, while I was a senior software engineer for Xerox Corp. in El Segundo, California. As I recall, computer networks were at an early stage of development then, and Xerox was one of the pioneers in their invention and development. At that time I used the ancestor of the modern PC, a machine invented by Xerox engineers in the late 1970s at their Palo Alto Research Center (PARC): the Star. It came with other great inventions from PARC's engineers, the mouse and the graphical user interface, later "copied" by Apple and then Microsoft, to name a few. Around the same time, as we all know, another Xerox engineer, Dr. R. Metcalfe, invented the famous Ethernet, a protocol running primarily (at that time) on a bus topology. Our Star machines (called workstations, another term coined by Xerox) were connected through twisted pairs (simple telephone wires), within the same building or even beyond, running Ethernet. The whole networking concept was so exciting and unique that we did not care about the disadvantages: everything seemed an advantage, because very few others had a similar toy to communicate with. Now, years later, after going through so many changes in networking technology, I can see quite a few disadvantages, most of them common to the Ethernet and twisted-wire technology of that era: unreliable, non-guaranteed message transmission, occasionally lost messages, and speed and security issues.

Then IBM introduced the PC, in 1981, and of course for a couple of years there was not even a vague idea of networking it. Novell's NetWare arrived in the mid-1980s as a great invention; shortly after, Banyan's VINES was another player, followed by IBM and Microsoft. Even compared with the old Ethernet networks linking Star workstations, I noticed (at least as I remember) a couple of NetWare advantages: a more robust network operating system (it was, first of all, an operating system!), and different topologies, with ring and star quite common. Later I moved along through more network technologies, and again, looking back, I can see Novell's limitations (at least at that time): mostly speed, reliability, and simplicity, or rather the lack of real features beyond some basic primitives that allowed sharing common devices such as printers.

2. Why is response time a critical factor in end user productivity?

Answer: Response time is a critical factor in end-user productivity because when a user initiates an activity, such as sending a message to another machine, retrieving data from a remote location, or simply connecting to a nearby or remote device, getting the connection established promptly, getting the data being sought, or getting confirmation that the message or data transfer is complete keeps the user productive. Otherwise, if the user's PC, workstation, or other resources hang while waiting for the specified transaction to occur, there is an impact on his or her productivity in getting things done. The situation can be compared with the "old" days when merely sending a large document to the printer stalled the entire machine, which was occupied processing the file and using its memory and CPU resources to get the file printed. These days, this should not be an issue: the same file, after a quick pre-processing on the user's machine, is sent to the printer queue, where it is processed locally, the printer being a PC in miniature with its own resources, thereby freeing the user's resources and increasing his or her productivity on whatever tasks are at hand.

3. Discuss communications implications of DDP in your work environment.

Answer: I assume that DDP stands for Distributed Data Processing. My workplace is a truly distributed environment, where the mainframe computing resources are shared not only by the local users at the corporate offices, for a variety of tasks, but also remotely by users dispersed around the country at our branches in seven other states. Without computer network technology in place, this real-time sharing of computing resources for business purposes would not be possible. If surface mail, paper-based business, and/or the telephone were the alternative, business transactions, decision making, and all other related activities would take days, or hours at best, instead of being almost instantaneous. That situation would put us out of business almost immediately if the rest of the competition ran an electronic computer network environment. We run high-speed T1 communications lines and use the DECnet protocol, which links over 800 terminals and printers around the country with several clusters of Alpha-processor-based DEC VAXes. Also connected to this WAN are PCs and PC LANs running either PATHWORKS or Novell's NetWare. This entire environment provides the high speed and high reliability that allow efficient business transactions among the company's various departments, employees, and remote branches around the country.

4. Describe how the following impairments could affect quality and transfer in your corporate setting:

a. Attenuation

b. Delay distortion

c. Noise

Answer:

a. As we know, the strength of a signal falls off with distance over any transmission medium. For guided media, this reduction in strength, or attenuation, is generally logarithmic and thus is typically expressed as a constant number of decibels per unit distance. For unguided media, attenuation is a more complex function of distance and the makeup of the atmosphere. Attenuation introduces three considerations for the transmission engineer. First, a received signal must have sufficient strength so that the circuitry in the receiver can detect and interpret the signal. Second, the signal must maintain a level sufficiently higher than noise to be received without error. Third, attenuation is an increasing function of frequency. The first and second problems are dealt with by attention to signal strength and the use of amplifiers or repeaters. The third problem is particularly noticeable for analog signals. In any case, attenuation is NOT a problem in my corporate setting. The design of our computer networks has built-in capability to deal successfully with the potential attenuation of signals: we use guided media, and repeaters are introduced at precisely calculated distances to sustain a correct signal.
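Because guided-media attenuation is expressed in decibels per unit distance, the link budget arithmetic behind "precisely calculated distances" is simple to sketch. The following toy example is our own illustration, not taken from any particular network design; all figures are hypothetical.

```python
# Illustrative sketch: received power over one repeater span of a guided
# link, given a fixed attenuation in dB per kilometer. A repeater would
# restore the signal to its original level at the end of each span.

def received_power_dbm(tx_power_dbm, atten_db_per_km, span_km):
    """Power at the end of one repeater span, in dBm."""
    return tx_power_dbm - atten_db_per_km * span_km

# Example: 0 dBm transmitted, 0.5 dB/km loss, repeaters every 20 km.
# Each span loses 10 dB, so the receiver must detect a -10 dBm signal.
print(received_power_dbm(0, 0.5, 20))  # -10.0
```

The engineer picks the span length so that the received level stays both detectable and sufficiently above the noise floor, which is exactly the first two considerations listed above.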

b. Delay distortion is a phenomenon peculiar to guided transmission media. The distortion is caused by the fact that the velocity of propagation of a signal through a guided medium varies with frequency. For a band-limited signal, the velocity tends to be highest near the center frequency and to fall off toward the two edges of the band. Thus the various frequency components of a signal will arrive at the receiver at different times. This effect is referred to as delay distortion, since the received signal is distorted due to variable delay in its components. Delay distortion is particularly critical for digital data. In my corporate setting we do not encounter a delay distortion problem, again because the network design engineers took this factor into consideration when they built, and as they continuously maintain, our corporate network. Moreover, from a user's perspective, I have not noticed anything that looks like delay distortion.

c. For any data transmission event, the received signal will consist of the transmitted signal, modified by the various distortions imposed by the transmission system, plus additional unwanted distortions that are inserted somewhere between transmission and reception. The latter, undesired signals are referred to as noise. It is noise that is the major limiting factor in communications system performance. Noise may be divided into four categories: thermal noise - due to thermal agitation of electrons in a conductor; intermodulation noise - produced when there is some nonlinearity in the transmitter, receiver, or intervening transmission system; crosstalk - experienced when, while using the telephone, you can hear another conversation, which means there is unwanted coupling between signal paths; and impulse noise - consisting of irregular pulses or noise spikes of short duration and of relatively high amplitude. Again, in my corporate setting, as a user I have rarely noticed the effects that characterize noise. Except perhaps for telephone conversations, in data transmission noise is still a rarely encountered situation.
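Of the four categories, thermal noise is the one with a simple closed form: its power is N = kTB, where k is Boltzmann's constant, T the temperature in kelvin, and B the bandwidth in hertz. A minimal sketch of that arithmetic (our own illustration, with hypothetical figures):

```python
# Thermal (white) noise power, N = k * T * B. This is the floor below
# which a received signal cannot be distinguished from noise.

BOLTZMANN = 1.38e-23  # Boltzmann's constant, joules per kelvin

def thermal_noise_watts(temp_kelvin, bandwidth_hz):
    """Thermal noise power in watts for a given temperature and bandwidth."""
    return BOLTZMANN * temp_kelvin * bandwidth_hz

# Room temperature (290 K) across a 1 MHz channel:
n = thermal_noise_watts(290, 1e6)
print(f"{n:.3e} W")  # about 4.0e-15 W
```

Because N grows with bandwidth, wider channels need proportionally stronger signals to keep the same signal-to-noise margin.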

5. Discuss the key characteristics of the following transmission media:

a. Twisted pair

b. Coaxial cable

c. Optical fiber

Answer:

a. A twisted pair consists of two insulated wires arranged in a regular spiral pattern. A wire pair acts as a single communication link. Typically, a number of these pairs are bundled together into a cable by wrapping them in a tough protective sheath. Over longer distances, cables may contain hundreds of pairs. The twisting of the individual pairs minimizes electromagnetic interference between the pairs. The wires in a pair have thicknesses from 0.016 to 0.036 in. By far the most common transmission medium for both analog and digital data is twisted pair. It is the backbone of the telephone system as well as the workhorse for intrabuilding communications. In the telephone system, individual telephone sets are connected to the local telephone exchange or "end office" by twisted-pair wire. These are referred to as "local loops." Within an office building, telephone service is often provided by means of a private branch exchange (PBX). For both systems, twisted pair has primarily been a medium for voice traffic between subscribers and their local telephone exchange office. Digital data traffic can also be carried over moderate distances. For modern digital PBX systems, data rates of about 64 kbps are achievable using digital signaling. Local loop connections typically require a modem, with a maximum data rate of 9600 bps. Twisted pair is also used for long-distance trunking applications, where data rates of 4 Mbps or more may be achieved.

b. Coaxial cable, like twisted pair, consists of two conductors, but it is constructed differently to permit it to operate over a wider range of frequencies. It consists of a hollow outer cylindrical conductor which surrounds a single inner wire conductor. The inner conductor can be either solid or stranded; the outer conductor can be either solid or braided. A single coaxial cable has a diameter of from 0.4 to about 1 in. Coaxial cable has been perhaps the most versatile transmission medium and is enjoying increasing utilization in a wide variety of applications. The most important are: long-distance telephone and television transmission; television distribution; local area networks; and short-run system links. Using frequency-division multiplexing, a coaxial cable can carry over 10,000 voice channels simultaneously. Cable is also used for long-distance television transmission. An equally explosive growth area for coaxial cable is local area networks. Finally, coaxial cable is commonly used for short-range connections between devices.

c. An optical fiber is a thin (2 to 125 micrometer), flexible medium capable of conducting an optical ray. Various glasses and plastics can be used to make optical fibers. The lowest losses have been obtained using fibers of ultrapure fused silica. An optical fiber cable has a cylindrical shape and consists of three concentric sections: the core, the cladding, and the jacket. One of the most significant technological breakthroughs in data transmission has been the development of practical fiber optic communications systems. The characteristics that distinguish optical fiber from twisted pair and coaxial cable are: greater bandwidth, smaller size and lighter weight, lower attenuation, electromagnetic isolation, and greater repeater spacing. The five basic categories of application that have become important for optical fiber are: long-haul trunks; metropolitan trunks; rural exchange trunks; local loops; and local area networks.

Task 2

1. What are the major differences between digital and analog transmission? Why are computer networks evolving towards increased digitization?

Answer: In transmitting data from a source to a destination, one must be concerned with the nature of the data, the actual physical means used to propagate the data, and what processing or adjustments may be required along the way to assure that the received data are intelligible. For all these considerations, the crucial point is whether we are dealing with analog or digital entities. The terms analog and digital correspond roughly to continuous and discrete, respectively. These two terms are used frequently in data communications in at least three contexts: data, signaling, and transmission.

Analog data take on continuous values on some interval. For example, voice and video are continuously varying patterns of intensity. Most data collected by sensors, such as temperature and pressure, are continuous-valued. Digital data take on discrete values; examples are text and integers. The most familiar example of analog data is audio or acoustic data, which, in the form of sound waves, can be perceived directly by human beings. In a communication system, data are propagated from one point to another by means of electric signals. An analog signal is a continuously varying electromagnetic wave that may be propagated over a variety of media, depending on the spectrum. A digital signal is a sequence of voltage pulses that may be transmitted over a wire medium; for example, a constant positive voltage level may represent binary 1 and a constant negative voltage level may represent binary 0.

Computer networks are evolving towards increased digitization because this type of signal transmission has more advantages than analog transmission. For one, there is less need for conversion from digital (the encoding form of data inside computers) to analog (the form in which data is transmitted over the media). Also, improved types of media, beyond the classical twisted wires, allow more data to be transmitted at higher speeds, with less interference (noise) and less need for signal regeneration (to avoid attenuation).

2. Highlight the main attributes of asynchronous and synchronous transmission. Describe a use for each technique.

Answer: Two approaches are common for achieving the desired synchronization. The first is called asynchronous transmission. The strategy with this scheme is to avoid the timing problem by not sending long, uninterrupted streams of bits. Instead, data are transmitted one character at a time, where each character is five to eight bits in length. Timing or synchronization need only be maintained within each character; the receiver has the opportunity to resynchronize at the beginning of each new character.

A more efficient means of communication is synchronous transmission. In this mode, blocks of characters or bits are transmitted without start and stop codes, and the exact departure or arrival time of each bit is predictable. To prevent timing drift between transmitter and receiver, their clocks must somehow be synchronized. One possibility is to provide a separate clock line between transmitter and receiver. Otherwise, the clocking information must be embedded in the data signal. For digital signals, this can be achieved with biphase encoding. For analog signals, a number of techniques can be used; the carrier frequency itself can be used to synchronize the receiver based on the phase of the carrier.

Asynchronous communication is simple and cheap but requires an overhead of two to three bits per character. Of course, the percentage overhead could be reduced by sending larger blocks of bits between the start and stop bits, but at that point synchronous transmission is far more efficient than asynchronous. Therefore the choice between synchronous and asynchronous is based on the amount of data that needs to be transmitted: the larger the block of data, the more efficient the synchronous mode is over the asynchronous.
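The overhead figure above is easy to make concrete. A sketch of the arithmetic, assuming the common case of 8-bit characters with one start bit and one stop bit (a parity bit, if used, adds further overhead):

```python
# Fraction of line bits spent on framing in asynchronous transmission:
# each character carries its own start and stop bits.

def async_overhead(data_bits=8, start_bits=1, stop_bits=1):
    """Framing bits as a fraction of all bits sent per character."""
    framing = start_bits + stop_bits
    return framing / (data_bits + framing)

# One start and one stop bit around 8 data bits: 2 of every 10 bits.
print(f"{async_overhead():.0%}")  # 20%
```

Synchronous transmission amortizes its (fixed-size) preamble and control fields over a whole block of characters, which is why its overhead percentage shrinks as blocks grow.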

3. What is a protocol?

Answer: The concepts of distributed processing and computer networking imply that entities in different systems need to communicate. We use the terms "entity" and "system" in a very general sense. Examples of entities are user application programs, file transfer packages, database management systems, electronic mail facilities, and terminals. Examples of systems are computers, terminals, and remote sensors. In general, an entity is anything capable of sending or receiving information, and a system is a physically distinct object that contains one or more entities.

For two entities to successfully communicate, they must "speak the same language." What is communicated, how it is communicated, and when it is communicated must conform to some mutually acceptable set of conventions between the entities involved. This set of conventions is referred to as a protocol, which may be defined as a set of rules governing the exchange of data between two entities. The key elements of a protocol are: syntax - includes such things as data format, coding, and signal levels; semantics - includes control information for coordination and error handling; and timing - includes speed matching and sequencing.

4. Clarify the need for flow control from the point of view of your work environment.

Answer: Flow control is a technique for assuring that a transmitting station does not overwhelm a receiving station with data. The receiver will typically allocate a data buffer with some maximum length. When data are received, it must do a certain amount of processing before it can clear the buffer and be prepared to receive more data. In the absence of flow control, the receiver's buffer may overflow while it is processing old data.

The simplest form of flow control is known as stop-and-wait flow control. A source entity transmits a frame. After reception, the destination entity indicates its willingness to accept another frame by sending back an acknowledgment for the frame just received. The source must wait until it receives the acknowledgment before sending the next frame. The destination can then stop the flow of data by simply withholding acknowledgment. This procedure works fine and, indeed, can hardly be improved upon when a message is sent in a few large frames. However, it is often the case that a source will break up a large block of data into smaller blocks and transmit the data in many frames. With the use of multiple frames for a single message, the simple procedure described above may be inadequate. Therefore, more advanced techniques, such as sliding window flow control, are implemented.
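The stop-and-wait scheme described above can be sketched as a toy simulation; the function and parameter names here are our own, not from any real protocol stack:

```python
# Toy stop-and-wait flow control: send one frame, then block until the
# receiver acknowledges it before sending the next.

def stop_and_wait(frames, receive):
    """Deliver frames one at a time; 'receive' returns True to acknowledge."""
    delivered = []
    for frame in frames:
        while not receive(frame):  # no ACK yet: wait and offer it again
            pass
        delivered.append(frame)    # ACK received; move to the next frame
    return delivered

# A receiver that acknowledges every frame immediately:
print(stop_and_wait(["f1", "f2", "f3"], lambda f: True))
# ['f1', 'f2', 'f3']
```

The receiver throttles the sender simply by delaying its True return, which is the "withholding acknowledgment" mechanism in the text; sliding-window schemes improve on this by allowing several unacknowledged frames in flight at once.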

In may organization, as an end-user of computer networks, this level of detail is transparent to us. However, we, the end-user, notice fluctuations in the rate the data is transmitted to us or sent to its destination from our workstations, at given times of the day. Even establishing connections to another server varies greatly during business hours. Parameters such as file size or how remote is the server we try top connect play a role in the flow control of data transmission form one point to the other. And of course, the explanation is that given above.

5. Indicate the main functions of error detection and error control techniques.

Answer: Error control refers to mechanisms to detect and correct errors that occur in the transmission of frames. Data are sent in a sequence of frames; frames arrive in the same order in which they are sent; and each transmitted frame suffers an arbitrary and variable amount of delay before reception. In addition, we admit the possibility of two types of errors: Lost Frame - a frame fails to arrive at the other side (a noise burst may damage a frame to the extent that the receiver is not aware that a frame has been transmitted), and Damaged Frame - a recognizable frame does arrive, but some of the bits are in error (have been altered during transmission).

The most common techniques for error control are based on some or all of the following ingredients:

1. Error detection - typically a CRC (cyclic redundancy check) is used;

2. Positive acknowledgment - the destination returns a positive acknowledgment to successfully received, error-free frames;

3. Retransmission after timeout - the source retransmits a frame that has not been acknowledged after a predetermined amount of time;

4. Negative acknowledgment and retransmission - the destination returns a negative acknowledgment to frames in which an error is detected.

Collectively, these mechanisms are all referred to as automatic repeat request (ARQ). Three versions of ARQ are in common use:

1. Stop-and-wait ARQ - that is based on the stop-and-wait flow control techniques outlined previously;

2. Go-back-N ARQ - a variant of continuous ARQ, in which a station may send a series of frames determined by the window size, using the sliding window flow control technique;

3. Selective-reject ARQ - in which the only frames retransmitted are those that receive a NAK or that time out.
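Ingredients (1), (2), and (4) above can be sketched together: the sender appends a CRC to each frame, and the receiver recomputes it to decide between an ACK and a NAK. This illustration uses Python's standard CRC-32 rather than the particular CRC of any real link protocol:

```python
import zlib

# CRC-based error detection: the sender appends a checksum; the receiver
# recomputes it over the payload and NAKs (forcing retransmission) on a
# mismatch.

def make_frame(payload: bytes):
    """Return (payload, CRC) as a simple two-field frame."""
    return (payload, zlib.crc32(payload))

def frame_ok(frame) -> bool:
    """Receiver-side check: does the payload still match its CRC?"""
    payload, crc = frame
    return zlib.crc32(payload) == crc

frame = make_frame(b"hello")
print(frame_ok(frame))                 # True  -> positive acknowledgment
print(frame_ok((b"hellp", frame[1])))  # False -> NAK, retransmit
```

A CRC can only detect that bits were altered, not repair them; correction in ARQ comes from the retransmission triggered by the NAK or timeout.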

Task 3

1. Describe the need for transmission efficiency in your organization.

Answer: As mentioned in one of the answers above, my organization is a nationwide distributed computer network environment. Most of the operations are business transactions; the rest are standard types of reporting, on both the business and the financial sides. Therefore, transmission efficiency is a key measure of the effectiveness of the organization's business. Taking business transactions as an example, such as those related to customers' inquiries about their membership status or other related information, a prompt response is essential. Also, the number of customer representatives whose job is to answer questions from customers is directly proportional to the speed and correctness (efficiency) of the entire set of computer-network-based applications. If efficiency degraded, then satisfying the same number of customers on average would require more customer representatives, which would translate into an increased cost of doing business. In conclusion, a highly efficient computer network implies efficient transmission of data and other information over our organization's wide area network (WAN).

2. How can compression techniques facilitate information transport?

Answer: Data compression has become an important topic for network administrators as multimedia, video, document imaging, and other technologies emerge. Data compression basically squeezes data so it requires less disk space for storage and less time to transmit during a file transfer. Compression takes advantage of the fact that digital data contain a lot of repetition: it replaces repeating information with a symbol or code that represents the information in less space. Basic compression techniques are:

1. Null compression - replaces a series of blank spaces with a compression code, followed by a value that represents the number of spaces.

2. Run-length compression - expands on the null compression techniques by compressing any series of four or more repeating characters.

3. Key-word encoding - creates a table with values that represent common sets of characters.

4. Huffman statistical method - assumes a varied distribution of characters in the data; in other words, some characters appear more often than others.
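Technique (2) above is simple enough to sketch directly. A minimal run-length compressor in that spirit; the "#" marker character and the exact output format are our own choices, not a standard encoding:

```python
# Run-length compression: runs of min_run or more identical characters
# are replaced by a marker, the repeated character, and the run length.

def rle_compress(text, marker="#", min_run=4):
    out, i = [], 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1                       # extend the current run
        run = j - i
        if run >= min_run:
            out.append(f"{marker}{text[i]}{run}")  # encode long runs
        else:
            out.append(text[i] * run)              # short runs pass through
        i = j
    return "".join(out)

print(rle_compress("ab      cd"))  # ab# 6cd  (six spaces became '# 6')
```

This also shows why compressed files lose portability, as noted below: without a matching decompressor that knows the marker convention, "# 6" is just three opaque characters.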

Because compression algorithms are software-based, overhead exists that can cause problems in real-time environments, but compression during backup and archiving of files usually poses few problems. The use of high-performance systems can help eliminate most of the overhead and performance problems. Another consideration is that compression removes portability from files unless the decompression software is shipped with the files.

Compression provides a way to improve throughput on wide area links. In the case where you need to decide whether a link requires an inexpensive dial-up line connected by modems or a more expensive dedicated connection, a modem with data compression features might provide the added throughput you need to settle on the cheaper solution. If full-time connections are required, data compression can help you get the most out of those connections as well, though with limitations.

3. Discuss the role of multiplexing in expediting information transmission.

Answer: With two devices connected by a point-to-point link, it is desirable to have multiple frames outstanding so that the data link does not become a bottleneck between the stations. Transmission facilities are, by and large, expensive. It is often the case that two communicating stations will not utilize the full capacity of a data link. For efficiency, it should be possible to share that capacity. The generic term for such sharing is multiplexing.

A simple example of multiplexing is the multidrop line. Here a number of secondary devices (e.g., terminals) and a primary (e.g., host computer) share the same line. This has several advantages: the host computer needs only one I/O port for multiple terminals, and, only one transmission line is needed. These types of benefits are applicable in other contexts. In long-haul communications, a number of high-capacity fiber, coaxial, terrestrial microwave, and satellite facilities have been built. These facilities can carry large numbers of voice and data transmissions simultaneously using multiplexing.

There are three types of multiplexing techniques. The first, frequency-division multiplexing (FDM), is the most widespread and is familiar to anyone who has ever used a radio or television set. The second is a particular case of time-division multiplexing (TDM) often known as synchronous TDM. This is commonly used for multiplexing digitized voice streams. The third type seeks to improve on the efficiency of synchronous TDM by adding complexity to the multiplexer.

Generically, the multiplexing function is like this: there are n inputs to a multiplexer. The multiplexer is connected by a single data link to a demultiplexer. The link is able to carry n separate channels of data. The multiplexer combines (multiplexes) data from the n input lines and transmits over a higher-capacity data link. The demultiplexer accepts the multiplexed data stream, separates (demultiplexes) the data according to channel, and delivers them to the appropriate output lines.
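The generic multiplexer/demultiplexer function described above can be sketched for the synchronous TDM case: the multiplexer interleaves one unit from each of the n input channels per round, and the demultiplexer splits the stream back by position. This is a toy illustration with names of our own choosing:

```python
# Synchronous TDM sketch: interleave n input channels into one stream,
# then recover the channels from their fixed positions in each frame.

def multiplex(channels):
    """Interleave equal-length channel lists into one output stream."""
    return [unit for frame in zip(*channels) for unit in frame]

def demultiplex(stream, n):
    """Recover n channels from an interleaved stream by position."""
    return [stream[i::n] for i in range(n)]

inputs = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]
stream = multiplex(inputs)
print(stream)                  # ['a1', 'b1', 'c1', 'a2', 'b2', 'c2']
print(demultiplex(stream, 3))  # the original three channels
```

Note that synchronous TDM sends a slot for every channel in every frame whether or not that channel has data, which is the inefficiency the third (statistical) technique mentioned above addresses.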

4. Explain the need for standards in the telecommunications arena.

Answer: It has long been accepted in the communications industry that standards are required to govern the physical, electrical, and procedural characteristics of communication equipment. In the past, this view was not embraced by the computer industry. Whereas communication equipment vendors recognize that their equipment will generally interface to and communicate with other vendors' equipment, computer vendors have traditionally attempted to lock in their customers. The proliferation of computers and distributed processing has made that an untenable position. Computers from different vendors must communicate with each other and, with the ongoing evolution of protocol standards, customers will no longer accept special-purpose protocol conversion software development. The result is that standards now permeate all areas of the technology.

There are a number of advantages and disadvantages to the standards-making process. The principal advantages of standards are:

1. A standard assures that there will be a large market for a particular piece of equipment or software. This encourages mass production and, in some cases, results in lower costs.

2. A standard allows products from multiple vendors to communicate, giving the purchaser more flexibility in equipment selection and use.

The principal disadvantages are:

1. A standard tends to freeze the technology. By the time a standard is developed, subjected to review and compromise, and promulgated, more efficient techniques are possible.

2. There are multiple standards for the same thing. This is not a disadvantage of standards per se, but of the current way things are done. Fortunately, in recent years the various standards-making organizations have begun to cooperate more closely. Nevertheless, there are still areas where multiple conflicting standards exist.

Task 4

1. What is ATM? Discuss expected benefits of ATM implementation for supporting multimedia applications in your workplace.

Answer: ATM (Asynchronous Transfer Mode) is a data transmission technology that has the potential to revolutionize the way computer networks are built. Viable for both local and wide area networks, this technology provides high-speed data transmission rates and supports many types of traffic, including voice, data, facsimile, real-time video, CD-quality audio, and imaging. Carriers such as AT&T and US Sprint are already deploying ATM over a wide area and offering multimegabit data transmission services to customers. Over the 1994 to 1995 time frame, ATM products emerged from almost every hardware vendor: ATM routers and ATM switches that connect to carrier ATM services for building enterprise-wide global networks; ATM devices for building internal private backbone networks that interconnect all the local area networks (LANs) within organizations; and ATM adapters and workgroup switches for bringing high-speed ATM connections to desktop computers that run emerging multimedia applications.

My organization does not have any specific immediate plans for ATM implementation, particularly as related to multimedia. My organization is instead contemplating moving toward doing business over the Internet (Web) - electronic commerce. However, while ATM will initially be deployed as a wide-area technology to improve transmission rates outside the local area networks, ATM technology will eventually be cost-effective for in-house networks as well. Meanwhile, fast Ethernet technologies and switching hubs may be preferable and more cost-effective. On the other hand, IBM, for example, has invested over $100 million per year in the development of ATM products, including its own ATM chip set. The product line includes ATM interface cards for personal computers and desktop systems, as well as ATM hubs, all of which should be available anytime now.

Organizations considering a migration to ATM should follow a step-by-step approach employing a hierarchical distributed wiring structure. You can build ATM backbone topologies in a number of ways; ATM is not tied to one specific topology the way Ethernet and FDDI are. While the hierarchical star will most likely predominate, other topologies are possible if necessary.

2. Describe the advantages and limitations of outsourcing as a strategy for facilitating network development in your corporate setting. Support your discussion with some examples.

Answer: Outsourcing, as a general concept, has been very popular since the late 1980s, when an economic downturn started. It has been associated with other concepts such as downsizing, rightsizing, etc. In the case of network development, the outsourcing concept probably makes more sense than in many other areas. First of all, network planning, design, and implementation is an area that requires highly specialized technical skills. Unless we are talking about a large corporation, training and hiring such a highly specialized team of network professionals is not a cost-effective decision. Moreover, the computer networking discipline is changing so dynamically that a continuous, sustained investment is required to keep pace with technology trends.

In my corporate setting, which is a small-to-medium-size company (500 employees), we keep a very small, dedicated staff (approximately 4 full-time employees) that plans, designs, implements, and maintains our computer networks. And, as noted in a couple of places in this paper, we run a simple and efficient, though not state-of-the-art, distributed computing environment. Hot technologies such as frame relay, FDDI, ATM, and the like are not implemented yet; our plans for them are very general, not specific enough to be followed through at this point. If a decision to pursue such a technology is made in the near future, then, given the size and business conditions of my corporate setting, I assume a combination of outsourcing and existing personnel will be involved. Outside expertise will be needed at least for the planning and design phases; our full-time personnel will then be trained and will work on the implementation and, especially, the maintenance of these new technologies.

On the other hand, one can also see the disadvantages of outsourcing: specific knowledge is not kept and further utilized within the company, and the need to bring it back in when problems arise may, in the long run, translate into a more expensive, non-cost-effective strategy. Again, as said above, I think that the larger the company, the less desirable it is to outsource its networking needs.

Task 5

1. Highlight the key capabilities of the following wireless technologies from the point of view of your work environment:

a. Spread-spectrum

Answer: Wireless communications falls into two categories: wireless local area network (LAN) communications and wireless mobile computing. The primary difference between the two is found in the transmission facilities. Wireless LAN communication uses transmitters and receivers that are located within a company's premises and owned by that company. Wireless mobile computing involves telephone carriers or other public services that transmit and receive signals using packet radio, cellular networks, and satellite stations for users who are out of the office and on the road. Spread-spectrum radio is a technique that broadcasts signals over a wide range of frequencies, avoiding problems inherent in narrow-band communication. A code is used to spread the signal, and the receiving station uses the same code to retrieve it. Coding allows the spread-spectrum signal to operate in a wide range of frequencies, even if other spread-spectrum signals are in the same range. Spread-spectrum radio does not interfere with conventional radio because its energy levels are too weak. Transmission speeds are in the 250 Kbits/sec range.
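The spread-and-retrieve idea described above can be illustrated with a toy direct-sequence sketch: each data bit is multiplied by a shared pseudo-random chipping code, and the receiver correlates the incoming chips against the same code to recover the bit. This is only an illustration of the principle; the code values and function names are my own, not any real radio implementation.

```python
# Toy direct-sequence spread-spectrum (DSSS) sketch.
# Bits are represented as +1/-1; CHIP_CODE is an illustrative
# pseudo-random chipping sequence shared by sender and receiver.

CHIP_CODE = [1, -1, 1, 1, -1, 1, -1, -1]

def spread(bits):
    """Spread each data bit across len(CHIP_CODE) chips."""
    return [bit * chip for bit in bits for chip in CHIP_CODE]

def despread(chips):
    """Correlate received chips with the shared code to recover bits."""
    n = len(CHIP_CODE)
    bits = []
    for i in range(0, len(chips), n):
        corr = sum(c * k for c, k in zip(chips[i:i + n], CHIP_CODE))
        bits.append(1 if corr > 0 else -1)
    return bits

data = [1, -1, 1]
assert despread(spread(data)) == data
```

Because recovery depends on correlating with the right code, a station using a different code sees only low-level noise, which is why multiple spread-spectrum signals can share the same frequency range.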

b. Infrared communications

Answer: This method offers a wide bandwidth that transmits signals at extremely high rates. Infrared light transmissions operate by line of sight, so the source and receiver must be aimed at or focused on each other, similar to a television remote control. Obstructions in the office environment must be considered, but mirrors can be used to bend infrared light if necessary. Because infrared light transmissions are susceptible to strong light from windows or other sources, systems that produce stronger beams might be necessary. Typical transmission speeds range up to 10 Mbits/sec.

c. Satellite

Answer: This technique is similar to a broadcast from a radio station. You tune in to a "tight" frequency band on both the transmitter and the receiver. The signal can penetrate walls and is spread over a wide area, so focusing is not required. However, narrow-band radio transmissions have problems with radio reflections (ghosting) and are regulated by the FCC. They must be precisely tuned to prevent interference from other frequencies. Transmission speeds are in the 4,800 Kbits/sec range.

2. Discuss an approach for integrating a wireless LAN with a wired LAN.

Answer: The wireless LAN and wired LAN have things in common, but also capabilities that differentiate them. For example, a wireless LAN provides:

1. Flexibility; it creates an untethered environment for the user. A wired LAN offers flexibility too, but not to the extent that wireless can provide.

2. Cost-effectiveness; it is less expensive in the long run for dynamic environments. Wired LANs are also cost-effective in the long run, but wireless may have an edge here.

On the other hand, a wireless LAN has disadvantages such as:

1. It's slower; it generally has lower transmission rates than a wired LAN.

2. It's more expensive to install initially.

3. It's susceptible to interference or limited in distance, depending on the technology; therefore, a wired LAN has an edge here.

Taking the above into consideration, integrating a wired LAN and a wireless LAN can prove a very advantageous combination in which the two nicely complement each other. Of course, the approach would be to start with the wired LAN, already in place in many corporate settings these days, and build upon it, one step at a time, wireless LAN extensions that integrate with the existing functionality.
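The usual integration point is an access point acting as a bridge: stations on the wireless extension reach the wired backbone through a device that learns which segment each station is on and forwards frames accordingly. The following is a minimal toy model of that bridging idea; the class names and station names are illustrative only, not a real product's API.

```python
# Toy model of an access-point bridge joining a wired and a wireless
# LAN segment. Station and class names are purely illustrative.

class Segment:
    """A LAN segment holding a set of attached station names."""
    def __init__(self, name):
        self.name = name
        self.stations = set()

class AccessPointBridge:
    """Forwards frames between segments based on station location."""
    def __init__(self, wired, wireless):
        self.segments = (wired, wireless)

    def locate(self, station):
        for seg in self.segments:
            if station in seg.stations:
                return seg
        return None

    def forward(self, src, dst):
        src_seg, dst_seg = self.locate(src), self.locate(dst)
        if src_seg is None or dst_seg is None:
            return "dropped"                      # unknown station
        if src_seg is dst_seg:
            return "local"                        # stays on one segment
        return f"bridged to {dst_seg.name}"       # crosses the bridge

wired = Segment("wired")
wired.stations.update({"server", "printer"})
wireless = Segment("wireless")
wireless.stations.add("laptop")

ap = AccessPointBridge(wired, wireless)
print(ap.forward("laptop", "server"))   # bridged to wired
print(ap.forward("server", "printer"))  # local
```

The design point of this step-at-a-time approach is that the wired backbone and its servers remain untouched; each wireless extension is simply one more segment hanging off a bridge.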