WiMax Operator's Manual: Building Wireless Networks (Second Edition)
Interoperability certification will allow network operators to mix base stations and subscriber premises equipment from different manufacturers so as not to be dependent on single sourcing and, perhaps more important, to encourage the mass production of standards-based chipsets by competing manufacturers.
This in turn will lead to a drop in equipment prices because of economies of scale and market pressures. In the past, the high prices of carrier-grade wireless base stations and subscriber terminals have saddled network operators with unacceptable equipment costs, and such costs, coupled with the disappointing performance of first-generation products, severely hindered wireless network operators attempting to compete with wireline operators.
Unlike the standards governing WLANs (namely, IEEE 802.11), the 802.16 standard does not mandate specific bit rates. In fact, the lack of stated rates is entirely appropriate to a standard intended for a public service provider because the operator needs to have the flexibility of assigning spectrum selectively and preferentially and of giving customers willing to pay for such services high continuous bit rates at the expense of lower-tier users—and conversely throttling bandwidth to such lower-tier users in the event of network congestion.
In a public network, the operator and not the standard should set bit rates such that the bit rates are based on business decisions rather than artificial limits imposed by the protocol. Introducing the Media Access Control Layer The media access control layer refers to the network layer immediately above the physical layer, which is the actual physical medium for conveying data.
The access layer, as the name suggests, determines the way in which subscribers access the network and how network resources are assigned to them. The lower-frequency bands also support mesh topologies. Chapter 3 fully explains these terms.
In general, the older circuit-based services represent inefficient use of bandwidth, an important consideration with wireless where bandwidth is usually at a premium.
Moreover, they put the wireless broadband operator in the position of having to compete directly with the incumbent wireline telephone operator. Wireless insurgents attempting to vie for circuit traffic with strong, entrenched incumbents have been almost uniformly unsuccessful for reasons Chapter 6 will fully explore. A few words about the circuit and quasi-circuit protocols: A circuit transmission is one in which a prescribed amount of bandwidth is reserved and made available to a single user exclusively for the duration of the transmission; in other words, the user occupies an individual channel.
In a packet transmission, a channel is shared among a number of users, with each user transmitting bursts of data as traffic permits. A T1, the American standard, multiplexes 24 channels of 64 kilobits per second (Kbps) each over copper pairs, for an aggregate rate of 1.544Mbps. An E1 carries 30 usable channels and is commensurately faster. E1 is the standard offering in most countries outside the United States.
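The channelized rates just cited follow from simple arithmetic. The sketch below (illustrative, not from the book) derives the T1 and E1 line rates from their channel counts:

```python
# Channelized T1/E1 line rates: each DS0 voice channel carries 64 Kbps.
DS0_KBPS = 64

# T1: 24 payload channels plus 8 Kbps of framing overhead -> 1.544 Mbps.
t1_kbps = 24 * DS0_KBPS + 8
# E1: 32 timeslots (30 usable for traffic, 2 for framing/signaling) -> 2.048 Mbps.
e1_kbps = 32 * DS0_KBPS

print(f"T1: {t1_kbps / 1000:.3f} Mbps")  # T1: 1.544 Mbps
print(f"E1: {e1_kbps / 1000:.3f} Mbps")  # E1: 2.048 Mbps
```

The 8 Kbps framing term is the one T1 bit per 193-bit frame at 8,000 frames per second; E1 instead dedicates two whole timeslots to framing and signaling.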
Both services go through ordinary telephone switches to reach the subscriber. ATM is a predominantly layer-2 (switching layer) protocol developed in large part by Bellcore, the research arm of the Bell Operating Companies in the United States.
Intended to provide a common platform for voice, data, and multimedia that would surpass the efficiency of traditional circuit networks while providing bandwidth reservation and quality-of-service (QoS) mechanisms that emulate circuit predictability, ATM has found its place at the core of long-haul networks, where its traffic-shaping capabilities have proven particularly useful.
In metropolitan area networks it is chiefly used for the transportation of frame-relay fast-packet business services and for the aggregation of DSL traffic. Because ATM switches are extremely expensive and represent legacy technology, I do not recommend using ATM as a basis for the service network, unless, of course, the wireless network is an extension of an existing wired network anchored with ATM switches.
Modulation and coding schemes may be adjusted individually for each subscriber and may be dynamically adjusted during the course of a transmission to cope with the changing radio frequency (RF) environment. The orthogonal frequency division multiplexing (OFDM) modulation scheme is specified for the lower band, with a single-carrier option being provided as well.
Chapter 4 discusses these terms. Polling on the part of the subscriber station is generally utilized to initiate a session, avoiding the simple contention-based network access schemes utilized for WLANs, but the network operator also has the option of assigning permanent virtual circuits to subscribers—essentially reservations of bandwidth.
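Adaptive modulation of the sort described above can be pictured as a lookup from measured link quality to a burst profile: the base station grants each subscriber the densest constellation its signal-to-noise ratio can support. The SNR thresholds below are illustrative round numbers of my own, not values from the 802.16 standard:

```python
# Illustrative adaptive-modulation selector: pick the densest burst profile
# whose minimum SNR requirement the measured link still satisfies.
# (Thresholds are assumed round numbers, not figures from the standard.)
PROFILES = [  # (min SNR in dB, modulation name, bits per symbol)
    (21.0, "64-QAM", 6),
    (16.0, "16-QAM", 4),
    (9.0,  "QPSK",   2),
    (6.0,  "BPSK",   1),
]

def select_profile(snr_db: float):
    """Return the highest-order profile usable at the given SNR, or None."""
    for min_snr, name, bits in PROFILES:
        if snr_db >= min_snr:
            return name, bits
    return None  # link too poor for payload traffic

print(select_profile(23.5))  # ('64-QAM', 6)
print(select_profile(12.0))  # ('QPSK', 2)
```

Because the selection is re-run as conditions change, a subscriber near the base station might carry six bits per symbol while one at the cell edge falls back to two—which is why, as noted earlier, the standard wisely declines to promise any fixed bit rate.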
Provisions for privacy, security, and authentication of subscribers also exist. Advanced network management capabilities extending to layer 2 and above are not included in the standard. Introducing the Two Physical Standards: The standard specifies distinct physical layers for the lower and higher microwave bands. Lower-frequency transmissions tolerate obstructed paths; higher-frequency transmissions, on the other hand, must meet strict line-of-sight requirements and are usually restricted to distances of a few kilometers. The characteristics of the lower bands conduce to high levels of robustness and higher spectral efficiencies—that is, more users per a given allocation of bandwidth.
The singular advantage enjoyed by users of higher-frequency bands is an abundance of bandwidth. Most spectral assignments above 20GHz provide for several hundred megahertz minimally, and the 57GHz to 64GHz unlicensed band available in the United States can support several gigabits per second at one bit per hertz—fiberlike speeds. Introducing WiMAX: Standards are of relatively little value unless there is some way of enforcing compliance with the standard. Promoters of WiMAX therefore also promote system profiles—specific implementations, selections of options within the standard, to suit particular ensembles of service offerings and subscriber populations.
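The arithmetic behind "fiberlike speeds" is straightforward: aggregate capacity is roughly spectral efficiency times allocated bandwidth. A sketch using the figures quoted above (the 3 bits/Hz in the second example is an assumed efficiency for a lower-band channel, not a number from the text):

```python
def capacity_bps(bandwidth_hz: float, bits_per_hz: float) -> float:
    """Aggregate throughput of a spectrum allocation at a given spectral efficiency."""
    return bandwidth_hz * bits_per_hz

# The 57-64GHz unlicensed band at 1 bit/Hz, as cited in the text:
band_hz = (64 - 57) * 1e9
print(f"{capacity_bps(band_hz, 1.0) / 1e9:.0f} Gbps")  # 7 Gbps

# Contrast: a 20MHz lower-band channel at an assumed 3 bits/Hz:
print(f"{capacity_bps(20e6, 3.0) / 1e6:.0f} Mbps")  # 60 Mbps
```

The contrast makes the trade-off concrete: the lower bands win on reach and robustness, the millimeter bands win on raw capacity.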
At the time of this writing, the WiMAX Forum has not certified any equipment designed according to the standard; the forum expects that some products will be certified soon, but this is only an estimate. Most industry observers believe that incorporation of first-generation chips in products will take place on a fairly small scale and that radio manufacturers are awaiting the finalization of the standard. I anticipate that low-priced equipment will follow. The accompanying table compares the two standards in detail.
Optimized for indoor and campus environments, 802.11 was never intended for metropolitan coverage; in fact, transmission speed and signal integrity drop off precipitately at distances beyond a few hundred feet from an access point. So why, given the intentional limitations inherent in the standard, would anyone contemplate employing it in a public access network? In a word, price. Simply put, a network constructed of low-cost 802.11 equipment is far cheaper to build, but if the network consists of nothing but short-cell-radius hotspots, coverage of a metropolitan area will be spotty at best. A few manufacturers such as Tropos, Vivato, and Airgo are attempting to manufacture adaptive-array antenna systems or mesh-networked base stations to extend range and capacity, and when a service provider is attempting to serve a number of small businesses in a single building where the subscribers lack networking capabilities of their own, such enhanced-performance equipment can make sense. No one should be tempted to believe, however, that an entire metropolitan market can be served this way. At the same time, the standard has been subject to continuous revision ever since it was introduced, and it has definitely not solidified as yet.
Further revisions of the standard are in preparation, and some of these could render future generations of equipment considerably more capable. Of particular interest is the proposed quality-of-service extension, which calls for prioritization of different traffic types and also allows for contention-free transmissions to take place over short durations of time, a provision that would significantly reduce latency.
Rumor has it that the IEEE will ratify the extension soon; its appearance in silicon would probably take place a year or so later. Also of considerable interest is the proposed high-throughput revision, with projected speeds well in excess of those of today's WLAN equipment. Two variants are currently in contention. Intel, it should be noted, has not previously been a player in the wireless fidelity (WiFi) space. The new standard will definitely make use of multiple input, multiple output (MIMO) technology, where arrays of antennas are required for both base stations and subscriber terminals.
Ratification is expected within the next couple of years. Incidentally, many manufacturers are discussing a standard beyond even that one, though achieving such throughputs over any unlicensed band currently accessible is questionable. Finally, a mesh standard is also in preparation; what effect it will have on the positioning of the existing technologies remains to be seen. Bluetooth has been used in a few hotspot public networks, but the range is so short—no more than 50 yards or so—that it is utterly inapplicable in pervasive MANs.
Also, work is under way on the formulation of a substandard for ultrawideband (UWB) radio. UWB may well represent the far future of broadband wireless, but current power restrictions confine it to very short ranges, just as with Bluetooth, and it is not suitable for overarching MANs as it is currently configured.
As such, the new standards-based equipment enables broadband wireless networks to perform at a level that was unattainable previously and extends the capabilities of wireless access technologies to permit the penetration of markets where previously wireless broadband was marginal or simply ineffective. Broadband wireless is still not the best access technology for all geographical markets or all market segments within a given geography, but many more customers are potentially accessible than in the past.
It is scarcely an exaggeration to say that the new standards provide an effective solution to the most severe geographical limitations of traditional broadband wireless products, though the reach of any given wireless network is still constrained by its location, and its attractiveness is affected by the presence or absence of competing broadband technologies. The most difficult geographical markets for wireless broadband remain large cities, especially where high-rises predominate in the downtown business district.
In the developed world the largest cities are already fairly well served by fiber for the most part, and fiber, where it is present, is a formidable competitor. The largest business buildings housing the most desirable customers will usually have fiber drops off high-speed fiber rings encircling the city core, and individual subscribers can purchase OC-3 (155Mbps) or faster optical services or, in some cases, wavelength services (variously 1Gbps or 10Gbps).
Generally, such customers are lost to wireless service providers because the availability (the percentage of time that a link is usable) of the radio airlink will always be less than that of fiber, and availability is critically important to most buyers of high-bandwidth data services.
Also, you cannot discount the generally unfavorable topography represented by most large modern metropolises. Millimeter microwave transmissions demand a clear path to the subscriber terminal, and unless the base station resides on a tower that is considerably higher than any other structure in the vicinity, many promising buildings are apt to remain out of reach within the cell radius swept by the base station.
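A rough sense of why tower height matters: even over perfectly smooth terrain, the radio horizon of an antenna grows only with the square root of its height. The sketch below uses the common 4/3-earth-radius approximation (horizon in km ≈ 4.12·√height-in-meters per endpoint); it deliberately ignores the building obstructions that actually dominate in a dense downtown, so it is an optimistic upper bound, not a planning tool:

```python
from math import sqrt

def radio_horizon_km(h_tx_m: float, h_rx_m: float) -> float:
    """Maximum line-of-sight path over smooth earth, standard (4/3-earth) refraction."""
    return 4.12 * (sqrt(h_tx_m) + sqrt(h_rx_m))

# A 50m base-station tower to a 10m rooftop terminal:
print(f"{radio_horizon_km(50, 10):.1f} km")
# A 100m tower helps, but reach grows only as the square root of height:
print(f"{radio_horizon_km(100, 10):.1f} km")
```

Doubling tower height buys roughly 40 percent more horizon on the tower's side of the path—diminishing returns that, combined with intervening high-rises, explain why many promising buildings stay out of reach.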
Whatever part of the spectrum one chooses to inhabit, wireless broadband is hard to employ in large cities with a lot of tall buildings. Sometimes a wireless link makes sense, however, which is covered in later chapters. Wireless broadband has been deployed with greater success in smaller cities and suburbs, both because the markets are less competitive and because the geography is generally more favorable.
The first point is fairly obvious; secondary and tertiary markets are far less likely to obtain comprehensive fiber builds or even massive DSL deployments because the potential customer base is relatively small and the cost of installing infrastructure is not commensurately cheaper. I will qualify the second point, however. Nevertheless, such environments still present challenges, particularly when millimeter microwave equipment is used.
Indeed, I know of no instance where millimeter wave equipment has been successfully deployed to serve a residential market in a suburban setting. Lower-microwave equipment is much better suited to low-density urban and suburban settings, and thus it will receive more attention in the chapters that follow; however, where equipment is restricted to line-of-sight connections, a substantial percentage of potential subscribers will remain inaccessible in a macrocellular large-cell network architecture—as many as 40 percent by some estimates.
Advanced NLOS equipment will allow almost any given customer to be reached, but, depending on the spectrum utilized by the network operator and the area served by a base station, coverage may still be inadequate because of range and capacity limitations rather than obstructions. Unquestionably, the new NLOS equipment will permit the network operator to exploit the available spectrum far more effectively than has been possible with first-generation equipment with its more or less stringent line-of-sight limitation.
But as the operator strives to enlist ever-greater numbers of subscribers, the other, harder limitations of distance and sheer user density will manifest themselves. Both range and the reuse of limited spectrum can be greatly enhanced by using adaptive-array smart antennas (covered in Chapter 4), but such technology comes at a cost premium.
Figure shows a typical example of an urban deployment. Rural areas with low population densities have proven most susceptible to successful wireless broadband deployments both by virtue of the generally open terrain and, perhaps more significantly, the relative absence of wireline competition. Whatever the site chosen for the wireless deployment, mapping the potential universe of users, designing the deployment around them, and considering the local topography are crucially important to wireless service providers in a way that they are not to service providers opting for DSL, hybrid fiber coax, or even fiber.
However, in the case of fiber, right-of-way issues considerably complicate installation. In general, a wireless operator must know who and where their customers are before they plan the network and certainly before they make any investment in the network beyond research.
Failure to observe this rule will almost certainly result in the inappropriate allocation of valuable resources and will likely constrain service levels to the point where the network is noncompetitive. Potential users have the right to question the reliability and robustness of infrastructure gear that has yet to prove itself over the course of several years in actual commercial deployments.
Obviously, such hard proof must remain elusive in the near term, but at the same time the buyer should not conclude that standards-based equipment is suspect. IEEE standards work is absolutely exemplary and incorporates the conclusions drawn from exhaustive laboratory investigations as well as extensive deliberations on the part of the leading industry experts for a given technology.
IEEE standards are nearly always the prelude to the emergence of mass-produced chipsets based on the standards, and the major chip manufacturers themselves are looking at tremendous investments in development and tooling costs associated with a new standard, investments that must be recouped in a successful product introduction. The effect of shoddy standards work would be to jeopardize the very existence of leading semiconductor vendors, and to date the IEEE has shown the utmost diligence in ensuring that standards are thorough and well founded.
Furthermore, the IEEE will not normally issue standards on technologies that are deemed not to have major market potential. For example, the IEEE has not set a standard for powerline carrier access equipment or free-air optical simply because those access technologies have not demonstrated much immediate market potential. In short, the creation of an IEEE standard is a serious undertaking, and I have yet to encounter a standard that is essentially unsound.
I think the opportunity is real, and the following chapters indicate how to seize it. Furthermore, the different wireless networking technologies themselves exhibit widely varying capabilities for fulfilling the needs and expectations of various customers and enterprises. Broadband Fixed Wireless: The Competitive Context. This section strives to answer the question, when is broadband fixed wireless the right access technology? It is the first and most crucial question network operators have to ask themselves when considering the broadband wireless option.
In the metropolitan space, wireless broadband competes with a range of wireline and wireless access technologies. Of these rivals, several are currently entirely inconsequential. Broadband (as opposed to medium-speed) satellite service scarcely exists as yet, and powerline carrier services and PONs are scarce as well, though both appear to be gathering impetus. Pure IP and Ethernet metro services over fiber are growing in acceptance, but they are not well established, and ISDN has almost disappeared in the United States, though it lingers abroad.
Finally, free-space optics have achieved very little market penetration and do not appear to be poised for rapid growth. Other services mentioned previously—such as wavelength, 3G mobile, direct ATM services over active fiber, and metro Ethernet over active fiber—have some presence in the market but are spottily available and limited in their penetration thus far.
In this context, broadband wireless does not look nearly as bad as detractors would have it. If you consider the whole array of competing access technologies, broadband wireless has achieved more success than most. The largest enterprises that require large data transfers tend to prefer higher-speed optical services using both packet and circuit protocols.
Circuit-Based Access Technologies. Within the enterprise data service market, T1, fractional T1 (E1 elsewhere in the world), and business-class DSL are the most utilized service offerings, along with frame relay, which is chiefly used to link remote offices and occupies a special niche. T1 is usually delivered over copper pairs and is characterized by high reliability and availability and reasonable throughputs (1.544Mbps). Its limitations are equally significant.
T1s cannot burst to higher speeds to meet momentary needs for higher throughputs, and they are difficult to aggregate if the user wants more capacity. T1s are also difficult and expensive to provision, and provisioning times are commonly measured in weeks.
Finally, T1 speeds are a poor match for 10Base-T Ethernet, and attempts to extend an enterprise Ethernet over a T1 link will noticeably degrade network performance. Because it is circuit based and reserves bandwidth for each session, T1 offers extremely consistent performance regardless of network loading.
Maximum throughput speeds are maintained at all times, and latency, jitter, and error rates are well controlled. Were the bandwidth greater, T1s would be ideal for high-fidelity multimedia, but, as is, the 1.544Mbps rate falls short. Also, the infrastructure for these circuit-based access networks is expensive to build, but, since most of it has already been constructed, it is by now fully amortized. Somewhat surprisingly, the sales of SONET and SDH equipment for the metro core increased rapidly through the late 1990s and the opening years of this century, and they are not expected to peak for some time. Compared to newer access technologies, T1 does not appear to represent a bargain, but it is all that is available in many locales.
Moreover, the incumbent carriers that provision most T1 connections are in no hurry to see it supplanted because it has become an extremely lucrative cash cow.
Because of the apparently disadvantageous pricing, T1 services may appear to be vulnerable to competition, but thus far they have held their own in the marketplace. Ethernet and IP services, whether wireless or wireline, will probably supplant circuit-based T1 in time, but as long as the incumbent telcos enjoy a near monopoly in the local marketplace and are prepared to ward off competition by extremely aggressive pricing and denial of central office facilities to competitors, the T1 business will survive.
I suspect that T1 connections will still account for a considerable percentage of all business data links at the end of this decade. Frame Relay. Frame relay is a packet-based protocol developed during the early 1990s for use over fiber-optic networks (see figure). Frame relay does not permit momentary bursting to higher throughput rates or self-provisioning.
Frame relay is rarely used to deliver multimedia and other applications demanding stringent traffic shaping, and it is never used to deliver residential service. Usually, frame relay is employed to connect multiple remote locations in an enterprise to its headquarters, and connections over thousands of miles are entirely feasible.
Frame relay switches or frame relay access devices terminate the links. Frame relay transmissions over long distances, commonly referred to collectively as the frame relay cloud, invariably travel over fiber and are usually encapsulated within ATM transmissions. Figure: A frame relay network. Frame relay services are largely the province of incumbent local phone companies or long-distance service providers.
Throughputs vary but are commonly slower than a megabit per second—medium speed rather than high speed. As is the case with T1, frame relay is a legacy technology, standards have not been subject to amendment for years, and not much development work is being done with frame relay devices.
The performance of frame relay is not going to improve substantially in all likelihood. Pricing is in the T1 range, with higher prices for higher throughput rates and special value-added services such as Voice-over Frame Relay (VoFR). Also, provisioning of multiple remote locations can be prohibitively expensive with conventional frame relay equipment because the networks do not scale well, and this may limit the popularity of frame relay in the future.
Frame relay does not directly compete with wireless broadband in the metro, and thus targeting existing customers for the service makes little sense. Frame relay will continue to lose ground to enhanced metro Ethernet and IP services. DSL. The distinguishing features of the various DSL substandards are not particularly germane to this discussion; they have to do with the speed of the connection and the apportionment of available spectrum upstream and downstream.
DSL utilizes digital signal processing and power amplification to extend the frequency range of ordinary twisted-pair copper lines that were originally designed to carry 64-kilobit voice signals and nothing faster. Aggressive signal processing applied to uncorroded copper can best this nominal limit by orders of magnitude: commercially available systems can now achieve multimegabit speeds over distances of a couple of thousand feet. DSL can support high-quality multimedia if throughput is sufficient and error rates are well controlled, but the consistent achievement of high throughput rates is difficult if not impossible in many copper plants.
Plainly put, a DSL network overlay is highly dependent on the condition of the existing copper telephone lines because no one is going to assume the expense of rewiring a phone system—a move to pure fiber would make more sense if that were required.
In the presence of corroded copper, both the speed and distance of DSL transmissions are diminished (the best-case distance for moderate-speed transmissions is a little more than 20,000 feet). If the copper plant is compromised, the network operator has no choice but to shorten the distance from the subscribers to the nearest aggregation points, known as digital loop carriers (DLCs).
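The distance sensitivity described above can be illustrated with a toy model: signal attenuation grows with loop length, so achievable rate falls off in tiers as the subscriber gets farther from the central office or DLC. The breakpoints and the corrosion derating factor below are illustrative assumptions of mine, not plant engineering data:

```python
# Toy DSL reach table: map loop length to a nominal downstream service tier.
# Breakpoints are illustrative assumptions, not plant measurements.
REACH_TIERS_FT = [
    (5_000,  "VDSL-class (tens of Mbps)"),
    (12_000, "full-rate ADSL (several Mbps)"),
    (18_000, "reduced-rate ADSL (hundreds of Kbps)"),
    (21_000, "marginal service"),
]

def service_tier(loop_ft: int, corroded: bool = False) -> str:
    """Nominal tier for a loop; corroded copper behaves like a longer loop."""
    effective_ft = loop_ft * (1.5 if corroded else 1.0)  # assumed derating factor
    for max_ft, tier in REACH_TIERS_FT:
        if effective_ft <= max_ft:
            return tier
    return "out of reach (needs a closer DLC)"

print(service_tier(3_000))                   # VDSL-class (tens of Mbps)
print(service_tier(15_000))                  # reduced-rate ADSL (hundreds of Kbps)
print(service_tier(15_000, corroded=True))   # out of reach (needs a closer DLC)
```

The last case is the operator's dilemma in miniature: a loop that would be serviceable on clean copper drops out of reach on a compromised plant, forcing the costly DLC build-out described above.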
And since the latter are expensive to site and construct and require costly fiber-optic backhaul to a central office, they can burden the operator with inordinately high infrastructure costs if they are numerous. The pricing structure for a carrier owning the copper lines and central office facilities is entirely different from that of a DSL startup obliged to lease copper as well as equipment space in a telco central office.
A DSL network is certainly less expensive than new fiber construction because it leverages existing infrastructure, but it still requires a great deal of new equipment and frequently necessitates installation visits to the customer premises by field technicians. Despite these limitations, DSL services have been expanding rapidly all over the developed world, with especially extensive deployments in East Asia and the United States.
In the United States, DSL has found large and growing markets among small businesses and residential users. To a limited extent, DSL has been used to deliver video services to homes, but the primary offering is simple Internet access.
In neither the residence nor the small enterprise are value-added services yet the norm. Typical speeds for residential service are in the low hundreds of kilobits and slightly higher in the case of business-class services. Some business-class services also offer service agreements in regard to long-distance transmissions over the Internet. VDSL and VDSL2, the high-speed variants, have the speed to enable advanced IP and Ethernet business services and high-quality converged residential services and, to that extent, must be regarded as a technology to watch.
The distances over which VDSL can operate are relatively short, however, little more than a mile best case, and VDSL networks require extensive builds of deep fiber. Only a fairly small number of such networks exist in the world today, though the technology is finding acceptance in Europe.
New low-priced VDSL modems are coming on the market that could speed the acceptance of the service somewhat, but they will not reduce the cost of the deep fiber builds necessary to support it. DSL is a new rather than a legacy technology, emerging from the laboratory about a decade ago (though not subject to mass deployments until the turn of the century), but already DSL appears to be nearing the limits of its performance potential. Where wireless and optical transmission equipment have achieved orders-of-magnitude gains in throughput speed, DSL may not be positioned to compete effectively in the future against other access technologies that have the potential for significant further development.
I think DSL is a transitional technology, one that was developed primarily to allow incumbent telcos to compete in the high-speed access market without having to build completely new infrastructure. I further think broadband wireless, as it continues to improve, will become increasingly competitive with DSL.
Finally, basic DSL technology was developed by Bellcore, the research organization serving the regional Bell Operating Companies (RBOCs), and was initially intended to support video services over phone lines, services that would enable the RBOCs to compete with the cable television companies in their core markets.
The first serious rollouts of DSL were initiated by independents, however, chief among them Covad, Rhythms, and Northpoint, all of which went bankrupt. These independents were obliged to collocate their equipment in the incumbents' central offices. Such collocation placed the independents in what was in effect enemy territory and left them vulnerable to delaying tactics and even outright sabotage.
Dozens of successful legal actions were launched against RBOCs on just such grounds, but the RBOCs simply paid the fines and watched the independents expire. The wireless broadband operator should draw two lessons from this. First, do not enter into service agreements with competitors, if possible. Own your own infrastructure, and operate as a true independent. Second, realize that the incumbent telcos are grimly determined to defend their monopoly and will stop at nothing to put you out of business.
In the past, wireless has not posed a sufficient threat to RBOCs to arouse their full combativeness, but that will change in the future. Hybrid Fiber Coax. The final major competitive access technology extant today is hybrid fiber coax, the physical layer utilized by the multichannel systems operators (MSOs), industry jargon for the cable television companies (see figure). Hybrid fiber coax consists of a metro core of optical fiber, frequently employing the same SONET equipment favored by the RBOCs, along with last-mile runs of coaxial television cable.
Each run of cable serves a considerable number of customers—as few as 50 and as many as several thousand. The coaxial cable itself has potential bandwidth of 3 gigahertz, of which less than a gigahertz is used for television programming. Most cable operators allocate less than 20MHz of bandwidth to data. Industry research organization Cable Labs is currently at work on a new standard that is intended to exploit the full potential of coaxial copper and to achieve at least an order of magnitude improvement in data speed.
Should low-cost, standards-based equipment appear on the market supporting vastly higher throughputs, then the competitive position of cable will be considerably enhanced. In the past cable operators have proved more than willing to make large investments in their plants to launch new types of services. Wireless broadband operators as well as others embracing competitive access technologies would be well advised to watch their backs in respect to cable.
Cable is unlikely to stand still in the midterm. Figure: A hybrid fiber coax network. The speed of a single coax cable far exceeds that of a DSL-enhanced copper pair, but since its capacity is divided among a multitude of subscribers, the speed advantage is manifested to the end user only when the subscriber population on each cable run is small.
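The shared-medium arithmetic is easy to sketch: a cable data channel's capacity is divided among the active subscribers on the run, so per-user speed depends on node size, take rate, and peak concurrency. The channel rate and take-up figures below are assumptions for illustration, not operator data:

```python
def per_user_mbps(channel_mbps: float, homes_on_run: int,
                  take_rate: float, concurrency: float) -> float:
    """Average bandwidth per active subscriber on a shared coax run."""
    active = homes_on_run * take_rate * concurrency
    return channel_mbps / max(active, 1)

# Assumed: one ~38 Mbps downstream data channel, 30% of homes passed
# subscribing, 25% of subscribers active at the peak hour.
for homes in (50, 500, 2000):
    rate = per_user_mbps(38, homes, 0.30, 0.25)
    print(f"{homes:>5} homes on run: {rate:.2f} Mbps per active user")
```

Under these assumptions, a 50-home run delivers an order of magnitude more per-user bandwidth than a 2,000-home run—which is exactly why node splitting improves service and why operators undertake it only reluctantly.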
Cable companies of late have been tending to restrict the number of customers per coaxial cable, but such a strategy is costly and will be pursued only with reluctance and with clear profit-making opportunities in view.
Cable data services aimed at the mass market have been available in the United States for years and today account for most of the residential broadband in this country, with DSL ranking a distant second.
Unlike the case with DSL, cable data access services are nearly always bundled with video and, increasingly, with cable-based telephone services and video on demand. Cable offers by far the richest service packages for the residential user, and historically the industry has demonstrated a strong commitment to expanding the number of services to cement customer loyalty. Cable services have historically garnered low customer satisfaction ratings, however, and in truth the actual networks have been characterized by low availability and reliability and poor signal quality.
These attributes, it should be noted, are not the consequence of deficiencies in the basic technology but simply reflect the unwillingness of many cable operators to pay for a first-rate plant.
Broadband access competitors should not be deceived into thinking that cable systems are consistent underperformers. MSOs have made some efforts to court business users but have been less successful than DSL providers in signing small businesses.
Cable does not pass the majority of business districts, and the cable operators themselves are often not well attuned to the wants and needs of the business customer. Nevertheless, some MSOs have pursued business customers aggressively, and the industry as a whole may place increasing emphasis on this market to the probable detriment of other broadband access technologies.
Already several manufacturers have developed platforms for adapting cable networks to serve business users more effectively; these include Jedai, Narad, Advent, Chinook, and Xtend, among others. Cable operators themselves are also beginning to deploy the new generation of multiservice switching platforms for the network core that will enable them to offer advanced services based on the Ethernet and IP protocols.
A separate group of competitors, the cable overbuilders, construct entirely new plant in competition with the incumbent MSOs. These companies are committed to building the most up-to-date hybrid fiber coax networks, and most of them are actively pursuing business accounts. The ultimate success of the overbuilders is unknowable—they are competing against strongly entrenched incumbents in a period when the capital markets are disinclined to support further ambitious network builds—but in marked distinction to the case with the telco CLECs, the major overbuilders have all managed to survive and to date are holding their own against the cable giants.
Placing cable data access within the context of other competing access technologies is somewhat difficult. Cable could be said to represent the repurposing of legacy technology just as is the case with DSL, but the basic cable plant has been transforming itself so rapidly over the past 15 years that such an interpretation seems less than fully accurate.
It is more accurate to say that the hybrid fiber coax plant is an evolving technology that is arguably on the way to having an all-fiber infrastructure in the far future. It may be that local telephone networks will trace a similar evolutionary course—the incumbent local exchange carriers (ILECs) have been issuing requests for proposals for passive optical networking equipment that would deliver fiber to the curb or fiber to the home—but cable networks lend themselves much more readily to a full conversion to fiber than do telephone networks.
My guess is that cable operators will convert more quickly than their telco counterparts. To the extent that this is true, cable emerges as by far the most formidable competitive access technology, and it is one that is likely to preempt broadband wireless in a number of markets. Cable offers potentially superior speed to DSL (and certainly to legacy circuit services); near ubiquity; fairly cost-effective infrastructure; a range of attractive service offerings; and a wealth of experience in network management.
Predicting the course that technological progress will take is difficult, but in the long term, extending into the third and fourth decades of this century, pervasive fiber will establish itself throughout the developed world, packet protocols will be ubiquitous at all levels of the network, and the resulting converged services or full services network will essentially be an evolved cable network.
The underlying distinctions between cable and telecommunications networks will vanish, and there will be only one wireline technology.
Wireless Broadband

So where does all this leave wireless broadband? The singular strength of wireless broadband access technologies is the degree to which they lend themselves to pervasive deployments. At least in the lower frequency ranges, building a wireless network is largely a matter of setting up access points.
Subscriber radio modems are destined to decline in price over the course of this decade and will eventually take the form of specialized software present as a standard feature in nearly all mass-market computing platforms, a trend that further supports pervasiveness. Wireless networks will increasingly be characterized by their impermanence and flexibility, and as such they will be complementary to expensive albeit extremely fast fiber-optic linkages.
This book, however, focuses on the present, and current wireless broadband networks are not and cannot be completely pervasive, and subscriber modems are not extremely inexpensive, though they are falling in price.
In terms of price and capabilities, wireless broadband is competitive with T1, DSL, and cable; it is better in some respects and inferior in others, and it is highly dependent on accidents of topography as well as local market conditions. This section first describes how wireless speed and capacity compare with those of the major wireline competitors. Unlike a wireline network, a wireless network provides no dedicated physical link to each subscriber: the same spectrum at least potentially serves every customer within reach of a base station, and the network operator depends entirely on the media access control (MAC) layer of the network to determine how that spectrum is allocated.
A network operator could choose to make the entire spectrum available to one customer, but in most cases no single customer is willing to pay a sum sufficient to justify the exclusion of every other party. When a network operator does find a customer who wants to occupy the full spectrum and is willing to pay to do so, the operator will usually link the customer premises with the base station via two highly directional antennas so that the spectrum can be reused in a nearby sector.
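The MAC's role in dividing shared spectrum among subscribers can be illustrated with a toy proportional-share scheduler. This is purely hypothetical (real broadband wireless MACs use far more elaborate request/grant mechanisms), but it captures the basic idea that one pool of spectrum is apportioned among all customers in reach of a base station:

```python
def allocate_spectrum(total_mhz: float, demands: dict) -> dict:
    """Toy proportional-share allocation: each subscriber receives
    spectrum in proportion to its offered demand."""
    total_demand = sum(demands.values())
    return {sub: total_mhz * d / total_demand for sub, d in demands.items()}

# Three hypothetical subscribers sharing a 30MHz allocation
shares = allocate_spectrum(30, {"office_a": 10, "office_b": 20, "office_c": 30})
print(shares)  # office_c, with half the total demand, receives 15.0 MHz
```

A subscriber willing to pay for the entire allocation would simply appear as the only entry in the demand table, which is the full-spectrum case described above.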
In the lower microwave region, even a fairly generous allocation is measured in megahertz (MHz), and 30MHz is about the minimum amount necessary to launch any kind of broadband service. You can derive ultimate data throughput rates by applying the correct bits-per-hertz ratio to the allocated bandwidth. In the lower microwave region, current-generation equipment can manage 5 bits per hertz under strong signal conditions, and that number may be expected to rise over time.
Bear in mind, however, that the comparison with wireline is not entirely apt: in a VDSL2 installation each DSL line delivers its full rate to a single subscriber, and a network may have thousands of separate lines, whereas the wireless operator's spectrum is shared among all subscribers in a sector. You should also understand that the bits-per-hertz ratio is generally poorer for higher-band microwave equipment (often no more than 1 bit per hertz), so the generous bandwidth allocations in those bands are to some extent offset by the limitations of the equipment in terms of spectral efficiency.
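The arithmetic behind these throughput figures is simple: peak raw airlink capacity is allocated bandwidth times spectral efficiency. A minimal sketch using the figures quoted above (30MHz at 5 bits per hertz in the lower microwave region, 1 bit per hertz in the upper bands; the 1,000MHz upper-band allocation is an illustrative assumption):

```python
def peak_throughput_mbps(spectrum_mhz: float, bits_per_hz: float) -> float:
    """Peak raw airlink throughput: bandwidth (MHz) times spectral
    efficiency (bits/Hz) yields Mbps, since MHz x bits/Hz = Mbit/s."""
    return spectrum_mhz * bits_per_hz

# Lower microwave: a minimal 30MHz allocation at 5 bits/Hz
print(peak_throughput_mbps(30, 5))    # 150 Mbps, shared by the whole sector
# Upper microwave: a generous 1,000MHz allocation at only 1 bit/Hz
print(peak_throughput_mbps(1000, 1))  # 1000 Mbps: big spectrum, poor efficiency
```

Note how the upper band's thirty-fold spectrum advantage translates into far less than a thirty-fold throughput advantage once the poorer spectral efficiency is applied.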
Rather recently, spectrum has been opened above 50GHz in the United States, where spectral allocations in the gigahertz range are available, and these offer the possibility of truly fiberlike speeds. Then, too, it is not difficult to use such microwave equipment in tandem with free-air optical equipment that is also capable of throughputs of gigabits per second.
Thus, aggregate throughputs of 10 gigabits per second (Gbps) could be achieved through an airlink, the equivalent of the OC standard for single-wavelength transmissions over fiber.
You must balance these manifest advantages against the poorer availability of the airlink compared to that of fiber. Not a tremendous number of enterprise customers are interested in very high speed and only moderate availability. Usually, customers for high-throughput connections want a very predictable link. The overall capacity of a broadband wireless network as opposed to the maximum throughput over any individual airlink is largely a function of the number of base stations in the network.
The denser the distribution of base stations, the more customers can be accommodated—provided, that is, that the power of individual transmissions is strictly controlled so that transmissions within each individual cell defined by the base station do not spill over into adjacent cells. The situation is almost analogous to cable networks where capacity can be increased by almost any degree simply by building more subheadends and more individual runs of coaxial copper.
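The relationship between base-station density and capacity can be made concrete with a rough model. This is illustrative only: it assumes the power control described above works perfectly, so every cell can reuse the spectrum (diluted by a frequency-reuse factor), which real deployments only approximate; all figures are hypothetical.

```python
def aggregate_capacity_mbps(num_cells: int, spectrum_mhz: float,
                            bits_per_hz: float, reuse_factor: int = 1) -> float:
    """Network capacity grows with cell count: each cell reuses the same
    spectrum, diluted by the frequency-reuse factor."""
    per_cell_mbps = spectrum_mhz * bits_per_hz / reuse_factor
    return num_cells * per_cell_mbps

# Doubling the base-station count doubles aggregate capacity
print(aggregate_capacity_mbps(4, 30, 5))   # 600.0 Mbps across four cells
print(aggregate_capacity_mbps(8, 30, 5))   # 1200.0 Mbps across eight cells
```

The model also shows why spillover matters: interference between adjacent cells effectively raises the reuse factor, cutting the per-cell figure.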
And unless a base station is equipped with an adaptive array antenna, it cannot remotely compare in capacity with a cable subheadend, which can easily accommodate many thousands of users. Even so, wireless cable service providers could not compete effectively with conventional cable operators, and most went out of business within a few years. I should point out that the difficulty in reaching all potential customers is much greater for operators in the higher microwave bands. Any building that is not visible from the base station is not a candidate for service—it is that simple—so the capacity of any individual base station is quite limited.
Unfortunately, base station equipment intended to operate at these frequencies is usually far more costly than gear designed for the lower microwave region, so the construction of additional base stations is not undertaken lightly. Another extremely important limitation on capacity affecting upper microwave operators is the nature of the connection to the individual subscriber.
Upper microwave services are invariably aimed at business users. Because of line-of-sight limitations, the operator looks for a lot of potential subscribers within a circumscribed space—a business high-rise is the preferred target.
Instead of providing a subscriber terminal to every subscriber—which is a prohibitively expensive proposition because subscriber terminals for these frequencies are nearly as expensive as base station equipment—the operator strives to put a single terminal on the roof and then connect customers scattered through the building via an internal hardwired Ethernet, though a wireless LAN could conceivably be used as well. Distributing traffic to subscribers may mean putting in an Ethernet switch in an equipment closet or data center, and it will undoubtedly involve negotiations with the building owner, including possibly even recabling part of the building.
Successfully concluding negotiations with the owners of all the desirable and accessible buildings is a difficult undertaking, and the network operator is unlikely to be wholly successful there. In contrast, the incumbent telco offering high-speed data services will already have free access to the building—the real estate owner can scarcely deny phone service to tenants and expect to survive—and thus has no need to negotiate or pay anything.
The incumbent can simply offer any services it sees fit to any tenant who wants them. Such real estate issues do not constitute an absolute physical limit on capacity, but in practical terms they do limit the footprint of the wireless network operator in a given metropolitan market. One has only to review the many failures of upper microwave licensees to build successful networks in the United States to realize that, despite the ease of installing base stations compared to laying cables, microwave operators do not enjoy a real competitive superiority.
As was the case with throughput, so it is with total network capacity. Whatever the frequency, wireless broadband does not appear to enjoy any clear advantage.
I have already discussed availability and reliability. Wireless will always suffer in comparison to wireline, and wireless networks are apt to experience a multitude of temporary interruptions of service and overall higher bit error rates. In their favor, wireless networks can cope much better with catastrophic events. Though an airlink can be blocked, it cannot be cut, and the network can redirect signals along circuitous paths to reach subscribers from different directions in the event of strong interference or the destruction of an individual base station. This is not necessarily possible with most equipment available now, but it is within the capabilities of advanced radio frequency (RF) technology today.
In terms of QoS and value-added services, broadband wireless is getting there, but it is generally not on par with the wireline access media.

So what is the competitive position of wireless broadband, and where does it have a chance of success? Some hold that wireless cannot win wherever T1, DSL, or cable is already established. Each of these access methods is fairly well proven, the argument goes. Moreover, the plant is paid for; that is, the infrastructure is fully amortized. Incumbents offering these services can and will temporarily slash prices to quash competitors, so the wireless operator cannot necessarily compete on price even in such cases where wireless infrastructure can be shown to be much more cost effective.
Still, I am not so pessimistic as to concur with the position that wireless cannot compete against wireline under any circumstances and that it must therefore enter only those markets where there is nothing else. The case for wireless is not nearly so hopeless. A great multitude of business customers, perhaps the majority, is badly underserved by the T1 and business DSL services that prevail in the marketplace today. They are both slow and relatively expensive, and the service providers make little effort to make available value adds such as storage, security provisions, application hosting, virtual private networks VPNs , conferencing, transparent LAN extensions, self-provisioning, and bandwidth on demand.
T1 services, to put the matter bluntly, are a poor value. None of these limitations apply to the wireless broadband networks of today, and the wireless operator can compete against incumbent wireline services simply by offering better and more varied service offerings and by being more responsive to subscriber demands—this despite the fact that wireless equipment lacks the capacity or the availability of wireline networks.
Competing for residential customers is much more difficult. Wireless broadband cannot transmit video as effectively as can cable and cannot ensure the same level of availability for voice as twisted-pair copper, so it lacks a core service offering.
All it can provide in the way of competitive services is Internet access and value-added business services. Wireless can, however, afford the user a measure of portability and even mobility, and that may actually be its principal selling point in respect to residential customers. Metricom, the first company to offer public access over unlicensed frequencies, achieved some measure of success by emphasizing portability and later full mobility.
Determining When Broadband Wireless Is Cost Effective

Network operators contemplating a wireless access approach in a given market should consider not only the initial purchase price of the infrastructure but the full cost of deploying and operating it. The early case for wireless ran as follows: if one did not have to lay cable, which was clearly so with wireless, then one neatly avoided the enormous installation costs associated with digging up the street and encasing the cable in conduit or, in some cases, in concrete channels, as well as the expense of running cable through buildings to the final destination at an Ethernet or ATM switch.
Wireless broadband appeared to enjoy a clear and indisputable advantage, and who could possibly suggest otherwise? Several years and hundreds of business failures later I can say only that wireless is less expensive and more cost effective in some individual markets and not in others. Wireless indeed eliminates cabling in the final connection between the operator-owned network node and the subscriber terminal and in that manner avoids a highly significant cost factor.
Cable excavation and installation can run anywhere from a few thousand dollars a mile in rural areas to more than a million dollars a mile in large cities, with median costs running in the tens of thousands of dollars per mile (these figures are based on my own primary research). But such comparisons can mislead. In most cases, fiber access networks require new builds and are inordinately expensive, but not invariably, and any wireless operator who assumes a competitive advantage over a fiber-based service provider because of cost, without clear evidence to support that assumption, may be sadly mistaken.
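Those excavation figures make the basic build-cost comparison easy to sketch. Every number below is hypothetical except the per-mile trenching range quoted above; the point is only that the two builds are dominated by different cost drivers, miles of trench versus base-station count:

```python
def cable_build_cost(route_miles: float, cost_per_mile: float) -> float:
    """Wireline build: dominated by excavation and installation per mile."""
    return route_miles * cost_per_mile

def wireless_build_cost(base_stations: int, cost_per_station: float) -> float:
    """Wireless build: dominated by base-station count; no trenching."""
    return base_stations * cost_per_station

# A hypothetical 20-mile urban route at a median $50,000/mile
print(cable_build_cost(20, 50_000))    # 1000000
# versus, say, five base stations at $60,000 each covering the same area
print(wireless_build_cost(5, 60_000))  # 300000
```

Swap in rural trenching costs of a few thousand dollars per mile and the comparison can easily invert, which is the chapter's point: the advantage is market-specific, not inherent.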
Another major cost factor that was initially ignored in most cost estimates for broadband installations is the so-called truck roll, industry slang for the professional installation of the subscriber terminal. Except in cases where a connection to a subscriber terminal is already in place waiting to be activated, or where a portable wireless device such as a cell phone is involved, something has to be done to make the connection to the network.
Generally in the case of broadband wireless, that something will involve the installation of at least two devices on the subscriber premises, typically an antenna and a radio modem. Because of the vagaries of subscriber terminals, the unfamiliarity of many users with broadband services, and the lower degree of automation in earlier broadband access equipment platforms, most broadband service providers in the past elected to send technicians out to perform the installation and make sure that the service was adequate.
Such is still almost invariably the case with very high-speed services offered to large enterprise users. Because of the normal problems arising when almost any new, complex, and unfamiliar technology is introduced into a computing environment, many customers required two or more truck rolls, each costing approximately as much as the first. Obviously this was a serious problem, one that detracted greatly from the profitability of early broadband services.
Each kind of broadband service poses its own peculiar installation problems, as it happens. Fiber demands precise trimming and alignment of the optical fibers themselves, DSL requires testing and qualifying every copper pair, and cable requires various nostrums for mitigating electrical noise and interference.
With wireless it is primarily positioning and installing the antenna. Wireless may generally be said to pose the greatest installation difficulties, however, at least in terms of the subscriber terminal. In some cases involving a wireless installation, the installation crew has to survey a number of locations on or in a building before finding one where the signal is sufficiently constant to permit operation of a broadband receiver, and that process can consume an entire workday.
Rooftop installations serving several subscribers, which are commonplace in millimeter microwave installations, require the construction of large mountings and extended cable runs back to subscriber terminals and therefore can be very expensive.
And, worst of all, the RF environment is dynamic, and an installation that experienced little interference at one time may experience a great deal at some future time—necessitating, you guessed it, another truck roll. With all current millimeter microwave equipment and first-generation low microwave components, one could expect difficult and costly installations a good deal of the time.
Second-generation non-line-of-sight lower microwave equipment, on the other hand, is usually designed to facilitate self-installation and indoor use, eliminating the truck roll in many cases and appearing to confer a decisive advantage on wireless.
But unhappily for the wireless operator, cable and DSL have their own second generations, and self-installation of either technology is fast becoming the norm. Furthermore, self-installation normally confers no performance penalty in the case of cable or DSL, but with wireless, this is not the case. With an indoor installation the easiest type to perform because the user is not obliged to affix mountings on outside walls or roofs , the effective maximum range of the link is much reduced, and the network operator is consequently obliged to build a denser infrastructure of base stations.
In some instances this considerable added expense may be offset by the reduction in truck rolls, but not always. Incidentally, regarding this matter of truck rolls, outsourcing is rarely advisable. Companies that maintain their own crews of technicians can sustain installation costs at a fraction of those charged by contractors—something to think about when planning a rollout. Assembling and training installation crews may be time consuming, but outsourcing may simply be cost prohibitive.
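The truck-roll trade-off lends itself to a simple break-even check. All figures here are hypothetical: self-installation pays off only when the avoided truck rolls outweigh the cost of the denser base-station grid that indoor terminals require.

```python
def self_install_saves_money(subscribers: int, cost_per_roll: float,
                             extra_stations: int, cost_per_station: float) -> bool:
    """True when avoided truck-roll costs exceed the added build cost
    of densifying the base-station grid for indoor installations."""
    savings = subscribers * cost_per_roll
    added_build = extra_stations * cost_per_station
    return savings > added_build

# 2,000 subscribers at $300 per avoided roll vs. 10 extra stations at $40,000
print(self_install_saves_money(2000, 300, 10, 40_000))  # True: $600k > $400k
# With only 500 subscribers, the denser build is not paid for
print(self_install_saves_money(500, 300, 10, 40_000))   # False: $150k < $400k
```

The same check applies to the outsourcing question: replace the per-roll figure with the contractor's rate and the in-house rate and compare.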
Setting up wireless base stations must be viewed as another major cost factor in building a wireless broadband network.
Directional antennas ordinarily work only with fixed installations where subscriber terminals do not move. But the network operator should spend an equal or greater amount of time concentrating on the business case and on whether a network can in fact be constructed in a given geographical market that sells its services profitably.