News

Stay up to date on the latest in intelligent building solutions, infrastructure, and innovations from Paige Datacom Solutions.
  • Technology News
  • 05.22.2019

But what about Coax?

What about Coax? This question comes up often when we discuss GameChanger cabling for CCTV video distribution. In fact, the broader comparison between twisted pair and coaxial cables arises with respect to costs, bandwidth, Ethernet over Coax (EoC) devices, MoCA (Multimedia over Coax), replacing older coax installations, and media selection decisions in general.

Traditionally, coax enjoyed a reign as the media of choice for both CCTV and residential cable distribution because, like GameChanger, it breaks the 100m barrier, though only at lower bandwidths. The bandwidth that can travel over coax depends on the length of the channel, the strength of the signal, and other factors. Sending video alone over coax is probably the easiest transmission. Today, however, video alone is simply not enough for most enterprises; the need to pass Ethernet traffic is equally important.

Ethernet over Coax requires media converters, and the 100m rule still applies to the twisted pair segments. Generally, the coax connects to the long-haul port of the converter/transceiver, and a traditional patch cord connects the transceiver to the end device. These transceivers run on average about $250 each for 10/100 and near $400 for Gigabit. Although these devices have gigabit ports, they do not actually extend a gigabit signal over the coax backbone (see MoCA below); the transmission speed decreases with length, with the maximum generally around 144Mb/s over the coaxial link. One device is needed at the central location and one for each end device.
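To see how those transceiver counts add up, here is a small illustrative calculation; the per-unit prices are simply the averages quoted above, and real projects will vary by vendor and device count.

```python
# Illustrative sketch of Ethernet-over-Coax transceiver costs using the
# approximate prices quoted above; actual pricing varies by vendor.

def eoc_transceiver_cost(end_devices: int, price_per_transceiver: float) -> float:
    """One transceiver at the head end per coax run, plus one at each end device."""
    return end_devices * 2 * price_per_transceiver

# Example: ten 10/100 links at ~$250 per transceiver, ten Gigabit links at ~$400.
print(eoc_transceiver_cost(10, 250.00))  # 5000.0
print(eoc_transceiver_cost(10, 400.00))  # 8000.0
```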

MoCA can be bonded (multiple channels together) to allow for gigabit transmissions over coax. For full MoCA to work, one adapter is needed at the end point of each coax cable, with an RJ45 category patch cord connecting the end device. To receive a full gigabit signal over MoCA, one needs a bonded MoCA 2.0 adapter. These are popular in residential applications where coax is already distributed. Each MoCA adapter requires a connection to AC power, which increases the cost of the channel compared to a PoE link over twisted pairs.

When AC power is required, an AC power point must be installed, the cost of which can range from $200-$800, with the average being about $350 for a simple install. Alternatively, an additional power-carrying pair on a Siamese coax/18-2 cable can be used. In this scenario, the termination time is slightly longer because both the coax and the 18/2 power-carrying cable must be terminated. Should either side of this cable develop a fault, either the entire cable must be replaced, or the failing media side must be replaced and re-terminated. The additional cable width may be impossible to accommodate in the conduit or pathway provided, and the power is limited by the conductor size and code requirements.

A PoE switch can deliver Ethernet and power natively over category twisted pair without additional hardware in 100m configurations. For GameChanger, the supported distances are 850' for 10Mb/s Ethernet and 656' (200m) for 1Gb/s without any additional devices: the switch supplies power, and the end device receives Ethernet and PoE over the same cable.

The same is not true for coax communications that need PoE. PoC (Power over Coax) adapters are available, although many varieties are active, meaning they require their own power connection. The bandwidth at 200m is still only 100Mb/s, as opposed to Gigabit with GameChanger. But more notable are the power limitations of many of these devices.

GameChanger can fully support all classes/types of PoE over the full 200m. This makes features like PTZ (pan, tilt, and zoom) available to the end device without having to include an AC power point at the end of the channel or additional transceivers in the channel. For PoE voltage loss charts, click here. The charts show full support for all types and classes of PoE using GameChanger.
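As a rough sanity check alongside the charts linked above, the DC voltage drop on a PoE channel can be approximated from conductor resistance and current. The sketch below is a simplified two-pair model with a placeholder resistance value, not data from the GameChanger charts; consult the cable datasheet for real figures.

```python
# Rough, illustrative PoE DC voltage-drop estimate. Assumes 2-pair PoE
# delivery: current flows out on one pair (two conductors in parallel) and
# returns on another. The ohms-per-meter value is a hypothetical placeholder.

def poe_voltage_drop(length_m: float, current_a: float,
                     ohms_per_m_per_conductor: float) -> float:
    pair_resistance = ohms_per_m_per_conductor * length_m / 2   # two conductors in parallel
    loop_resistance = 2 * pair_resistance                       # out on one pair, back on another
    return current_a * loop_resistance

# Example: a 200 m channel at 600 mA with an assumed 0.08 ohm/m conductor.
print(round(poe_voltage_drop(200, 0.6, 0.08), 2), "volts dropped")  # 9.6 volts
```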

For a dollars-and-cents comparison based on MSRP at 850' and 656', contrasting the cost of the coax, transceivers, and installed AC power connections, GameChanger is less expensive on a job than the equivalent coax with adapters and AC connections. The GameChanger cable itself is slightly more expensive, but the overall complexity, risk, and points of failure are reduced to the endpoints, as opposed to every 100m (328') with Ethernet over Coax.

 


                         Extenders and AC Power    Extenders Only    GameChanger w/ PoE+
10Mb/s 850' (259m)       $1,336.45                 $586.45           $429.00
1Gb/s 656' (200m)        $1,731.71                 $981.17           $328.00


We acknowledge that there is a wide variety of adapters, pricing, and quality. Likewise, there are distance limits based upon the switch launch power and wire gauge configurations. The numbers above are based on the average price of equipment, $250 per AC power outlet, and average transceiver prices as available at the time of writing.
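For readers who want to rerun the comparison with their own quotes, a minimal channel-cost sketch follows. The device counts and unit prices are placeholders, not a reproduction of the table above.

```python
# A minimal channel-cost model in the spirit of the comparison above.
# All unit prices and device counts are placeholders; plug in your own quotes.

def channel_cost(cable_cost: float, extenders: int, extender_price: float,
                 ac_outlets: int, ac_outlet_price: float = 250.00) -> float:
    return cable_cost + extenders * extender_price + ac_outlets * ac_outlet_price

coax_with_power = channel_cost(cable_cost=150.00, extenders=2, extender_price=250.00,
                               ac_outlets=2, ac_outlet_price=250.00)
gamechanger     = channel_cost(cable_cost=429.00, extenders=0, extender_price=0.00,
                               ac_outlets=0)
print(coax_with_power, gamechanger)  # 1150.0 vs 429.0 with these assumed inputs
```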

  • Technology News
  • 01.10.2019

Testing RFC 2544 and GameChanger

The IETF (Internet Engineering Task Force) document identifying the test parameters is designed to stop what they call "specmanship," the practice of vendors positioning themselves in the marketplace with smoke-and-mirrors numbers that confuse potential users of a product. The tests are available for vendors to use and provide users with results in an apples-to-apples, device-to-device format. In this case, the traffic is generated and analyzed by a pair of Viavi T-BERD 5800 testers, and the media over which the tests are run is the GameChanger cable. Connections are made from the transmit of one device to the receive of the other.

In the testing scenario, the DUT (Device Under Test) is connected to the send and receive ports on a tester: the send of the DUT is connected to the receive of the tester, and the receive of the DUT is connected to the send port of the tester. The idea of the tests is to determine frame loss, latency, and other parameters over the transmission. This test differs from some other Ethernet tests in that it can be used both on the Local Area Network and over carrier Ethernet WAN links. This allows testing of a variety of equipment that can be placed inline between testers, and it can certify performance for extended demarcations and other latency-sensitive equipment. The question of testing arose over just such a query, where additional devices made it very difficult to determine where the slowdown occurred and which equipment was at fault.

The tests are performed with a variety of frame sizes, or at a minimum, the smallest and largest frame sizes used by the protocol under test on the media under test, plus enough sizes in between to get a full characterization. The IETF recommends that at least five frame sizes be used. For Ethernet, the recommended frame sizes are 64, 128, 256, 512, 1024, 1280, and 1518 bytes. Frames such as keep-alive frames and routing update frames should be discarded in the test.
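For illustration, the structure of an RFC 2544 throughput sweep over those recommended frame sizes looks roughly like the sketch below. The trial_loss() function is a toy stand-in for a real tester trial, not a Viavi API.

```python
# Structural sketch of an RFC 2544 throughput sweep (section 26.1). The frame
# sizes are the ones the RFC recommends for Ethernet; trial_loss() is a
# simulated stand-in for a tester trial.

ETHERNET_FRAME_SIZES = [64, 128, 256, 512, 1024, 1280, 1518]  # bytes

def gige_line_rate_fps(frame_size: int) -> int:
    # 1 Gb/s divided by frame bits plus 8-byte preamble and 12-byte inter-frame gap
    return 1_000_000_000 // ((frame_size + 20) * 8)

def trial_loss(frame_size: int, rate_fps: int, duration_s: int) -> int:
    """Hypothetical stand-in for a tester trial; returns frames lost.
    Here we pretend the link drops nothing below 95% of line rate."""
    return 0 if rate_fps <= int(0.95 * gige_line_rate_fps(frame_size)) else 1

def throughput_fps(frame_size: int, duration_s: int = 60) -> int:
    """Binary-search the highest frame rate with zero loss at this frame size."""
    low, high, best = 0, gige_line_rate_fps(frame_size), 0
    while low <= high:
        rate = (low + high) // 2
        if trial_loss(frame_size, rate, duration_s) == 0:
            best, low = rate, rate + 1
        else:
            high = rate - 1
    return best

for size in ETHERNET_FRAME_SIZES:
    print(f"{size}-byte frames: {throughput_fps(size)} fps with zero loss")
```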

The Viavi testers offer complete TIA 568 and ISO 11801 copper and fiber certification along with the RFC testing mentioned. This adds Viavi to our list of test manufacturers that can test and certify GameChanger cable before the active equipment is added to the channel. It also means that the cables can be tested as part of extended demarcations and other latency-sensitive uses prior to devices being added. Introducing electronics, repeaters, and other inline devices adds risk, complexity, and security exposure; in fact, these risks are so great that in some secure environments such devices simply aren't allowed. That means additional IDFs/telecommunications areas must be defined and deployed at a far heftier price tag than a simple length-optimized cable will cost in a deployment.

To view the test results, click here. For more information on GameChanger cable and other Paige DataCom products, please visit our webpage or contact your local Paige representative.

  • Technology News
  • 06.04.2018

Lengthonomics at 1006' with no repeaters. Yes, really!

As more and more companies learn the value (and savings) of lengthonomics, Code Blue has mastered those savings with GameChanger, their CB 5-s unit, and VoIP phones. With 802.3af PoE (Power over Ethernet), the channel distance achieved was the full 1000' reel plus 3' patch cables at each end, for an overall channel length of 1006'. Code Blue is a manufacturer of security devices including incident response, emergency signaling, help points, and systems management, all in an open format that enhances interoperability with other systems. The CB 5-s is an economical pedestal Help Point highlighted by a high-intensity LED beacon/strobe light that provides exceptional visibility and acts as a deterrent to potential crime. This blue light emergency tower provides direct communication with first responders and can extend security efforts to walkways, parking lots, open campus areas, and more. According to the Code Blue Tech team,

"We hooked it all up and used our CB 5-s unit with your cable and it works great. Overall distance is 1006' connected to a 802.3af switch. The cable runs into a PoE splitter in our unit. The splitter then is powering our blue light, our faceplate light, and our IP5000 speakerphone. This is a nice product."

Adding GameChanger to a Code Blue install provides significant savings over installing IDFs or harsh-environment power sources every 100m, as the GameChanger channels in this application support 3x the standard distance with these Code Blue devices. With a variety of products that operate at the building perimeter and in outside locations like parking lots, having a cable that can carry the signal farther is particularly attractive for more than just economic reasons. Any time you eliminate an intermediate point of failure, you decrease the risk of a device going down or being tampered with by those who would benefit from it going down. With security applications, this is intrinsically desired and allows installers to provide savings to their customers for more competitive bids.

To understand more about the Code Blue portfolio of products, a short description of their main product lines follows. Help Points provide a means for the public to access emergency services in places like parking lots, walkways, public retail areas, and parks. These points can be an intercom, phone, hot button, or some combination, and they are highly customizable for mounting, camera integrations, and other options. Code Blue's emergency signaling is more than just an intercom: the solutions are full duplex, allowing first responders to interact with those generating the signal or with people inside a building, and the devices can trigger actions such as flashing lights, opening AED doors, and integrating with access control systems. They can be equipped with bezels, buttons, and keypads, and come in a variety of sizes and mounting options. All of these are supported by a system interface solution that also acts as a gateway between analog and VoIP and can interface with incident response systems.

To learn more about Paige's two-time award winning GameChanger cable, part of our complete line of intelligent building and data center products, visit https://paigedatacom.com/gamechanger, or contact your friendly Paige sales person. More information on the awards (CI&M Platinum Innovators Award and ISC West, Best in Video Surveillance Accessories) can be found on our blog.

About Code Blue Corporation: Safety has always been the No. 1 priority for Code Blue Corporation (www.codeblue.com). Located in Holland, Michigan, the industry pioneering manufacturer of emergency communication solutions provides assistance to people by handcrafting products that are reliable and accessible. From our iconic blue light phone pedestals to our award-winning software, we help people feel safe by offering durable and visible security solutions that provide help at the touch of a button, while assisting first responders before, during and after an incident with a complete end-to-end system that utilizes alerting, managing, archiving and responding technology.

  • Technology News
  • 05.08.2018

Lengthonomics

Lengthonomics and the Benefits of Reach

Top-of-rack switching brought about shorter system interconnect cables to connect servers in a rack to a switch in the same or an adjacent rack. The arguments for short-reach cables stemmed primarily from a desire to avoid the blobs of structured cabling caused by data centers that grew out of need rather than planning, insufficient room for larger-diameter Category 6A cables in pathways and cabinets, and poor change management. At the time, moving to fiber in pathways and copper in cabinets seemed to make perfect sense. However, this was not true in all cases.

While hyperscale data centers and data centers that support higher kW per cabinet can make use of all the switch ports in a cabinet, data centers at lower densities can't. So those lower-density data centers began using longer-reach cables to connect adjacent cabinets, or in some cases Active Optical Cables (AOC) to reach end-of-row cabinets and those elsewhere in the data center, taking advantage of the "lengthonomics" of longer cables, which open up switch ports to a greater number of servers in more cabinets than a short-reach cable allows. However, stringing longer cables without proper pathway planning can lead to exactly the mess of cables that short-reach cables were supposed to stop.

A Blast from the Past

Data centers used to have main (central) distribution areas for networking, much as they still do for storage arrays. Today you see a mix of solutions in most data centers. The tide in many data centers is shifting back to more centralized solutions as budget-savvy employees pay attention to the benefits of reach. This is also true for fiber in the riser systems of intelligent buildings and even for extended-reach structured copper products (like GameChanger). Longer lengths of cable allow you to purchase fewer pieces of intermediate electronics and eliminate the need for intermediate areas requiring power and equipment. That is "lengthonomics."

So how does one determine the lengthonomics of any cable run? Take into account the entire communications channel, inclusive of all active equipment contained within it. In addition to the equipment included, risk factors should be part of the decision: the more equipment you have, the more potential points of failure exist. The total power consumption of all active equipment is another factor in the equation.
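One way to make that evaluation concrete is a simple channel tally like the sketch below; the prices, wattages, and ten-year horizon are illustrative assumptions, not quotes.

```python
# A minimal "lengthonomics" tally: count the full channel, not just the cable.
# All figures are placeholders; power cost assumes a flat $/kWh rate.

def channel_totals(cable_cost: float, devices: list[dict],
                   kwh_rate: float = 0.13, years: float = 10.0):
    capex = cable_cost + sum(d["price"] for d in devices)
    watts = sum(d["watts"] for d in devices)
    opex  = watts / 1000 * 24 * 365 * years * kwh_rate
    points_of_failure = len(devices)
    return capex, opex, points_of_failure

# Example: a long run with two inline extenders vs. a single longer-reach cable.
with_extenders = channel_totals(150.00, [{"price": 250.00, "watts": 6},
                                         {"price": 250.00, "watts": 6}])
direct_run     = channel_totals(429.00, [])
print(with_extenders)  # (650.0, 136.656, 2) with these assumed inputs
print(direct_run)      # (429.0, 0.0, 0)
```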

Fiber Lengthonomics

For fiber applications, multimode fiber has had an advantage due to the lower cost of the electronics, so from point A to point B, inclusive of the electronics, the channels were significantly less expensive. At today's increasingly higher speeds, however, that advantage for multimode fiber is diminishing. Hyperscale data centers consume a lot of single mode electronics, which has driven the price down from a cost factor of about 10x their multimode counterparts to roughly 2-3 times that cost today. This alone is driving more backbones and data centers to single mode.

There are other benefits of a single mode infrastructure in that there are only two strands to worry about. Single mode supports all applications (SAN, WAN, backbone, networking) over two strands, so there is no need to have a mixed infrastructure where single mode handles only the longer connections and multimode, in its various iterations, supports the shorter lengths. Large core switches can serve a greater area, eliminating intermediate switches and points of failure. The cost of those switches is slightly higher, but the power consumption of a single core switch can be significantly less than a core switch with multiple edge switches. Two strands take a lot less room in pathways than multiple strands. The fiber is less expensive. And longevity is a very persuasive economic factor.

Multimode fiber has had several iterations, from OM1 to now OM5. With higher speeds there are multiple strands to contend with, making polarity more difficult to manage. And each iteration that is installed, removed, and replaced carries 3x the labor burden of one OS1/OS2 channel that has survived all the multimode iterations. In fact, even with the higher electronics costs of prior years, in hindsight single mode may have been the more economically responsible solution once electronics and cable replacements are taken into account.
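A back-of-the-envelope version of that argument, with purely hypothetical install costs, might look like this:

```python
# Illustrative arithmetic behind the "3x the labor" point: each multimode
# generation that is pulled out and replaced repeats the install labor, while
# one OS1/OS2 channel is installed once. All dollar figures are hypothetical.

def cumulative_cost(installs: int, cable_per_install: float, labor_per_install: float) -> float:
    return installs * (cable_per_install + labor_per_install)

multimode_three_generations = cumulative_cost(3, cable_per_install=400.00, labor_per_install=300.00)
singlemode_once             = cumulative_cost(1, cable_per_install=450.00, labor_per_install=300.00)
print(multimode_three_generations, singlemode_once)  # 2100.0 vs 750.0, before electronics
```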

Copper Lengthonomics

For copper data center applications, the same calculations should be made. Within lower-density data centers, it may be more cost-effective to use end-of-row equipment than top-of-rack to take advantage of the extended distance a longer-reach cable can provide, so that fewer switches are needed. In fact, the Communication Cabling and Connectivity Association published a paper outlining these considerations. Proper planning can provide economic advantages, with cabling zones that maximize the use of ports in electronics and lower power. In some data centers, a combination of short-reach cables for higher-density areas may complement end-of-row zones for lower-density areas.
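The port math behind that trade-off is straightforward; the sketch below uses illustrative cabinet, server, and port counts to show how end-of-row consolidation reduces switch count at low densities.

```python
# At low server counts per cabinet, a top-of-rack switch in every cabinet
# strands ports, while an end-of-row switch reached by longer cables can be
# filled. Counts are illustrative.
import math

def tor_switches(cabinets: int, servers_per_cabinet: int, ports_per_switch: int = 48) -> int:
    return cabinets * math.ceil(servers_per_cabinet / ports_per_switch)

def eor_switches(cabinets: int, servers_per_cabinet: int, ports_per_switch: int = 48) -> int:
    return math.ceil(cabinets * servers_per_cabinet / ports_per_switch)

# Example: a 12-cabinet row with only 10 servers per cabinet.
print(tor_switches(12, 10))  # 12 switches, 120 of 576 ports used
print(eor_switches(12, 10))  # 3 switches, 120 of 144 ports used
```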

Building Lengthonomics

For intelligent building applications, lengthonomics also applies. Every time you have to add an IDF, repeater, transceiver, or other means to reach an extended distance, you must consider the risk factors, additional costs (CAPEX and OPEX), and additional security requirements. Longer SMF runs may decrease equipment needs. For copper, GameChanger can fully support Gigabit Ethernet (GigE) to 656' with PoE+ support, removing many of the risk factors for WAPs and other devices that fall just outside the 100m mark.

Likewise, other access control and security devices that have traditionally operated over heavier-gauge solid wire do not translate well into twisted pair channels and in some cases violate codes, depending on the type of signals that traverse the cables. Again, length matters and can provide a savings.

In summation, very little with regard to copper and fiber cabling should be an afterthought. There are significant savings to be had by taking advantage of reach. Risk, and avoiding additional points of risk, should always be a consideration, and the old adage "KISS" (Keep It Simple, Silly) still rings true. Install once, use a long time, and reap the rewards. For further information and ways to calculate your lengthonomics, consult your friendly Paige salesperson.

  • Technology News
  • 03.22.2018

State of the Data Center - Tides are Shifting

Full credit for one survey goes to AFCOM and the Data Center Institute. A link to their site can be found at www.afcom.com, and from there you can learn more about the exciting work the Data Center Institute is doing in support of vendor-neutral trends in the industry. The topics below are based partly on this study and partly on other available studies in the industry.

Surprise 1 - The majority of respondents in the AFCOM/DCI survey said that they will be adding capacity to their own data centers. This is a far cry from "everyone is going colo." While colos play an integral part in the data center space, they have gotten the lion's share of attention as THE place to house company data. The truth remains that many enterprises use a hybrid model of some in-house, some colo, and some cloud facilities. The new lease accounting rules set to take effect in 2019 may have an impact on colos, as the OPEX benefit is no longer the attraction it used to be: in short, the full amount of the lease must now be shown on a company's balance sheet, and no ramp-up periods are allowed. The government accounting office estimates $3 trillion in undisclosed debt due to the way the reporting happens today. This alone may drive more attention to enterprise-owned data centers, as on-prem data centers also realize depreciation offsets that are not available when someone else (the colo) owns the equipment.

Surprise 2 - DCIM is not getting the traction that some folks expected and predicted. Across the board, when one examines the predictions, barriers to adoption, and other market indicators, DCIM is not always the answer. A few things stand out in the barriers-to-adoption category. One, there isn't a quick and painless path to implementing DCIM: even if a company has full documentation of its data center, in most cases that work will have to be redone in the DCIM package, and in large data centers that represents a significant amount of time. Barrier two is that some DCIM packages are far too complex for the needs of the DC; a company that went down the route of managed power long ago will have different DCIM needs than a company starting from scratch. Barrier three is that most colocation facilities provide this monitoring themselves, so it is not needed across an individual company's space. Barrier four lies in software-defined technologies that put resources in "virtual" locations, so cabinet-level management turns into implementation and decommissioning tasks only, as the assets themselves don't move anymore.

Surprise 3 - Cloud isn't the end-all be-all that people expect. Personally, I believe this is due in part to the fact that end users have had mixed luck with cloud services, most notably personal email accounts. When the cloud providers give individuals the same experience they give companies, public cloud applications will receive a better reception. Another barrier to full public cloud adoption is security: quite frankly, every company has some "secret sauce" or other information that it would never place in a public cloud where it had to relinquish control of security. Hybrid cloud platforms are here to stay.

Not a Surprise 4 - There is a critical shortage of IT personnel, and this trend is not getting better. There is also an inequality of women in IT proportionate to men, and even more of a divide at the top. I have spoken on several panels on women in the industry. The greatest advocates seem to be fathers, but no clear solution has ever emerged as "the" solution. We need to attract talent at a far younger age; high school and college are too late to begin fostering an interest in STEM or STEAM careers. Further, trade schools fell out of favor, and many of the two-year programs went by the wayside with them. Many organizations are beginning to offer data center career paths, but they are few and far between. Certifications are one way to launch a career in IT or data centers, but for these to be effective, companies need to recognize and hire those without four-year or master's degrees into their IT departments. We are certainly missing out on some highly technical and able people whose resumes get passed over for not meeting an education requirement because they were busy working instead of being in school.

Not a Surprise 5 - Consolidation is a continuing trend in IT. It seems that every five years or so we go from a myriad of options to a few. That trend will continue, and new companies will continue to sprout around new technologies and solutions to everyday problems. Our hats are off to everyone who contributes to the industry in all capacities. The tools we have available today to support our data needs are vast! We look forward to seeing where things go next year!

  • Technology News
  • 07.23.2017

Are Fabrics/Unified Computing the Solution for Rapid Deployment and Ease?

This question has been posed to me several times. My answer is…that depends on the question! Fabrics do provide advantages for some and not so much for others, for instance in a cluster environment for high-speed computing, or when a company needs to rapidly deploy storage, servers, and networking. However, I think the real answer is that there isn't an "end all be all" for every data center, as needs vary from one DC to another and from one application to another within a data center.

The real answer is that there isn’t an “end all be all” to every Data Center as needs vary from one to another.

First and foremost, a business needs to perform risk management across the depth and breadth of its applications, data stores, and talent. The assessment needs to include the worst "what if" scenarios should the equipment and application not be available. Once you know the business risks, you need to evaluate your business goals and what IT can do as a service to grow and support your business. It is rare these days for any data center to do a wholesale upgrade in an existing facility. Rather, upgrades happen through attrition, or in some cases due to a move or another site coming online because of capacity issues and/or consolidation.

The process of determining whether a solution is great or not begins after you have a direction and a risk assessment. This step I call due diligence, and I view it as the most critical step. During the due diligence phase, IT should be tasked with testing the hype (see next week's blog on FCoE versus Fibre Channel for a good start here). When I say testing the hype, I mean pick apart the marketing claims. Remember, it isn't a savings if you weren't going to spend that money to begin with. Also, plug the hype back into the risk assessment. One throat to choke may sound good, but putting all your eggs in one basket can introduce risk as well. Evaluate the fabric supplier. Ask the following:

  • Where is your manufacturing and how many facilities do you have? (Think of the recent tsunami or any other natural disaster – this could have a negative impact on your business if you lock into a single vendor and they aren’t as redundant as you thought).

  • What is the longest lead time you have exhibited for a product in the last 5 years and why? (Barring natural disaster, how is their supply chain set for influxes of orders, etc.)

  • How many internal staff do you have dedicated to this solution, and how long have they been with the company? (During acquisitions, some talent leaves and others shift. This is your line of support when the going gets tough).

  • How much stock do you keep on hand for replacement parts? (Obvious)

  • Can you provide power calculations on each component? (Often you see literature hype up low power at a switch port for instance but the required network card is significantly higher).

  • Do you support open systems? (You need an attrition path and options if any of the above fail. Many active electronics manufacturers are locking down their cables so that you have to buy their cables for a system to work, for example. You want open systems so you aren't locked into a single solution, which decreases risk. What happens if you consolidate data centers? Do you have to move everything to the new stuff, or can you move your old stuff and have it work well in the playground?)

  • How much of my current equipment will work with the solution?

  • With which complementary partners can you demonstrate interoperability?

  • What do you find fallible in your competition's products? (Slippery, but informative!)

  • Find independent test results where possible and understand the testing and conditions under test. Companies like the Tolly Group and others do independent benchmark testing under LIKE conditions. The keyword being LIKE.

  • Lastly, do yourself a favor: look at their case studies and press releases and call those companies to find out whether the equipment worked without failure and how it has performed over time. This is hard to do with new solutions, but it surfaces a wealth of gotchas to consider before you embark on your journey. Skip the references they give you; those people will only say wonderful things.

  • Technology News
  • 06.15.2017

PDU Efficiency Matters! Metrics at Work.

What if you found out you were paying for power you were not consuming? What if you were working on much-needed metrics in your data center only to find out that your input numbers were off by a few percent? What if you found out your checking account was out of balance by a few percent? None of these is acceptable by any stretch of the imagination.

PUE, CUE, SCE AND DCCE

Metering is important for PUE at a minimum, but more importantly for other metrics such as ScE (Server Compute Efficiency), Carbon Usage Effectiveness (CUE), and other efficiency metrics; all require at least some level of IT energy monitoring. PUE was the first metric we began using in the data center, but there are a few problems with it, especially in how it is reported. PUE version 2 helps data centers report not just a PUE number but also how often and where the metric is measured. Still, most organizations have realized that PUE is nothing but a start when examining the efficiency of a data center.

Other useful metrics have been developed to determine other data center efficiencies. For a full list and to participate in the document process, I highly recommend membership in The Green Grid. CUE measures IT energy in ratio to CO2 emissions:

CUE = Total CO2 emissions caused by the Total Data Center Energy / IT Equipment Energy

Unlike PUE, which strives to be as close to 1.0 as possible, the goal of CUE is to approach 0.0. But again, accurate measurements are critical.
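As a worked example of the ratio, with illustrative energy figures and an assumed grid emission factor:

```python
# Worked example of the CUE ratio defined above, with illustrative numbers:
# a facility drawing 2,000,000 kWh/yr total, 1,400,000 kWh/yr of it by IT gear,
# on a grid emitting an assumed 0.4 kgCO2e per kWh.

total_dc_kwh   = 2_000_000
it_kwh         = 1_400_000
kg_co2_per_kwh = 0.4                       # assumed grid emission factor

total_emissions_kg = total_dc_kwh * kg_co2_per_kwh
cue = total_emissions_kg / it_kwh
print(round(cue, 3), "kgCO2e per IT kWh")  # 0.571
```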

The best form of monitoring is at the outlet level. While strip/cabinet level monitoring will provide some information, it will do little to help a center determine the effectiveness of individual pieces of processing equipment. For instance, building on other metrics one can determine how well individual pieces of equipment perform.

Another useful metric of efficiency is ScE, or Server Compute Efficiency, which is a subset of DCcE, or Data Center Compute Efficiency. ScE is a time-based metric that looks at the time a server spends providing primary services. The assumption is that secondary services (virus protection, patch management, etc.) would not be necessary if the primary services were not live. Beyond ScE, weighing the time of server CPU activity against the power consumed by the server can provide meaningful information, on a server-by-server basis, about the effectiveness of the power consumed by those CPU cycles. Again, accurate power numbers are critical.
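A minimal sketch of the ScE calculation, using a toy sample list in place of real server polling:

```python
# Minimal ScE sketch following the description above: sample whether a server
# is doing primary (business) work, and report the fraction of time it is.
# The sampling source here is a toy list; a real implementation would poll
# process or application counters.

samples = [True, True, False, True, False, False, True, True, True, False]  # primary service active?

sce = 100.0 * sum(samples) / len(samples)
print(f"ScE = {sce:.0f}%")   # 60% of sampled intervals spent on primary services

# DCcE is then the average ScE across all servers in the data center.
```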

PDU MARGINS FOR ERROR

As data centers ramp up their level of metering, one often-missed factor is the monitoring equipment's margin for error. Looking at several PDU specification sheets available online, the variance is quite large: stated accuracies range from 1% to over 5% margin of error. To put this in simple terms, if your bank balance is $1,000.00 and you have a 1% margin for error, the money could range from $990.00 to $1,010.00. At 5%, the balance could range from $950.00 to $1,050.00, a swing of $10 versus $50 per side. While this may not seem like much, power consumption in data centers far exceeds $1,000. In 2013, according to NRDC.org, data centers in the US consumed an estimated 91 billion kilowatt-hours of electricity at a cost of nearly $13 billion.

Average commercial power rates in the US range from 8.64 to 36.90 cents per kWh, with the average near 13 cents. For a 24x7 operation, 365 days a year, at a constant 1,000 kW (1MW) load and $0.13 per kWh, the total annual cost is $1,138,800. A 1% variance on that number is $11,388, and a 5% variance could cost a whopping $56,940. Over 10 years in a data center, that amounts to over half a million dollars.
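The arithmetic behind those figures is easy to reproduce:

```python
# The arithmetic behind the figures above: a constant 1,000 kW load, billed at
# $0.13 per kWh, with the metering error bands applied to the annual total.

load_kw, rate = 1_000, 0.13
annual_cost = load_kw * 24 * 365 * rate
print(annual_cost)              # $1,138,800 per year
print(annual_cost * 0.01)       # +/- $11,388 at a 1% metering error
print(annual_cost * 0.05)       # +/- $56,940 at a 5% metering error
print(annual_cost * 0.05 * 10)  # $569,400 over ten years
```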

As metrics become increasingly important as a tool to increase efficiency in data centers and data center equipment, the accuracy of the monitoring must be considered. No one wants to overpay, and no, it isn't productive to count on negative variance either. When looking at PDUs, look at the metering accuracy. When working with colocation providers, especially if they are passing on used power costs, find out what PDUs they are going to use for measuring. If you are comparing two sites, that 5% could make the difference in your decision.
