The Edge as a Disrupter to EVERYTHING

  • Industry News
  • 11.13.2017

Edge compute is the latest buzzword for something old that is new again. This industry is certainly cyclical: we have gone from centralized to decentralized to centralized, and now to the edge (decentralized again). The disrupting force of edge compute lies in the number of facilities that are over-engineered based solely on a single-facility mentality, without taking IT capabilities into account.

At a recent @7x24Exchange conference, I spoke on the need for IT and facilities to meet somewhere in the middle. At the end of the day, the goal for a data center is to ensure that the applications the business needs to sustain operations remain available. Over the past several years, data centers (and the budgets dedicated to them) have been split into silos: one each for facilities, networking, servers, security, storage, and sometimes others. When budgets become siloed, decisions tend to follow.

The problem with this mentality is that one hand is likely unaware of what the other is doing. This is true of IT and Facilities, and of data centers that are moving, morphing, and reshaping into distributed clouds of information. Colocation facilities generally build to Tier 3 or Tier 4, meaning that critical power and cooling loads are backed up in case of emergency and, in some cases, are fully fault tolerant so that units can be taken down for maintenance without losing redundancy. The shortfall of putting all of our data eggs in one or two of those baskets is that the basket itself becomes the single point of failure. There is a benefit in geographic diversity.

As applications become distributed, allowing them to interact closer to the end user, the needs of those applications change. When an application is backed up with a failover server, there are two instances of the application available. If each of those servers has dual network connections and dual power supplies, then that single application is supported by four network connections and four power supplies. If the application is then distributed to 8 or 10 data centers at the edge, it is technically backed up by as many as 20 network connections, 20 SAN connections, and 20 power supplies, assuming all of the servers are configured to be redundant.
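To make that arithmetic concrete, here is a minimal sketch (not from the article; the per-server figures are assumptions) that counts the connections backing a single application as it spreads across sites, assuming one redundantly configured server per site with dual power, network, and SAN connections:

```python
# Hypothetical sketch: how supporting resources multiply as an
# application is distributed to more edge sites. Assumes one server
# per site, each with dual power supplies, dual network connections,
# and dual SAN connections.

def redundancy_counts(sites: int, per_server: int = 2) -> dict:
    """Count the connections backing a single distributed application."""
    return {
        "application_instances": sites,
        "network_connections": sites * per_server,
        "san_connections": sites * per_server,
        "power_supplies": sites * per_server,
    }

# Two-site failover pair: 2 instances, 4 of everything.
print(redundancy_counts(2))
# Distributed to 10 edge data centers: 10 instances, 20 of everything.
print(redundancy_counts(10))
```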

Herein lies the disruption. Instead of ranking facilities by their Tier level, as an industry we need a new method that rates applications by their risk to business continuity and by the cost of supporting each application across its various locations. This could very easily lead to a new batch of colo application support centers that function like colocation data centers but are built to Tier 1 or 2 and offer floor space at a lower price per square foot, since the capital outlay for the facility is lower than for a Tier 3 or Tier 4.

We can start by assigning a risk factor to each application. On a 1-5 scale, with 5 being the highest risk, we can identify the applications that would be most disruptive to business continuity should they become unavailable. Some applications will sit much higher on that scale than others. Once the risk factor is assigned, we can add multipliers or triggers for the surrounding costs: server allocations, network connections, power connections, storage allocations, core switch allocations, power, lease costs, and the like. In short, track every cost incurred to support an application. The equations at that point become very different, and the cost of the data center cage sitting in that colo gets allocated too. For example:

  • Application A Risk Factor 5
  • Cost of Application (License) x number of sites
  • Allocation cost of the server (server cost / # of applications) with server cost including all hardware and software costs.
  • Network switch cost divided by the number of actual ports (including power, maintenance, network cards, cables, software licenses, and distributed power costs)
  • Second network port (costed as above)
  • Uplink ports, distributed and divided across each network switch
  • SAN costs (storage allocation, power, connections, and fiber)
  • WAN connections (ports, power allocation, and carrier lines)
  • Cost per square foot or Cost per RU of rack space
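As a rough illustration of how these components might roll up, here is a sketch in Python. The class, field names, and dollar figures are hypothetical, not from the article; a real model would pull them from actual invoices and allocations:

```python
# Hypothetical sketch: roll the listed cost components into one fully
# allocated, per-application figure across all of its sites, carried
# alongside the assigned risk factor.

from dataclasses import dataclass

@dataclass
class SiteCosts:
    license: float       # application license per site
    server_share: float  # server cost / number of applications on it
    switch_port: float   # switch cost / ports, incl. power, maintenance, cables, licenses
    second_port: float   # redundant network port, costed the same way
    uplink_share: float  # uplink ports divided across each switch
    san: float           # storage allocation, power, connections, fiber
    wan: float           # WAN ports, power allocation, carrier lines
    space: float         # cost per square foot or per RU of rack space

    def total(self) -> float:
        return (self.license + self.server_share + self.switch_port +
                self.second_port + self.uplink_share + self.san +
                self.wan + self.space)

def application_cost(risk_factor: int, sites: list[SiteCosts]) -> tuple[int, float]:
    """Return the risk factor alongside the fully allocated cost across all sites."""
    return risk_factor, sum(site.total() for site in sites)

# Application A, risk factor 5, deployed to ten edge sites (illustrative numbers only).
edge_site = SiteCosts(license=1200, server_share=850, switch_port=95, second_port=95,
                      uplink_share=40, san=300, wan=220, space=150)
risk, cost = application_cost(5, [edge_site] * 10)
print(f"Risk factor {risk}: total allocated cost ${cost:,.2f}")
```

The point of such a roll-up is simply to put a single, comparable number next to each risk factor so facilities and IT are looking at the same ledger.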

As the money factors in, the decisions change greatly. More critical applications may warrant extra spend, while lower-risk applications may not see the same justification. In our next blog, we will show some real examples.

Carrie (Higbie) Goetz has been involved in the computing and networking industries for over 30 years. Carrie currently works as the Principal/CTO of StrategITcom. She has a broad background in all aspects of IT as a programmer, consultant, project manager, and Fortune 500 executive running IT departments and data centers with multi-million dollar budgets, and has taught at a collegiate level. Carrie has designed data centers for enterprise, colocation, hosting and cloud facilities. She is globally published with articles in 67 countries. She is a featured keynote speaker at various international industry events, end user education forums and conferences. She holds an RCDD/NTS, CDCP, CDCS and has held 41 certifications in the industry throughout the years. She has one telecommunications patent.
Carrie Goetz