
A successful data centre transformation

Panduit FlexFusion Cabinet. (Image source: Panduit)

To achieve a successful data centre transformation, scalable, future-proof designs and careful planning are key. They allow an organisation to understand performance requirements, potential challenges, and future needs while maintaining smooth and efficient operations amid continuous, dynamic technology evolution.

Cloud and edge computing are changing the way data centre infrastructure is designed and managed. Edge data centres will have a presence in less traditional locations than in the past. Therefore, edge deployments must consider a ‘self-sufficient’ approach, including converged or fully integrated solutions that incorporate the IT gear, physical infrastructure, power, cooling, and security. Staying flexible and scaling up existing deployments may also present a challenge, as many of these systems will be dispersed and competing for space and access.

One way of addressing this is to add intelligence and visibility to these remote edge deployments. The ability to remotely check power status and cooling thresholds, and to confirm the IT investment is locked and secured, is extremely beneficial. A primary step in this direction is deploying the Panduit SmartZone solution, which consolidates power monitoring with the relevant environmental monitoring in a single platform.
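As an illustration of that kind of remote visibility, the short Python sketch below polls each edge site for power, environmental, and security status and flags anything out of bounds. The endpoints, field names, and thresholds are hypothetical placeholders for illustration only, not the actual SmartZone API.

```python
# Minimal sketch of remote edge-site health polling. The endpoint layout,
# field names, and thresholds below are hypothetical, not a real
# Panduit SmartZone API; they only illustrate the monitoring pattern.
import requests

EDGE_SITES = ["https://edge-site-01.example.com", "https://edge-site-02.example.com"]
TEMP_MAX_C = 27.0               # example inlet-temperature ceiling
HUMIDITY_RANGE = (20.0, 80.0)   # example acceptable humidity band, in percent

def check_site(base_url: str) -> list[str]:
    """Return a list of alarm strings for one edge deployment."""
    status = requests.get(f"{base_url}/api/v1/status", timeout=5).json()
    alarms = []
    if status["inlet_temp_c"] > TEMP_MAX_C:
        alarms.append(f"inlet temperature {status['inlet_temp_c']} C over threshold")
    if not HUMIDITY_RANGE[0] <= status["humidity_pct"] <= HUMIDITY_RANGE[1]:
        alarms.append(f"humidity {status['humidity_pct']}% out of range")
    if status["cabinet_door_open"]:
        alarms.append("cabinet door open / lock not engaged")
    if status["pdu_load_pct"] > 80:
        alarms.append(f"PDU load {status['pdu_load_pct']}% approaching capacity")
    return alarms

for site in EDGE_SITES:
    for alarm in check_site(site):
        print(f"{site}: {alarm}")
```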

The Covid crisis has accelerated the use of automated tools and systems in data centres, notably where staff have had restricted access to their office locations and must rely on remote access to monitor and address any issues that arise. Moving select workloads, tasks, and applications to cloud-based services, where they can be handled remotely and autonomously, is an accelerant for on-premise data centres to reach their automation goals.

Investing in DCIM software and hardware paves the way to on-premise automation, particularly when combined with emerging technologies such as Artificial Intelligence (AI) and machine learning. For example, data centre managers can optimise operations and improve resilience by predicting equipment wear and tear, regulating temperatures, and anticipating where human intervention may be required.
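A minimal sketch of that predictive idea, using made-up sensor readings and limits rather than any real DCIM data: fit a trend to recent readings and estimate when a component will breach its operating limit.

```python
# Minimal sketch of predictive maintenance from DCIM telemetry: fit a linear
# trend to sensor readings and estimate when a component will breach its
# operating limit. Readings and limits here are illustrative, not real data.
from statistics import linear_regression

# Hourly bearing-temperature samples (C) for a CRAC fan, e.g. from DCIM polling
hours = list(range(10))
temps = [61.0, 61.4, 61.9, 62.1, 62.8, 63.0, 63.7, 64.1, 64.4, 65.0]
LIMIT_C = 70.0  # assumed manufacturer limit for this component

slope, intercept = linear_regression(hours, temps)
if slope > 0:
    hours_to_limit = (LIMIT_C - temps[-1]) / slope
    print(f"Trend: +{slope:.2f} C/hour; limit reached in ~{hours_to_limit:.0f} hours")
    print("Schedule human intervention before that window.")
else:
    print("No upward trend; no action needed.")
```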

Human ‘less’ not humanless

Humans are a primary contributor to outages and issues within a data centre. By reducing the number of human interactions and manual inputs, the risk of human error is automatically reduced.

In terms of physical-layer cabling infrastructure, investing in an automated or semi-automated patch cord management and labeling documentation system helps ensure switch ports stay fully utilised and promotes more efficient moves, adds, and changes (MACs). For example, a programme could automatically issue a MAC request to a technician; once the technician completes the request, the systems of record are updated automatically, reducing the margin for human error and lowering time-on-site requirements, as sketched below.
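The following simplified Python sketch shows the shape of such a workflow. The classes and field names are hypothetical stand-ins for a real documentation system, not any specific Panduit product API; the point is that completing the work order updates the connectivity record without manual data entry.

```python
# Minimal sketch of a semi-automated MAC (move/add/change) workflow: the
# system issues the work order and updates the record of connectivity when
# the technician confirms completion, so nobody edits spreadsheets by hand.
# Classes and fields are hypothetical illustrations.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MacRequest:
    ticket_id: str
    from_port: str          # e.g. "SW01/Eth1/12"
    to_port: str            # e.g. "PP-A03/24"
    assigned_to: str
    completed_at: datetime | None = None

class PatchRecordSystem:
    """In-memory stand-in for the cabling documentation database."""
    def __init__(self):
        self.connections: dict[str, str] = {}

    def issue(self, req: MacRequest) -> None:
        print(f"[{req.ticket_id}] dispatched to {req.assigned_to}: "
              f"patch {req.from_port} -> {req.to_port}")

    def complete(self, req: MacRequest) -> None:
        # Technician confirms the work; documentation updates automatically.
        req.completed_at = datetime.now()
        self.connections[req.from_port] = req.to_port
        print(f"[{req.ticket_id}] record updated at {req.completed_at:%H:%M}")

records = PatchRecordSystem()
job = MacRequest("MAC-0042", "SW01/Eth1/12", "PP-A03/24", "tech.ahmed")
records.issue(job)
records.complete(job)
```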

A survey of 500 IT executives conducted by INAP found that 85% of IT professionals anticipate their data centres will be close to full automation within the next four years. Two-thirds even go as far as to say on-premises data centres will be close to non-existent.

AI has emerged as a transformative technology, and the new direction is to embed it more deeply through an AI engineering strategy that facilitates integration, performance, and resiliency. As we advance, other key AI trends are expected to emerge, including:

• Integrating machine learning into production workflows, forming MLOps.

• Making AI accessible to everyone, a push from which the concept of ‘AI for Kids’ has emerged.

• A mandate for responsible and ethical AI to mitigate dangers.

• Artificial Intelligence of Things (AIoT).

Most medium-sized data centres are transitioning from 10G or 40G to 100G. This change means all but the shortest cabling will be fiber. 400G has also become a reality at the largest data centre providers, and it brings entirely new optical modules, higher-performing fiber cable, and potentially new interconnects. 400G will rely more heavily on 12-fiber MPO connectors and higher-count fiber cabling (as much as 864 fibers). It also increases the need for fiber cables, such as Panduit’s Signature Core, which offers a longer reach than standard OM4 at 400G, and introduces a need for a new 16-fiber MPO and smaller, denser 2-fiber connectors such as the CS product that Panduit launched in November 2020.
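Some back-of-envelope arithmetic puts these fiber counts in context. The optics-to-fiber mapping below (a 16-fiber MPO serving one 400G-SR8 link) is a simplified assumption for illustration; the actual optic datasheet governs real designs.

```python
# Back-of-envelope arithmetic for the fiber counts mentioned above.
# The optics-to-fiber mapping is a simplified illustration.
TRUNK_FIBERS = 864

for mpo_fibers in (12, 16):
    print(f"{TRUNK_FIBERS}-fiber trunk = "
          f"{TRUNK_FIBERS // mpo_fibers} x MPO-{mpo_fibers} connectors")

FIBERS_PER_400G_SR8 = 16   # 8 transmit + 8 receive lanes, simplified
links = TRUNK_FIBERS // FIBERS_PER_400G_SR8
print(f"One trunk can carry up to {links} x 400G-SR8 links")
```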

Regardless of which network architecture is ultimately implemented, Panduit suggests, as an initial step, assessing the logical and physical infrastructure together. The primary reason for this approach is to fully consider which network devices need to connect to the relevant nodes and what impact the topology will have on performance, migration capabilities, port counts, cabling type, and cable routing.

There is a significant difference in the amount of cabling required between two-tier and three-tier topologies, as well as in the amount of data that traverses each port or network link. Without this logical and physical coordination, performance and operational outcomes become unpredictable. Panduit helps reduce these risks by creating and validating best practices that let the customer migrate to higher network speeds over multiple technology refreshes or upgrades.
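To make the two-tier versus three-tier cabling difference concrete, the rough sketch below counts switch-to-switch links for hypothetical device counts, under a deliberately simplified model of one structured cabling link per switch-to-switch connection.

```python
# Rough sketch comparing cabling volume in a two-tier (leaf-spine) fabric
# versus a classic three-tier design. Device counts are hypothetical and
# the model is simplified: one cable per switch-to-switch link.
def two_tier_links(leaves: int, spines: int) -> int:
    # Full mesh: every leaf uplinks to every spine
    return leaves * spines

def three_tier_links(access: int, aggregation: int, core: int,
                     uplinks_per_access: int = 2) -> int:
    # Each access switch dual-homes to aggregation;
    # aggregation switches full-mesh to the core layer
    return access * uplinks_per_access + aggregation * core

print("two-tier  :", two_tier_links(leaves=20, spines=4), "structured links")
print("three-tier:", three_tier_links(access=20, aggregation=4, core=2),
      "structured links")
```

Even this toy model shows why cable routing, pathway sizing, and port counts must be planned alongside the logical topology rather than after it.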

This article was composed by: Bassel AlHalabi, managing director of Trident Technology Services, Panduit Authorized Representative in Middle East & Africa; Jeff Paliga, director global data centre business development at Panduit; Steve Morris, senior product manager, data centre solutions, Panduit; and Bob Wagner, manager group products for connectivity and pathways, Panduit.