To Cloud or Not to Cloud: That is Not the Question
The progression of the telecoms industry from legacy physical infrastructure towards cloudification and network virtualisation varies from one organisation to the next. Some may be fully cloud-based today, while others continue to rely heavily on telecoms-specific hardware and on-premises data centres, depending largely on the type and age of their applications and workloads. Nevertheless, whether by choice or not, the consensus in the room was that every enterprise, in part if not in whole, would inevitably operate in the Cloud at one point or another.
Our society is heavily data-centric. Indeed, in 2017, research showed that 90%* of all data in the world had been created in the two preceding years. That’s 2.5 quintillion bytes of data a day. Fast-forward almost five years, taking into account the growth of the Internet of Things (IoT), a pandemic that instigated a shift towards remote working and the rise of 5G, and you can understand why many telcos have seen their capacity demand accelerate. To accommodate this colossal influx of data, they needed, and still need, a means to increase bandwidth and scale. Cloud and virtualisation provide that flexibility and are becoming a necessity. As one attending CISO put it, “telcos should start calling themselves ‘datacoms’ because data communications are where everything is moving to”.
With that said, the question now is not so much whether telcos will work in the Cloud, but which Cloud?
Challenges to Cloudification and Public Cloud Concerns
As it stands, telcos’ virtualisation of their core networks has focused on Private Clouds. While there are numerous reasons for this, it is, to a great extent, down to the perceived security and control they offer. Consider the legislation and regulations surrounding data protection. With the adoption of the Cloud, computational endpoints are decentralised and can disappear from within one’s protective environment and move elsewhere. More importantly, the data being stored and transferred becomes boundary-less. Yet the rules and regulations that govern it are not; they remain largely national.
Over the last three to five years, different countries and regions have brought in data localisation laws or data sovereignty rules. For instance, under GDPR, the European Union restricts transfers of its citizens’ personal data outside the European Economic Area unless adequate safeguards are in place. In the United States, similar demands are being made with the California Consumer Privacy Act (CCPA). As more people become aware of data privacy and security, regulators have increasingly prioritised where data is located, creating additional complexities for organisations. Should a telco adopt a public cloud provider, it may be exposed to different laws and regimes depending on the location of the provider’s data centres. It no longer has clear visibility, nor the autonomy to choose where its data resides and, consequently, which laws it must abide by. In other words, the organisation loses a degree of control that it might have maintained by sticking to private clouds.
The same can be said about ensuring the cloud infrastructures used are secure. Embarking on any virtualisation project, and particularly a cloudification one, is no easy feat; it is both expensive and high risk. This is especially true when transforming an entire infrastructure made up of technologies – routers, amplifiers, microprocessors and so on – patched together in an intricate network. On top of that, there is a significant skills shortage: few people both know how to build cloud technologies and understand the ins and outs of the business in question.
A single telco network likely has to support numerous vendors, each with its own infrastructure, security policies and standards. Moreover, a “telco cloud” deployment may not be just one cloud but several, supporting different vendors. That is a lot of moving parts, not to mention that they are managing masses of sensitive, private data. Working with a public cloud provider adds a third party to the mix, exacerbating security concerns: the organisation has to trust that the provider does not have faulty cloud APIs, storage misconfigurations and the like. Fortunately, the industry is hyper-aware of the risk and actively takes steps to protect the data it manages. Nevertheless, roundtable chair Professor Lisa Short questioned the current approach, likening it to a piece of chocolate.
Professor Short explained: “It’s a bit like the soft and hard centre of a piece of chocolate. At the moment, we are continually adding protective layers on the outside of data which is fairly gooey. Most cybersecurity solutions are about hardening the outside and preventing intrusion. But, if you bite on the chocolate, the soft core – or data – oozes out.” Indeed, many vendors appear to be building cloud tech on top of virtualised core stacks. They are using containers on top of Virtual Machines (VMs), then a Virtual Infrastructure Manager (VIM) to manage Virtual Network Functions (VNFs), thus introducing network latency and complexity. One of the key drivers of this approach is a lack of confidence in container security. Although it is technically feasible for containers to sit on an Operating System (OS) or on the bare metal of the hardware, it presents a significant security risk, as a single vulnerable container could endanger an organisation more widely.
Building Cyber Resilience Through Digital Security by Design
But what if we were to change our mindset slightly and begin by hardening the fundamental infrastructure? What if we could go to the source of insecurity and build security in by design with the network operators who support our critical network infrastructure? If the centre is hardened, one can continue to add protective layers to stop intrusions, with the added benefit that the data won’t leak out when it is bitten into.
This is exactly where CHERI and the Morello Board fit in. Through the UKRI Digital Security by Design initiative, in cooperation with the University of Cambridge, Arm and other industry players, the UK government is revolutionising today’s approach to cybersecurity by starting with the hardware. Using prototype CHERI extensions in an Arm processor, the technology could help the telco industry overcome critical vulnerabilities more efficiently, effectively and without a clunky deployment method.
Programs written in C and C++ are prone to memory-safety bugs: raw pointers carry no bounds information, so errors such as buffer overflows can let a program access the wrong memory, and cybercriminals can manipulate these flaws for their benefit. While languages like Rust, Java and C# are designed to be memory-safe, they can incur performance overheads because they rely on additional code to perform runtime checks. CHERI, however, accesses memory through ‘capabilities’ rather than plain pointers; a capability carries its own bounds, and the hardware refuses any out-of-bounds access. This means any issue with the software can be identified and addressed sooner rather than later. What’s more, it does not require software runtime checks and can be applied to legacy programs, foregoing a complete rewrite in a new language – certainly welcome news for a telco industry laden with legacy systems.
In adopting this approach, the telecoms industry can improve overall cyber resilience and combat some of the challenges it is coming up against. For example, it would give organisations a good basis for compliance with the recently established Telecommunications (Security) Act, which requires operators to put more security measures in place. Furthermore, the vast majority of customers expect telecom operators to take the necessary precautions to keep their data safe. Failing to do so is a breach not only of responsibility but of trust, and could be detrimental on a commercial front. Conversely, building security in by design can bring huge branding advantages, in the same way Apple benefits from having ‘secure’ devices. Finally, having security built in rather than bolted on helps create a stronger business case for selecting one solution over another. It is clear security is important, but when it comes down to costs, it is not always prioritised.
Granted, this is easier said than done, but if the industry hopes to succeed in the years to come whilst warding off ever-present cyber threats, something must change. What better way to do so than by tackling the problem at its core? Organisations interested in testing this new technology can apply to Digital Security by Design through the Technology Access Programme (TAP). The Programme has already welcomed 30 companies, each of which has been given the opportunity to experiment with the Morello Board and, for those with fewer than 250 employees, to receive £15,000 in funding. The next opportunity to apply is on the 11th of January 2023; successful companies will then be onboarded in Spring 2023.
*Source: Domo, 2017 available at https://www.domo.com/learn/infographic/data-never-sleeps-5