In its simplest form, edge computing brings information processing and the storage of relevant data closer to the physical location where they are used, rather than directing everything through a central operation that may be on the other side of the world. As devices become increasingly connected and reliant on cloud computing, the aim is to reduce latency, which can degrade application performance.
The fundamental principles of securing each of these local environments are not hugely different from protecting the centralised equivalent (although the practicalities of managing multiple cloud datacentres can increase the workload for security teams).
Data must be secured where it is stored, transmitted, and processed; applications and infrastructure need to be built using security-by-design principles to minimise the likelihood of a breach or compromise; and, because anyone can try to gain access to an internet-facing system, identity and access management is essential.
However, the decentralised nature of distributed IT systems results in many controls and compliance elements moving to third parties – and while organisations may outsource the responsibility for the controls, they cannot outsource the risk. It is therefore critical that due diligence from security and operational perspectives is carried out on any potential providers.
Many countries have legislation and regulations detailing where data must be processed and how it can be transmitted. Questions should therefore also be asked about where the data will physically reside, to ensure the organisation is not inadvertently creating legal issues by hosting data or applications in a country where this is not permitted.
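A data-residency question like this can be partly mechanised as a pre-deployment check. The sketch below is a minimal illustration, assuming a hypothetical allow-list of permitted regions and illustrative deployment names; real policies would be derived from the applicable legislation.

```python
# Hypothetical data-residency check. The region names and allow-list
# are illustrative, not drawn from any specific regulation.
ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}  # e.g. an EU-only policy

def residency_compliant(resource_region: str, allowed=ALLOWED_REGIONS) -> bool:
    """Return True if the resource is hosted in a permitted region."""
    return resource_region in allowed

# Flag any deployment outside the permitted regions before go-live.
deployments = {"erp-db": "eu-west-1", "analytics": "us-east-1"}
violations = {name: region for name, region in deployments.items()
              if not residency_compliant(region)}
```

Run against an inventory of planned deployments, such a check surfaces hosting-location conflicts before they become legal issues rather than after.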
Rather than standards being defined and measured centrally, organisations adopting an edge computing approach may be reliant on the benchmarks set by an outsourcing host partner; the only indication that something untoward has occurred may be the receipt of System and Organisation Controls (SOC) 2 reports or their equivalents, which are designed to provide assurance that a service provider’s controls are effective. Putting governance and monitoring solutions in place will bolster control in this area, and partially automating the testing of the supplier’s controls will provide additional assurance.
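The partial automation mentioned above could look something like the following sketch: checking that a supplier's assurance report is recent and that no controls were reported as ineffective. The report structure and field names are assumptions for illustration; real SOC 2 reports are documents that would need parsing, or a provider attestation feed.

```python
# Sketch of partially automated supplier-control checking. The report
# dictionary is a stand-in for parsed SOC 2 output; its fields are
# assumptions, not a real schema.
from datetime import date

def report_is_current(report_date: date, today: date, max_age_days: int = 365) -> bool:
    """SOC 2 Type II reports are typically issued annually, so flag older ones."""
    return (today - report_date).days <= max_age_days

def failed_controls(report: dict) -> list:
    """Return IDs of controls whose test result was not 'effective'."""
    return [c["id"] for c in report["controls"] if c["result"] != "effective"]

supplier_report = {
    "issued": date(2024, 3, 1),
    "controls": [
        {"id": "CC6.1", "result": "effective"},
        {"id": "CC7.2", "result": "exception"},
    ],
}
stale = not report_is_current(supplier_report["issued"], date(2025, 6, 1))
exceptions = failed_controls(supplier_report)
```

Scheduling checks like these against each supplier's latest evidence turns an annual paper exercise into a continuous signal.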
Any potential incidents flagged in the satellite systems need to be monitored, reviewed and investigated in the same way as if they had occurred in the central infrastructure (although this can be challenging if the third-party host does not provide sufficient data for an assessment to take place).
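One practical step is to normalise third-party alerts into the central incident schema, recording explicitly which fields the host failed to supply. The sketch below is illustrative; the field names and schema are assumptions, not any particular SIEM's format.

```python
# Minimal sketch of normalising alerts from satellite/edge systems into
# a common schema for central triage. Field names are hypothetical.
def normalise_alert(raw: dict, source: str) -> dict:
    """Map a third-party alert onto a central incident schema and note
    which fields the provider did not supply."""
    normalised = {
        "source": source,
        "severity": raw.get("severity", "unknown"),
        "timestamp": raw.get("ts", "unknown"),
        "description": raw.get("msg", ""),
    }
    normalised["missing_fields"] = [k for k, v in normalised.items()
                                    if v == "unknown"]
    return normalised

alert = normalise_alert({"severity": "high", "msg": "repeated auth failures"},
                        source="edge-host-eu")
```

The `missing_fields` list makes gaps in a host's telemetry visible, giving the security team evidence when pressing a provider for richer data.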
Ever-growing shadow IT
Perhaps a larger issue than the risk of outsourcing is that of maintaining centralised control over distributed infrastructure. Shadow IT makes this particularly problematic because critical data can reside in systems that, having been set up by local operations, fall outside the reviews and assessments that more traditional IT procurement would ensure. (The low cost and speed with which applications and infrastructure can be implemented is a huge enabler for central IT functions, but it also allows different parts of the business to purchase their own systems without the security team reviewing the service.)
For example, a local finance director might set up an ERP in the cloud to meet the specific needs of the operations in their region, but without reviewing organisational data flows, assessing the overall security of the infrastructure and considering the complete system architecture and integration.
On an individual level, many organisations put devices and applications directly into the hands of the user, removing enterprise control of who inputs what data – and increasing the risk of fraudulent behaviour. Controls must therefore be established to ensure that data quality is maintained, the integrity of the devices is not compromised, and that the flow of data is secure and cannot be abused to obtain more than is authorised.
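A simple form of the controls described above is a validation gate on user-submitted records, rejecting inputs that fail quality or scope checks before they enter the system. The schema and limits below are assumptions chosen for illustration.

```python
# Illustrative input-validation gate for user-submitted records. The
# field names and the amount cap are hypothetical, not a standard.
def validate_record(record: dict) -> list:
    """Return a list of data-quality problems (empty list means accepted)."""
    problems = []
    if not record.get("account_id", "").isdigit():
        problems.append("account_id must be numeric")
    amount = record.get("amount", 0)
    # A per-transaction cap limits what a compromised or fraudulent
    # user can extract through this entry point.
    if not (0 < amount <= 10_000):
        problems.append("amount outside permitted range")
    return problems
```

Keeping validation at the entry point, rather than trusting the device, means data quality holds even if an end-user device is compromised.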
The net result of shadow IT is that it is far harder for central security teams to determine and control where data and critical business processes reside, and what constitutes the IT estate. This has clear implications for any subsequent data breaches or incidents.
Identify IT assets
These challenges make the identification of an organisation’s assets more critical than ever. Security functions need to adapt to make this feasible, so that mitigating activities such as patch management, application access control, re-certifications, SOC reporting and incident alerting can be undertaken.
Penetration testing, together with combined Network Detection and Response (NDR) and Endpoint Detection and Response (EDR) solutions, makes it possible to identify the components of a distributed IT landscape.
These details, along with security-relevant information, should be stored in a configuration management database (CMDB), with assets referenced in data flows by common identifiers. Linking the technical components of a CMDB to business processes and data flows can inform risk-based approaches to controls, but many organisations do not do this.
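The linkage described above can be sketched as follows: each asset carries a common identifier, data flows reference assets by that identifier, and the join supports risk-based queries. All record structures and field names here are illustrative assumptions, not a CMDB product's schema.

```python
# Sketch of CMDB entries linked to data flows via a common identifier.
# The dataclass fields are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Asset:
    asset_id: str      # common identifier used across all records
    hostname: str
    owner: str
    patch_level: str   # e.g. "YYYY-MM" of last applied patch cycle

@dataclass
class DataFlow:
    flow_id: str
    source_asset: str  # references Asset.asset_id
    dest_asset: str
    classification: str

cmdb = {a.asset_id: a for a in [
    Asset("A-001", "erp-eu.example.internal", "finance", "2024-05"),
    Asset("A-002", "dw.example.internal", "analytics", "2024-01"),
]}
flows = [DataFlow("F-01", "A-001", "A-002", "confidential")]

# Risk-based query: confidential flows landing on under-patched assets.
exposed = [f.flow_id for f in flows
           if f.classification == "confidential"
           and cmdb[f.dest_asset].patch_level < "2024-03"]
```

Queries like the last line are only possible because assets and flows share identifiers; without that link, the CMDB cannot answer business-risk questions.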
Standard security checks
Privileged access management (PAM) solutions help to secure the IT estate. Additional reinforcement measures include specific preventative controls at the entry and exit points of applications (such as access-driven malware checks with multi-factor authentication), while hardening internal network traffic, for example with strong encryption, protects data once it is inside the internal systems.
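The entry-point controls mentioned above can be sketched as a gate that requires MFA and scans the payload before a request reaches the application. Both checks below are stand-ins: a real deployment would call an MFA provider and an anti-malware engine rather than these toy functions.

```python
# Conceptual entry-point gate combining an MFA check with a payload
# scan. The verifier and signature set are illustrative stand-ins.
KNOWN_BAD_SIGNATURES = {"EICAR-TEST"}  # placeholder signature list

def mfa_verified(session: dict) -> bool:
    """Stand-in for a call to a real multi-factor authentication service."""
    return session.get("mfa_passed") is True

def scan_payload(payload: str) -> bool:
    """Crude signature match; real scanning would use an AV engine."""
    return not any(sig in payload for sig in KNOWN_BAD_SIGNATURES)

def gate(session: dict, payload: str) -> str:
    """Run both preventative checks before the application sees the request."""
    if not mfa_verified(session):
        return "rejected: MFA required"
    if not scan_payload(payload):
        return "rejected: payload failed malware check"
    return "accepted"
```

Placing both checks at the boundary means a request is never processed, even partially, unless identity and content have both been verified.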
None of these are new concepts, but they are becoming increasingly significant in securing an organisation’s technology and data assets so that they remain fit for an ever-evolving IT landscape.