Impact
Flexibility through standardization
We define a blueprint of documentation and components that can be freely assembled and augmented by adopters to cater for different cloud scenarios. We embrace the diversity of existing environments and place an emphasis on standardization at critical interface layers while at the same time allowing differentiation and specialization where needed. This is supported by offering all components as open source and by relying on open standards for all interfaces and pluggable services.
This standardization provides substantial value for all personas – software builders and application developers as well as service and infrastructure providers – on the different layers – Baremetal Operating System, Cloud Operating System, and Data Fabric – that we address with ApeiroRA. It allows everyone to focus on differentiating capabilities while at the same time leveraging the Apeiro blueprint for commodity functionality.
The overall toolset is rich enough to support different target landscapes, ranging from complete greenfield deployments based on the Apeiro blueprint to pick-and-choose brownfield deployments that leverage existing investments in hardware and software. The setup can scale from resource-optimized edge deployments to enterprise-grade, high-performance data centers. Along another dimension, we also see different usage patterns: pure in-house scenarios, where the cloud infrastructure is operated for local purposes in a sovereign way, as well as commercial and public scenarios scaling to federated cloud environments; again, with compatibility through standardization across all these patterns. Depending on the scenario, the blueprint can be used (in combination with any existing assets) to assemble completely self-contained cloud environments with a predefined and well-known set of services, typically for edge deployments; at the other end of the spectrum, it allows completely open environments for custom workload handling, as long as all participants follow the open standards.
Sovereignty
The openness and standardization of the Apeiro blueprint also make it a strong foundation for sovereign cloud infrastructure. Sovereign cloud infrastructures offer a compelling solution for organizations seeking to balance data sovereignty, security, and compliance with the flexibility and scalability of cloud services. Particularly in an era where data privacy and regulatory requirements are increasingly stringent, the Apeiro blueprint can be a solid basis for any organization looking to establish a sovereign cloud-native infrastructure. Building on top of ApeiroRA helps control cost and complexity.
The benefits of a sovereign cloud infrastructure include these key aspects:
- Data Sovereignty: Sovereign cloud infrastructure ensures that data is stored and processed within the geographical boundaries of a specific country or region. This compliance with local laws and regulations helps organizations avoid legal complications and ensures that sensitive data is protected according to national standards.
- Enhanced Security: By utilizing sovereign cloud services, organizations can benefit from enhanced security measures tailored to meet local requirements. This includes robust encryption, access controls, and monitoring systems that are designed to protect data from unauthorized access and cyber threats.
- Compliance with Regulations: Sovereign cloud providers are often well-versed in local regulations, such as GDPR in Europe. This expertise allows organizations to navigate complex compliance landscapes more effectively, reducing the risk of penalties and legal issues associated with data mishandling.
- Trust and Transparency: Sovereign cloud infrastructure fosters trust among customers and stakeholders by ensuring transparency in data handling practices. Organizations can provide assurances that their data is managed in accordance with local laws, which can enhance customer confidence and loyalty.
- Tailored Solutions: Sovereign cloud providers often offer customized solutions that cater to the specific needs of local businesses and industries. This localized approach can lead to better alignment with organizational goals and improved service delivery.
- Support for Local Economies: By choosing sovereign cloud infrastructure, organizations contribute to the growth of local economies. This investment in domestic technology providers can stimulate job creation and innovation within the region.
- Reduced Latency: Sovereign cloud infrastructure typically involves data centers located within the same region as the business users. This proximity can lead to reduced latency and improved performance for applications that require real-time data processing.
The Apeiro-Reference-Architecture Layers
We initiate and contribute to multiple open-source projects on the different layers of the overall blueprint. Contributions and enhancements of the Apeiro landscape are possible in different ways – by helping to evolve the open-source components that we describe in more detail below and by contributing completely new components that adhere to the standardized interfaces in the overall landscape and hence fit into the overall architecture.
In the following sections, we will describe the projects and components and the value they are providing in the different areas of a cloud environment.
Apeiro Cloud Operating System (COS)
Application teams that battle with the technicalities of their software deployed in the cloud also face an additional business predicament: which infrastructure provider(s) should be qualified and supported? If more than one provider is necessary to reach the desired market, the application needs to be adjusted, qualified, maybe even re-written, for every infrastructure provider that must be supported. Obviously, abstractions for dealing with multiple providers are advantageous.
In general, an Operating System is the body of software that abstracts the hardware platform, multiplexes or orchestrates workloads dynamically across available resources, and protects/isolates software principals (tenants) from each other. Historically, the invention of the OS as an abstraction layer was instrumental in decoupling software from hardware. Similarly, a distributed cloud OS with reasonable abstractions will be instrumental in enabling the multi-provider cloud-edge continuum.
Fortunately, the cloud-native paradigm is accompanied by well-accepted technologies and practices that assist with multi-provider portability. The Apeiro Cloud Operating System (COS) includes a reference implementation for a coherent multi-provider approach. It pragmatically utilizes Kubernetes as a lingua franca across the distributed provider resources. COS does not attempt to span a single cluster over incompatible providers. Instead, COS endorses a multi-cluster approach (divide & conquer), with project Gardener providing an enterprise-ready Kubernetes-as-a-Service.
Modern applications, including any associated Application-as-a-Service canopy, are typically built and run using a combination of cloud-native technologies and microservices, preferably using event-driven architectures, and are operated with progressive release and lifecycle management. Almost all software (from stateful databases, stateless algorithms, AI workflows, …, to distributed messaging systems) is packageable in immutable containers and is often qualified or even developed specifically for deployment using the Kubernetes container orchestrator. Therefore, for most software builders, the entry point for value creation is the Kubernetes cluster, which leapfrogs infrastructure centricity altogether. Often, Kubernetes is leveraged as a platform to create domain-specific platforms, thereby abstracting away even Kubernetes-centric implementation details. Following the Dependency Inversion Principle, as application teams move up the stack, the machine-centric provider underlay (with its infrastructure-as-a-service) becomes a negligible side aspect, except for the lowest layer in the stack (including Kubernetes itself).
Gardener is extensible, offering participants the chance to use and adapt it to their in-house infrastructure (e.g. OpenStack) or to operate on various supported cloud providers. With the help of COS and Gardener, software qualified on Kubernetes can be ported easily or even run natively across the cloud-edge continuum. Best practices, tools, and automation for secure and compliant development, lifecycle management, and infrastructure/configuration as data are part of ApeiroRA as well.
Apeiro Baremetal Operating System (BOS)
On the infrastructure layer, we offer stacks to manage cloud or edge infrastructure, a management dashboard, and flexibility with the supported hardware. Our offering is powered by a fully open-source, sovereign cloud blueprint, ensuring total control and transparency over your infrastructure.
Leveraging the Kubernetes Resource Model, our infrastructure layer provides declarative APIs for managing cloud infrastructure. Compared to traditional infrastructure solutions, this approach enables automated lifecycle management, significantly reducing the operational overhead associated with tasks such as scaling, patching, and upgrading infrastructure components. Building on a unified foundation for bare-metal hardware management and storage, we offer two Infrastructure-as-a-Service (IaaS) implementations:
- IronCore, a radically simplified IaaS solution tailored for cloud-native workloads based on the Kubernetes resource model.
- CobaltCore, a robust IaaS based on the well-known OpenStack, designed to support existing, non-cloud-native workloads and traditional network configurations. CobaltCore offers fully automated, compliant, zero-downtime maintenance operations focused on system uptime and performance.
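As a minimal sketch of this declarative style, the following snippet models a hypothetical machine resource in the spirit of the Kubernetes Resource Model and derives the actions needed to converge an observed server toward it. The API group, kind, and field names are illustrative assumptions, not the actual IronCore schema.

```python
# Sketch: a declarative machine "order" in the spirit of the Kubernetes
# Resource Model. Group, kind, and fields are illustrative, not IronCore's.
desired_machine = {
    "apiVersion": "compute.example.org/v1alpha1",   # hypothetical API group
    "kind": "Machine",
    "metadata": {"name": "worker-01", "labels": {"pool": "edge"}},
    "spec": {"machineClass": "small", "image": "gardenlinux-1443", "power": "On"},
}

def reconcile_power(desired: dict, observed: dict) -> list:
    """Return the actions needed to converge observed state to desired state."""
    actions = []
    if observed.get("power") != desired["spec"]["power"]:
        actions.append(f"set power to {desired['spec']['power']}")
    if observed.get("image") != desired["spec"]["image"]:
        actions.append(f"reinstall image {desired['spec']['image']}")
    return actions

# The operator only states the goal; the controller derives the steps.
print(reconcile_power(desired_machine, {"power": "Off", "image": "gardenlinux-1443"}))
```

The point of the sketch is the division of labor: adopters declare the desired state, and controllers compute and execute whatever imperative steps are needed.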
Bare-metal automation layer (Metal-API)
The Metal-API is a core component responsible for managing the entire lifecycle of server hardware within the Apeiro Infrastructure Stack. Again, by leveraging the Kubernetes Resource Model, it provides declarative APIs to automate every aspect of bare-metal server management, ensuring smooth, scalable, and reliable operations.
- Discovery: Newly introduced hardware in the data center is automatically detected, inventoried, and prepared for provisioning, ensuring seamless integration into the infrastructure.
- Provisioning: Automates the operating system installation and BIOS configurations, ensuring servers are quickly ready for use.
- Maintenance: Ongoing compliance and functionality of hardware devices require regular maintenance, including firmware updates. This process is fully automated, reducing the need for manual intervention.
- Decommissioning: Securely handles the retirement of servers, including data scrubbing to ensure safe reuse or disposal of the hardware.
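The four lifecycle phases above can be sketched as a simple state machine; the state names and allowed transitions are illustrative and do not mirror the actual Metal-API schema.

```python
# Sketch: the server lifecycle phases described above as a state machine.
# States and transitions are illustrative, not the Metal-API's resource model.
LIFECYCLE = {
    "Discovered":      ["Provisioning"],
    "Provisioning":    ["InService"],
    "InService":       ["Maintenance", "Decommissioning"],
    "Maintenance":     ["InService"],
    "Decommissioning": ["Retired"],
    "Retired":         [],
}

def advance(state: str, target: str) -> str:
    """Move a server to `target` if the lifecycle allows the transition."""
    if target not in LIFECYCLE.get(state, []):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

# A newly discovered server is provisioned, serviced during a firmware
# maintenance window, and returned to service.
state = "Discovered"
for step in ("Provisioning", "InService", "Maintenance", "InService"):
    state = advance(state, step)
print(state)
```

Modeling the lifecycle as explicit transitions is what lets the automation refuse unsafe operations, such as reactivating a server that has already been securely scrubbed.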
Apeiro Platform Mesh
The Platform Mesh is the main API gateway for users and technical services to order and orchestrate capabilities attached to the environment. Its guiding design principle is inherited from Kubernetes's declarative API approach with its digital twin manifests, the Kubernetes Resource Model (KRM). It utilizes and refines the upstream project KCP for this purpose. Hence, all integrations with the Platform Mesh API re-use well-known command-line tools and client SDKs (available for all popular languages), enabling the widely used, elegant cloud-native convention and removing barriers to entry.
The Platform Mesh’s interior goal is to streamline the integration of all internal service capabilities. Background and motivation: with the proliferation of Kubernetes-based offerings (with controllers, operators, Crossplane, and similar), most essential services can be ordered via corresponding higher-level order manifests (expressed by Custom Resource Definitions), and specialized actuators/controllers materialize the service and its workload within clusters (or even elsewhere). These local order manifest definitions can then be synchronized, further composed, and finally exposed as offerings in the Platform Mesh, conveniently by means of the same API framework language.
The exterior goal of the Platform Mesh (illuminating its naming origin) is to enable the ecosystem of multiple platform and service providers to easily cross-consume and integrate with each other, by enabling the exchange of digital twins based on the same expressive API framework language. The same interior mechanism of service integration can be applied for exterior purposes, albeit with the additional security concerns and rigor that typically apply when crossing legal boundaries. The Platform Mesh currently focuses only on technical aspects. Legal and commercial aspects, such as digital contracts, billing, and SLA aggregation, will be addressed at a later stage.
The Platform Mesh is at a very early innovation and research stage. The concept inherits local robustness and reliability via established cloud-native practices. The Mesh’s viability and practicability are being piloted. It should be possible to retrofit it on top of environments that internally use Kubernetes and the KRM.
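Because the Platform Mesh is still at a research stage, the following sketch is purely illustrative: under assumed names (a `ServiceOrder` kind in a hypothetical `mesh.example.org` API group), it shows how a provider-local service definition could be synchronized into a shared catalog and then ordered by a consumer through a KRM-style manifest.

```python
# Sketch: exposing a provider-local service definition as a Platform Mesh
# offering, then ordering it. All names are hypothetical; the actual
# Platform Mesh/KCP APIs may differ.
catalog = {}

def expose(provider: str, service_kind: str, versions: list) -> None:
    """Synchronize a provider-local definition into the shared catalog."""
    catalog[f"{provider}/{service_kind}"] = {"versions": versions}

def order(consumer: str, offering: str, version: str) -> dict:
    """Produce a KRM-style order manifest referencing a catalog offering."""
    if offering not in catalog or version not in catalog[offering]["versions"]:
        raise LookupError(f"no such offering: {offering} {version}")
    return {
        "apiVersion": "mesh.example.org/v1alpha1",  # hypothetical API group
        "kind": "ServiceOrder",
        "metadata": {"name": f"{consumer}-db", "namespace": consumer},
        "spec": {"offering": offering, "version": version},
    }

expose("provider-a", "PostgresInstance", ["15", "16"])
manifest = order("team-blue", "provider-a/PostgresInstance", "16")
print(manifest["spec"])
```

The same mechanism would serve both the interior goal (internal capabilities exposed in the catalog) and the exterior goal (offerings from other providers), with additional security rigor in the latter case.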
Apeiro Open Micro-Frontend-Platform
The widely accepted microservices architecture paradigm structures an overall offering as a loosely coupled, independently deployable assemblage of components/services. The integration of user interfaces of such independent components (often implemented with diverse technologies) into a streamlined and consistent graphical user interface presents a complex challenge. The Apeiro openMFP project addresses the challenge with frameworks and guidelines that allow for integration of multiple user interfaces into a holistic offering.
Challenges with holistic user interface integration
Modern cloud environments are inherently complex, consisting of many services, each with its own user interface and UI technology frameworks. This diversity makes it difficult to provide a consistent user experience for both end-users and administrators. A unified and extensible platform approach is crucial to address these challenges.
Key Challenges:
- Redundant Implementations: Common functions like authentication, authorization, and permissions are often independently developed by different teams, leading to duplication, increased complexity, and higher costs.
- Integration Difficulties: Services developed in isolation are difficult to integrate seamlessly, leading to delays, reduced reliability, and higher maintenance costs, which negatively impact efficiency.
- Fragmented User Experience: Users often need to navigate multiple interfaces, which hampers productivity, increases errors, and decreases user satisfaction.
Apeiro openMFP as the Solution
The Apeiro openMFP addresses these challenges by providing a micro-frontend-platform technology that enables a unified integrative approach for the user interface. It consolidates services into a single interface, making it easier for both users and administrators to manage the respective resources. This results in a streamlined, secure, and efficient user experience.
The core components and contributions include the open Micro-Frontend-Platform framework openMFP, support for a fine-grained authorization system based on OpenFGA, and the use of Platform Mesh as the control plane.
Portal Demonstration
For demonstration purposes, the Apeiro openMFP project will be deployed, customized, and configured to serve a Portal function.
The Portal shall be the primary graphical user interface for managing all integrated services offerings that are made available in the Platform Mesh. This portal allows users to monitor, manage, and configure resources through a customizable and intuitive interface. Built on open-source principles and leveraging a micro frontend architecture, the Portal ensures seamless integration, enhanced security, and optimal performance.
Apeiro Cloud Native Lifecycle Management
All resources and services in a distributed cloud are advantageously managed through an API. But often, the methods of organizing software products on the cloud are old-school, imperative, artisanal, and hand-crafted, involving human teams to accomplish the goal of software management. Adding insult to injury, most software products consist of multiple components, each typically bringing its own set of requirements, tooling, and management practices, resulting in even more manual and imperative software lifecycle processes for each individual component. The wider software industry has accepted such hardship under the term “DevOps Best Practices” and has delegated the operational challenges to DevOps experts, whose primary task is to deliver a smooth end-user experience despite the underlying multitude of heterogeneous component tools and processes.
The cloud-edge continuum, though, demands uniform mobility of software between the many providers, cloud regions, and the plethora of edges, and therefore requires a modern, fully automatable approach to software lifecycle management. With Apeiro Cloud Native Lifecycle Management, we aspire to alter the status quo for the cloud-edge continuum with a novel, integrative open model that provides a coordinate system, along with ready-to-use and freely extensible toolsets. All outcomes will be contributed to new and existing open standards. We aim to deliver automated, repeatable software lifecycle management across diverse technical cloud environments, including partially or fully isolated (edge) locations, while preserving the security posture at source with secure-by-default operational and compliance standards.
The following core principles and technical implementation demonstrate the Apeiro paradigm.
Core Principles
The lifecycle management approach is built on established cloud-native patterns and technologies. The Apeiro reference forms these into an innovative, functional, and adjustable offering for all software products.
- Open Component Model (OCM): OCM provides the basic model and tooling to pack, scan, ship, and deploy software components of any granularity, be it microservices, large applications, or even complete environments. OCM spans a directed acyclic graph of artifacts, components, and metadata, leveraging unique identifiers that allow for correlation. It is a Software Bill of Delivery (SBoD)
  - enabling standardized component definitions and artifact handling (pack)
  - improving the security and compliance posture at source (scan)
  - ensuring consistent software delivery across different platforms and environments (ship)
  - enabling instrumented automation of continuous deployments (deploy)
- Declarative Configuration- and Infrastructure-as-Data: Resources, deployments, and configurations are defined as resource manifests and desired states rather than procedural instructions. This enables automated reconciliation and reduces operational complexity.
- Kubernetes Foundation: Building on Kubernetes’ proven architecture, we utilize reconciliation loops to continuously maintain desired states and extend native capabilities using Kubernetes’ extensibility frameworks and the Kubernetes Resource Model (KRM) as digital twin representation for all resources. In ApeiroRA, the Open Managed Control Plane (openMCP) operationalizes diverse external and internal provider capabilities in a unified control plane offering, using Crossplane-type extensions, supplemented with adaptable policies that, with the help of the unique identifiers, can link to further information and context in the OCM coordinate system.
- Automation: Complex compositions are realized by leveraging corresponding KRM-based resources, backed by respective reconciling controllers/operators. Building upon a coherent language framework allows for consistent automation in the virtual, declarative resource representation and results in coordinated lifecycle operations that accomplish managed control over target output environments.
- Auditable Git for Operations: Using Git, with its version control and extensible collaboration and processing capabilities, as the configurational source of truth (e.g. via GitOps techniques) for all subsequent software lifecycle instructions enables fully auditable operations.
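A minimal sketch of the reconciliation pattern underpinning the Kubernetes Foundation and Automation principles: a controller repeatedly compares desired state (e.g. sourced from a Git repository) with observed state and actuates the difference. The loop below is a deliberately simplified illustration, not production controller code.

```python
# Sketch: a reconciliation loop converging observed state toward the
# declared desired state, in the style of a Kubernetes controller.
def reconcile(desired: dict, observed: dict) -> dict:
    """One reconciliation pass: returns the new observed state."""
    updated = dict(observed)
    for key, value in desired.items():
        if observed.get(key) != value:
            updated[key] = value          # actuate: apply the drifted field
    for key in set(observed) - set(desired):
        del updated[key]                  # prune fields no longer desired
    return updated

desired = {"replicas": 3, "image": "app:2.1"}          # e.g. from a Git manifest
observed = {"replicas": 2, "image": "app:2.0", "debug": True}
while observed != desired:                             # loop until converged
    observed = reconcile(desired, observed)
print(observed)
```

Because the loop acts only on the difference between desired and observed state, it is idempotent and self-healing: re-running it after drift (or failure) always converges back to the declared state.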
The core principles guide our implementation of control planes for capabilities, products, and services, which may utilize different controllers/operators managed by openMCP and are released with the Open Component Model.
Apeiro Security
Security is a key concern in cloud environments, where providers bear responsibility for customers’ data and have to adhere to heightened regulatory requirements (e.g. GDPR, CRA, KRITIS). In a multilateral world, the control and limitation of integrated cryptography within cloud environments become increasingly important as the level of entrusted data confidentiality rises. We therefore suggest default solutions for the most common challenges, which in unison offer the flexibility to adjust to changing boundary conditions and to specific requirements enforced in regulated markets and/or different jurisdictions. With Apeiro Security we provide a powerful set of tools to manage cryptographic assets, protect sensitive data, and maintain a strong and adaptable security posture in the face of emerging threats and changing requirements.
Key Management Service (KMS)
As organizations increasingly rely on cloud environments to store sensitive data, a robust Key Management Service (KMS) is essential to ensure that data remains protected through strong encryption practices. This component aims to deliver a secure and scalable solution for managing cryptographic keys, providing organizations with the ability to safeguard sensitive data through robust encryption mechanisms. KMS enables enterprises to create, manage, and control encryption keys used for protecting data at rest, ensuring compliance with stringent security and privacy standards. KMS supports key hierarchies along technical services, providers, and regional jurisdiction. This allows customers to retain control over the master key used to protect subordinate keys in the hierarchy. This architecture empowers customers to revoke access to their encrypted data, if necessary, enhancing data control and reducing the risk of unauthorized access. Key features include support for Bring Your Own Key (BYOK), where customers can import their own encryption keys, and Hold Your Own Key (HYOK), enabling customers to retain complete control and storage of their master keys. These capabilities provide flexibility for organizations to tailor their encryption strategies to meet their unique security requirements. The KMS project aims to facilitate seamless integration with existing data storage systems, ensuring that encryption and key management processes are simple, secure, and efficient.
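To illustrate the key-hierarchy idea, the sketch below derives subordinate keys from a customer-held master key via HMAC-SHA256; the derivation scheme and the labels are illustrative assumptions, not the KMS’s actual mechanism.

```python
import hashlib
import hmac
import secrets

# Sketch: a key hierarchy where subordinate keys are derived from a
# customer-held master key (BYOK/HYOK style). HMAC-SHA256 derivation is an
# illustrative choice, not necessarily the KMS's actual scheme.
def derive_key(parent: bytes, label: str) -> bytes:
    """Derive a subordinate key bound to a service/provider/region label."""
    return hmac.new(parent, label.encode(), hashlib.sha256).digest()

master = secrets.token_bytes(32)                      # stays with the customer
region_key = derive_key(master, "region:eu-de")       # per-jurisdiction key
service_key = derive_key(region_key, "service:objectstore")

# Withholding the master (or a region key) implicitly revokes everything
# below it in the hierarchy, since subordinate keys cannot be re-derived.
print(len(service_key))
```

The design point sketched here is the revocation property: because each subordinate key is deterministically derived from its parent, denying access to a node in the hierarchy denies access to the entire subtree beneath it.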
Secrets Management & PKI
In cloud environments, effective secrets management is crucial for safeguarding sensitive information, such as API keys, database credentials, and certificates (e.g. the hierarchical cryptographic keys created in the KMS), to prevent unauthorized access and protect critical infrastructure. With Apeiro Security we will extend the open-source OpenBao project with enterprise-grade features, transforming it into a comprehensive solution for secure secrets management and certificate lifecycle automation. The project will provide organizations with a service to securely store, access, and distribute secrets necessary for bootstrapping their infrastructure, especially for network-constrained edge environments. OpenBao, hosted under neutral and open governance in the Linux Foundation Edge, has also been adopted by other participating IPCEI-CIS projects, and ApeiroRA’s contributions promise joint development synergies.
Both capabilities can be adopted by organizations in their existing environments and will allow them to maintain strong security postures and simplify compliance with industry standards.
Crypto Agility
As cryptographic standards evolve and new vulnerabilities emerge, organizations must ensure that their cloud software can quickly adapt to maintain strong security postures. In ApeiroRA we focus on developing Crypto Agility Guidelines that enable software to seamlessly transition to new cryptographic algorithms, protocols, or standards as security requirements change. This adaptability is essential to protect sensitive data, maintain compliance, and mitigate the risks associated with outdated or vulnerable cryptographic mechanisms.
The project will include tools and recommendations for maintaining a Crypto Inventory, which catalogs all cryptographic algorithms, libraries, and versions currently in use, helping organizations identify outdated or non-compliant algorithms and ensuring that encryption practices adhere to the latest security standards. Similar to how OCM spans artifact SBoMs in its coordinate system, Crypto BoMs are envisioned to be holistically included with OCM as well.
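A Crypto Inventory check can be sketched as follows; the deprecation list and inventory entries are purely illustrative examples, not an official Apeiro guideline.

```python
# Sketch: a minimal Crypto Inventory compliance check. The deprecation
# list is an illustrative policy, not an official Apeiro guideline.
DEPRECATED = {"md5", "sha1", "rsa-1024", "3des"}

inventory = [
    {"component": "gateway", "algorithm": "sha1",    "use": "tls-cert"},
    {"component": "kms",     "algorithm": "aes-256", "use": "data-at-rest"},
    {"component": "legacy",  "algorithm": "3des",    "use": "archive"},
]

def non_compliant(entries: list) -> list:
    """Flag components that still use deprecated cryptographic algorithms."""
    return sorted({e["component"] for e in entries if e["algorithm"] in DEPRECATED})

# Components to migrate first when the deprecation policy tightens.
print(non_compliant(inventory))
```

Keeping such an inventory as data is what makes crypto agility actionable: when a standard changes, updating the policy set immediately surfaces every affected component.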
Audit Log Standards & Integration
Ensuring comprehensive and real-time auditing is critical for maintaining runtime security, compliance, and operational visibility across modern cloud environments. This project focuses on simplifying the integration and pluggability of third-party audit log services to provide organizations with a streamlined approach to monitor and record all activities and changes across their cloud infrastructure, applications, and services. The integration will leverage and extend OpenTelemetry (OTel) as the standard for observability, enabling consistent and unified logging across diverse systems. OpenTelemetry, hosted under neutral and open governance in the Cloud Native Computing Foundation, is the de-facto observability standard for all participating IPCEI-CIS projects. Our contributions with ApeiroRA promise synergies and unconstrained adoption.
Apeiro Security Compliance Automation
Software today must fulfill obligations beyond its plain functionality. Among others, security, privacy, and legal compliance are key requirements for all components in the reference. Therefore, we include relevant tools and best practices to increase the general security posture and compliance level with automation at source. Similar to the Software Bill of Materials (SBoM), ApeiroRA introduces a Software Bill of Delivery (SBoD) for cloud-native products described with the Open Component Model (OCM). While OCM’s key feature is associated with lifecycle management, OCM-based unique identities make it possible to correlate all audit-relevant facts early in the software development and qualification process. End users can adopt and extend the OCM and its reference implementation, OCM Gear, for their specific development process.
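The correlation enabled by OCM-based unique identities can be sketched as follows; the identity format and finding records are simplified illustrations rather than the actual OCM descriptor schema.

```python
# Sketch: correlating audit-relevant findings by an OCM-style component
# identity (here simplified to name + version). Real OCM component
# descriptors carry richer identity metadata.
def identity(name: str, version: str) -> str:
    return f"{name}:{version}"

findings = [
    {"id": identity("github.com/acme/app", "1.4.2"), "source": "vuln-scan",    "cve": "CVE-2024-0001"},
    {"id": identity("github.com/acme/app", "1.4.2"), "source": "malware-scan", "cve": None},
    {"id": identity("github.com/acme/lib", "0.9.0"), "source": "vuln-scan",    "cve": "CVE-2024-0002"},
]

def correlate(entries: list) -> dict:
    """Group all findings that share a component identity."""
    grouped = {}
    for f in entries:
        grouped.setdefault(f["id"], []).append(f)
    return grouped

report = correlate(findings)
print(sorted(report))
```

Because scanners of any kind emit findings keyed by the same identity, all facts about one component version land in one place, which is the precondition for the context-aware re-prioritization described below.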
OCM Gear is an extensible toolbox and provides the technical foundation for Apeiro Security Compliance Automation. As a central process engine, OCM Gear correlates arbitrary metadata, such as security-related findings, using OCM-based unique identities and enables fine-granular tracking of findings along with rich context information. OCM Gear can integrate with all related existing tooling, such as malware and vulnerability scanners, and merge metadata from any relevant external data sources into a common coordinate system. By design, OCM Gear offers an audit-safe, context-aware re-prioritization concept for such findings, allowing developers to increase efficiency by sharpening their focus on genuinely relevant findings. This minimizes toil and false-positive fatigue. Organizations with a previously tool-centric approach can transition to the Apeiro Security and Compliance Automation using its model-based reference.
Furthermore, OCM Gear allows security experts to refine and aggregate security and compliance metadata to be published and included in the product release (along with the artifacts and SBoMs) for a limited number of customers and partners with the highest security demands. For those customers and partners using OCM and OCM Gear, software providers support a trust-but-verify mandate by providing the necessary and decisive metadata for verification in a transparent fashion. ApeiroRA thereby fulfills a requirement for digital sovereignty.
Apeiro Data Fabric
The Apeiro Data Fabric addresses the interoperability challenges across diverse software and service providers in the cloud, who offer complete business suites or modules, often with dedicated and incompatible technical stacks and platforms. While providers may compete with their service and cloud offerings, joint customers still expect seamless, modular integration and demand interoperability between providers.
Customers and their businesses increasingly depend on making efficient, automated, or AI-driven decisions based on dispersed datasets hosted across different clouds and providers. Therefore, adopting a data-driven mindset and understanding data as a valuable resource requires a standard method for uniform accessibility, regardless of where the data happens to reside.
We propose a set of standards and patterns that enable easy service consumption and discovery across multiple providers. To use data effectively and reliably, it needs to be easily accessible through well-defined APIs and grouped into meaningful business objects. Understanding how these business objects are related is crucial, as it bridges the gap between raw data and actionable insights, enabling more informed and strategic decisions. The projects contributed to the data fabric provide the needed accessibility and transparency regarding relationships between business objects in a multi-vendor scenario.
Data Fabric Concepts
When thinking about how different systems and applications work together, customers often see business processes as moving through various independent departments, each with its own way of understanding key objects, such as sales orders or products. To put the information together, we propose a decentralized self-description of application resources, leading to a mesh architecture. It is important to identify the correct business object (sales order or product) behind any API, event, or data product. This helps define its meaning and determine which integration points are connected to the same business object. There are various ways business objects can be related, but we introduce a standard generic option to create references across all possible scenarios.
The following three projects are the main constituents of materializing the data fabric.
Open Resource Discovery (ORD)
ORD is a protocol that gives all vendors a common standard for describing their business applications: it provides the metadata for a system capability (API, event, data product) and all available business objects, including descriptions that give context on those entity types. With an ORD endpoint, all metadata can be exposed in a standardized format. ORD also includes practical concepts and examples of how technical services can implement a business reality over a distributed cloud-edge scenario using ORD-based semantic concepts. ORD is fully contributed as open-source software, and technical documentation is available here: Open Resource Discovery
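As an illustration, the snippet below models a heavily simplified, ORD-inspired metadata document; the field names are abbreviated for the example, and the ORD specification remains the authoritative schema.

```python
# Sketch: a simplified, ORD-inspired metadata document describing one API
# capability and the business object (entity type) behind it. Field names
# follow the spirit of ORD but are abbreviated for this example.
ord_document = {
    "openResourceDiscovery": "1.0",
    "apiResources": [{
        "ordId": "example:apiResource:SalesOrderService:v1",
        "title": "Sales Order Service",
        "entityTypes": ["example:entityType:SalesOrder:v1"],
    }],
    "entityTypes": [{
        "ordId": "example:entityType:SalesOrder:v1",
        "title": "Sales Order",
        "description": "A customer request to deliver products.",
    }],
}

def entity_types_served(doc: dict) -> set:
    """All business-object types reachable through the described APIs."""
    return {t for api in doc["apiResources"] for t in api["entityTypes"]}

print(entity_types_served(ord_document))
```

The key idea is the link from a technical capability (the API resource) to the business object it serves, which is exactly the correlation that downstream aggregation builds on.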
Unified Metadata Service (UMS)
UMS is a central service that collects all ORD-based metadata of participating services. Having collected and aggregated all metadata of a given customer landscape, UMS in the broader context represents a Discovery API. This facilitates an automated understanding of how business objects are related across capabilities and business applications. Ultimately, ORD and UMS enable the seamless and automatic integration of services hosted across different providers.
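A UMS-style aggregation can be sketched as follows: metadata documents from several providers are merged into a discovery index that reveals which capabilities expose the same business object. The data shapes are illustrative, not the actual UMS API.

```python
# Sketch: merging ORD-style metadata from several providers into a
# discovery index keyed by business object. Data shapes are illustrative.
documents = [
    {"provider": "crm", "apis": [{"name": "SalesOrderAPI", "entity": "SalesOrder"}]},
    {"provider": "erp", "apis": [{"name": "OrderFulfillment", "entity": "SalesOrder"},
                                 {"name": "ProductCatalog", "entity": "Product"}]},
]

def discovery_index(docs: list) -> dict:
    """Map each business object to every provider capability exposing it."""
    index = {}
    for doc in docs:
        for api in doc["apis"]:
            index.setdefault(api["entity"], []).append(f'{doc["provider"]}/{api["name"]}')
    return index

index = discovery_index(documents)
# The same business object, SalesOrder, surfaces across two providers.
print(index["SalesOrder"])
```

This is the automation the section describes: once every service self-describes via ORD, finding all integration points for a given business object becomes a lookup rather than a manual investigation.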
Landscape Resource Viewer (LRV)
The Landscape Resource Viewer helps visualize the aggregated metadata from UMS in a user-friendly interface. LRV supports different target personas, such as developers, IT experts, and data analysts, in comprehending a dynamically extensible service landscape wherein all services adhere to the ORD standard. The displayed relationship model will be complemented by proper filter and search functionalities to quickly understand the definition of a business object and how it is connected to other entity types within the overall customer landscape. LRV is a functionality that helps drive adoption of the ORD standard.