Architecture
Megam's architecture changed from one phase to the next. The 2014 OpenNebula deck and the later Vertice 1.5 documentation describe related systems, but not the exact same stack. This record keeps those phases separate instead of flattening them into a single diagram.
Megam v1
Megam v1 was a control plane for virtual machines, applications, and containers. The public docs identified Nilavu as the console UI, the API gateway as the API layer, and Vertice as the omni scheduler.
verticegateway was the API server for MegamVertice. Its README places it in the Scala era and names NSQ (New Simple Queue, a real-time distributed messaging platform), OpenJDK 8, and Cassandra among its runtime and build requirements. It also describes HMAC (Hash-based Message Authentication Code) authorization, PBKDF2 (Password-Based Key Derivation Function 2) passwords, and a master-key model for protected REST (Representational State Transfer) resources.
vertice was the core engine and scheduler. gulp was the agent that controlled application lifecycle in the cloud. Together, the public repositories support a v1 shape of UI, API server, scheduler, message queue, backing store, and lifecycle agent.
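The v1 shape above can be sketched as a request path: console UI → verticegateway (API) → NSQ topic → vertice (scheduler) → gulp (agent). The sketch below is purely conceptual, with a stdlib `queue.Queue` standing in for NSQ; the component names mirror the repositories, but the message shapes and the node-selection logic are invented for illustration.

```python
import queue

# Stand-in for an NSQ topic carrying deploy requests from gateway to scheduler.
nsq_topic: queue.Queue = queue.Queue()

def gateway_accepts(request: dict) -> None:
    """verticegateway role: accept an API request and enqueue it."""
    nsq_topic.put({"action": "deploy", "app": request["app"]})

def vertice_schedules() -> dict:
    """vertice role: pull a request off the queue and pick a node (trivially here)."""
    msg = nsq_topic.get()
    return {**msg, "node": "node-1"}

def gulp_runs(order: dict) -> str:
    """gulp role: act on the lifecycle order on its assigned node."""
    return f"started {order['app']} on {order['node']}"

gateway_accepts({"app": "blog"})
print(gulp_runs(vertice_schedules()))  # started blog on node-1
```

The point of the handoff through a queue is decoupling: the API layer can accept work even when the scheduler is busy, which matches the UI / API server / scheduler / message queue / agent separation the v1 repositories describe.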
The docs and product material place that system on OpenNebula, OpenVZ, Docker, and Ceph. That made the system practical for hosting providers and private-cloud operators, but also tied it to an infrastructure substrate that later had to be explained against Kubernetes-era expectations.
The 2014 OpenNebula stack
The OpenNebula deck captures the early architecture best. It presents Megam as a code-to-cloud system across private, public, and hybrid clouds, with Java, Play, Ruby on Rails, Node.js, and Akka support.
That same deck includes TOSCA (Topology and Orchestration Specification for Cloud Applications, an OASIS standard for describing cloud workloads), OpenNebula Chef plugin, private-cloud install, and Cloud-in-a-Box material. The deck's named open-source components differ from the later Vertice 1.5 docs, so this was the 2014 OpenNebula-era stack rather than the final v1 stack.
Rio/OS
Rio/OS was the second phase: a private-cloud operating-system direction. Its public repositories show a product surface around rioos, commandcenter, autorio, aran, beedi, and ottavada, with support libraries such as metgroup, nalperion_rust, and openio-sdk-rust. The work moved toward a Rust-era system model and away from the earlier Megam control-plane shape.
Compared to Kubernetes
Megam v1 was not Kubernetes before Kubernetes. The closest comparison is:
| Megam v1 | Kubernetes-era equivalent | Difference |
|---|---|---|
| verticegateway | kube-apiserver | Both exposed an API and auth boundary, but Megam's state and object model were its own. |
| vertice | scheduler / controller-manager territory | Megam's scheduler was more imperative and product-specific. |
| gulp | node agent territory | It controlled app lifecycle rather than running a Kubernetes reconciliation model. |
| Chef cookbooks | images / operators | Chef made early portability practical, but configuration-management deployment later aged less well than image-based deployment. |
| OpenNebula/OpenVZ/Docker/Ceph substrate | cluster substrate | Megam assumed specific private-cloud building blocks rather than a Kubernetes-native control plane. |
The comparison orients modern readers; it does not claim lineage. Megam was early in private-cloud automation, while parts of its implementation model aged out as the industry moved toward declarative, image-based, reconciliation-driven systems.