Our generation has the unique privilege of being able to see back to within a few million years of the beginning of time. How far back one is able to look depends on the technology that is used, but our Universe’s past is there to be seen. Temporal Linked Data® (TLD) naturally keeps a record of enterprise data changes to enable another kind of time travel, the story of enterprise data, for both operational and analytic purposes.
This series of weblogs introduces TLD as a transactional, low-code, enterprise-class compute and temporal data cluster that naturally projects all writes to a world-class big data graph analytics platform such as: 1) a third-generation graph database for analysis, machine learning, and explainable artificial intelligence by way of TigerGraph, and/or 2) enterprise knowledge graph, ML, and AI by way of ReactiveCore.
For a high-level understanding, we will briefly explore these subjects:
The Value of Temporal Data, Transactionally and Analytically
Reasonable Conclusions about HTAP by way of TLD
The technological innovation represented by the BEAM ecosystem and third-generation graph databases allows for the possibility of building enterprise systems that simultaneously account for operational and analytic concerns. We look forward to taking this fast-data, big-data, HTAP, Temporal Linked Data® journey with you.
This weblog is about auto-generating a high-throughput, low-latency, resilient, reliable, scale-out, infrastructure-saturating microservice application, with realtime projections for graph analytics, based solely on a set of bespoke RDF data models. The history of all writes is preserved alongside their respective aggregates, providing a temporal representation of all changes: application time travel for free.
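To make the "time travel for free" idea concrete, here is a minimal sketch, not TLD's generated code, of an aggregate that keeps every prior version alongside its current state; the module and field names are ours, purely for illustration.

```elixir
# Minimal sketch (hypothetical module and field names): every write replaces the
# current snapshot and appends an immutable, timestamped history entry, which is
# what allows an aggregate's state to be read "as of" any point in time.
defmodule Tld.TemporalAggregate do
  defstruct current: nil, history: []

  # Apply a new version of the aggregate while keeping all prior versions.
  def write(%__MODULE__{} = agg, new_state) do
    entry = %{state: new_state, recorded_at: DateTime.utc_now()}
    %__MODULE__{agg | current: new_state, history: [entry | agg.history]}
  end

  # Return the most recent version recorded at or before a given instant.
  def as_of(%__MODULE__{history: history}, %DateTime{} = at) do
    history
    |> Enum.reverse()
    |> Enum.take_while(fn %{recorded_at: ts} -> DateTime.compare(ts, at) != :gt end)
    |> List.last()
  end
end
```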
CMB Radiation View of the Universe’s Original State, Courtesy NASA / WMAP Science Team
We call this clustered transactional capability Temporal Linked Data® (TLD). Runtime and persistence are provided through the BEAM ecosystem by way of Docker containers and container orchestration.
TLD generates well-written Elixir on world-class BEAM web frameworks. Auto-generating the backend solution is accomplished by reading a set of RDF data models that represent aggregates of OWL Datatype Properties and that contain concrete links between aggregates by way of OWL Object Properties. These self-describing RDF models enable us to generate: 1) router endpoints, 2) a RESTful API with JSON payloads, 3) Elixir data modules, 4) OTP GenServer and Elixir process registry modules by way of servers and workers in a well-supervised distributed process hierarchy, and 5) change-data projections into scale-out big-data graph analytics platforms. The result is a highly concurrent, highly reliable enterprise backend.
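As a rough illustration of item 4, here is a hedged sketch, with our own module names rather than TLD's generated output, of a per-aggregate GenServer worker addressed by id through an Elixir process registry. It reuses the Tld.TemporalAggregate sketch above and assumes a Registry named Tld.Registry is running under the application's supervisor (shown in a later sketch).

```elixir
# Illustrative only: one supervised process per aggregate instance, addressed by
# id via {:via, Registry, ...} name registration.
defmodule Tld.AggregateWorker do
  use GenServer

  def start_link(id), do: GenServer.start_link(__MODULE__, id, name: via(id))

  # Route calls to the owning process by aggregate id.
  defp via(id), do: {:via, Registry, {Tld.Registry, id}}

  # Public API: a synchronous write that preserves the aggregate's history.
  def write(id, attrs), do: GenServer.call(via(id), {:write, attrs})

  @impl true
  def init(_id), do: {:ok, %Tld.TemporalAggregate{}}

  @impl true
  def handle_call({:write, attrs}, _from, agg) do
    {:reply, :ok, Tld.TemporalAggregate.write(agg, attrs)}
  end
end
```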
Iteratively Deployed TLD Microservices with Projections to Big Data Graph Analytics
The above sketch depicts autonomous TLD microservices asynchronously projecting writes to a common, scale-out, big-data graph analytics platform. TLD services are intended to be delivered early and often, over time building up heterogeneous data for unique insights.
This is an auto-generated “graph throughout” solution architecture, best described as a specific type of Hexagonal or Ports-and-Adapters architecture in which, by default, the transactional graph data structures flow through to the analytic graph structures to provide a realtime 360-degree view of the business, for The Business.
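To make the Ports-and-Adapters flow concrete, here is a minimal, hypothetical sketch (the port, projector, and adapter names are ours): after a transactional write commits, the change is cast to a projector process, which hands it to whichever analytics adapter is plugged in, so projection is asynchronous and never blocks the operational write path.

```elixir
# The "port": any analytics platform adapter implements this behaviour.
defmodule Tld.AnalyticsPort do
  @callback project(change :: map()) :: :ok | {:error, term()}
end

# Fire-and-forget projector: the write path casts the change and moves on,
# while the adapter (e.g. a graph-database client) does the actual projection.
defmodule Tld.Projector do
  use GenServer

  def start_link(adapter), do: GenServer.start_link(__MODULE__, adapter, name: __MODULE__)

  def project(change), do: GenServer.cast(__MODULE__, {:project, change})

  @impl true
  def init(adapter), do: {:ok, adapter}

  @impl true
  def handle_cast({:project, change}, adapter) do
    adapter.project(change)
    {:noreply, adapter}
  end
end
```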
The BEAM ecosystem is built from the ground up as the antithesis of command-and-control, delivering all of the high-throughput, low-latency, resilient, scale-up, scale-out, reliable, and elastic technology attributes one would expect of a world-class distributed platform. The BEAM VM implements the Actor Model as very lightweight, supervised processes that communicate asynchronously through message passing, resulting in highly reliable concurrency. BEAM VMs naturally cluster to form a compute grid to accomplish work. Garbage is collected in the most efficient way possible, at the “actor” or “process” level, in alignment with the VM’s own architecture.
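The toy example below (ours, not TLD output) shows those Actor Model primitives directly: an isolated process that owns its own state and interacts with the outside world only through asynchronous messages.

```elixir
# A tiny Actor Model illustration: each BEAM process has its own mailbox and
# heap, and communicates only by asynchronous message passing.
defmodule Counter do
  # Spawn an isolated process that keeps its state in a recursive receive loop.
  def start(initial), do: spawn(fn -> loop(initial) end)

  defp loop(count) do
    receive do
      {:increment, by} ->
        loop(count + by)

      {:current, caller} ->
        send(caller, {:count, count})
        loop(count)
    end
  end
end

# Usage: sends are asynchronous; the caller only blocks when it chooses to wait.
pid = Counter.start(0)
send(pid, {:increment, 5})
send(pid, {:current, self()})

receive do
  {:count, n} -> IO.puts("count is #{n}")
end
```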
BEAM’s original mission was to provide for highly reliable, remote, unattended telecom devices. BEAM releases contain only the software that is actually used, providing for a very small footprint. Taken in combination, these qualities let BEAM scale from IoT/IoTT devices up to multi-server systems.
From a business perspective, the BEAM ecosystem saturates its infrastructure, getting the most work per dollar spent on physical hardware, virtual machines, containers, and cloud platforms. It is naturally elastic, with the ability to expand and contract to accommodate the workload with the least amount of compute resource possible by saturating available cores. For those writing the checks for cloud subscriptions, this translates into real dollar savings.
Much of BEAM’s power is available through OTP, the framework that ships with the platform, delivering, among other capabilities, a distributed registry, process supervision, and asynchronous atomic processes. BEAM/OTP also contains two database options within the platform itself. Popular languages that compile to BEAM bytecode include Erlang and Elixir.
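As an illustration of the supervision and registry facilities (using our own, hypothetical module names, including the Tld.GraphAdapter placeholder for an analytics adapter), here is an application module that supervises the registry, a dynamic supervisor for the aggregate workers sketched earlier, and the projector.

```elixir
# Illustrative supervision tree: if any child crashes, OTP restarts it according
# to the declared strategy, which is what makes the system self-healing.
defmodule Tld.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # Unique-key process registry used for {:via, Registry, ...} addressing.
      {Registry, keys: :unique, name: Tld.Registry},
      # Starts and restarts per-aggregate workers on demand.
      {DynamicSupervisor, strategy: :one_for_one, name: Tld.WorkerSupervisor},
      # Asynchronous projector from the earlier sketch; Tld.GraphAdapter is a
      # hypothetical module implementing the Tld.AnalyticsPort behaviour.
      {Tld.Projector, Tld.GraphAdapter}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: Tld.Supervisor)
  end
end

# Starting a worker for a (hypothetical) aggregate id under the dynamic supervisor:
# DynamicSupervisor.start_child(Tld.WorkerSupervisor, {Tld.AggregateWorker, "order-42"})
```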
Noteworthy vendor Erlang Solutions supports native BEAM message-oriented middleware by way of RabbitMQ, key-value and time-series persistence through Riak KV and Riak TS, and XMPP by way of MongooseIM, and also runs popular conferences and offers professional services. The frameworks for accomplishing work are world-class. Much more could be said, especially regarding the extraordinary talent in this community.
Robert Virding, co-inventor of Erlang and Principal Language Expert at Erlang Solutions, gives an insightful overview of BEAM’s lightweight massive concurrency, asynchronous communication, process isolation, error handling, continuous evolution of the system, and soft real-time in this 2014 Code Sync talk:
Irina Guberman provides a running example of Erlang’s fault tolerance in her talk “Unique resiliency of the Erlang VM, the BEAM and Erlang OTP” from Code BEAM SF 2020:
BRSG has chosen the BEAM ecosystem in combination with third-generation graph data platforms to deliver its own HTAP capability and looks forward to sharing its insights along the way. Please contact us if we can be of service: info@brsg.io, or call 303.309.6240.
It should not be controversial to observe that every new or additional technology added to a portfolio brings with it needed licensing, training, senior expertise, and supporting infrastructure and process as they relate to CI/CD, build pipelines, and production operations. Integration of disparate or heterogeneous tools, techniques, and technologies can compound this effect.
One way to look at this is from the perspective of architectural simplification, a value that we have long held.
For over 20 years we considered the Java ecosystem to be the best way to effectively accomplish the most while having to know the least.In spite of the corresponding personal and corporate investment, we have come to appreciate the BEAM ecosystem to an even greater degree from both a business and technological perspective.
While our own reasons for favoring BEAM will emerge in subsequent posts, we leave you with a talk by Saša Jurić that offers as good an overview as we can imagine. For a terrific example of architectural simplification, pay special attention around minute 36, where Saša lists the technologies supplanted by the BEAM ecosystem for just one realtime messaging platform.
In an effort to focus on The Business rather than technology, BRSG advocates for architectural simplification that leaves no gaps while performing the same work with much less human, infrastructure, and financial resource. Please reach out if we can be of service: info@brsg.io, or call 303.309.6240.