Low-friction data structures are those that: 1) are self-describing, 2) directly represent that which they model, and 3) do not require transformation between operations and analytics. Our graph-throughout approach to Hybrid Transactional / Analytic Processing (HTAP) system development encourages a direct connection between business and IT, where data model creation can be collaborative.
Once the discrete, well-bounded RDF data model is designed, the backend system is generated, ready for: 1) behavioral augmentation where necessary, 2) testing, and 3) deployment. Monoliths and command-and-control are out; well-bounded microservice, distributed, and peer-to-peer architectures are in.
This low-code approach helps to address: 1) the human resource impact of adopting new technology, 2) the communication overhead of translating business concerns into production enterprise software, 3) the adaptability of systems to business needs, and 4) the explicit connection between operational and analytic systems.
In contrast to relational and columnar data management, which rely on implicit relationships embedded in SQL, Temporal Linked Data® (TLD) makes an explicit connection between temporal data aggregates.
Likewise, where Event Stores keep all data domain events together, TLD keeps all temporal data alongside the aggregate to which it belongs.
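To make that concrete, here is a minimal Elixir sketch of an aggregate that carries its own temporal history; the module name, fields, and function are illustrative assumptions for this post, not TLD's generated code.

```elixir
defmodule TLD.Example.Customer do
  @moduledoc """
  Hypothetical sketch only: an aggregate that keeps its temporal history
  alongside its current state, rather than in a separate event store.
  """

  defstruct id: nil,
            # current values of the aggregate's datatype properties
            state: %{},
            # explicit links to other aggregates (object properties)
            links: %{},
            # prior versions kept alongside the aggregate itself
            history: []

  @doc "Apply a change, pushing the previous state onto the aggregate's own history."
  def apply_change(%__MODULE__{} = agg, changes) when is_map(changes) do
    version = %{state: agg.state, changed_at: DateTime.utc_now()}
    %{agg | state: Map.merge(agg.state, changes), history: [version | agg.history]}
  end
end
```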
Explicit connections are both hard and soft, technical and social. Hybrid Transactional / Analytic Processing by way of TLD explicitly connects:
Temporal data aggregates,
Operational data with analytic data for realtime analysis,
Business and IT personnel, and
Business concepts with production deployment, along the shortest path between them.
Amundsen-Scott South Pole Station, a low-friction location to view the night sky (NSF Public Domain Image)
Our generation has the unique privilege of being able to see back to within a few hundred thousand years of the beginning of time. How far back one is able to look depends on the technology that is used, but our Universe's past is there to be seen. Temporal Linked Data® (TLD) naturally keeps a record of enterprise data changes to enable another kind of time travel, the story of enterprise data, for both operational and analytic purposes.
This series of weblogs introduces TLD as a transactional, low-code, enterprise-class compute and temporal data cluster that naturally projects all writes to a world-class big-data graph analytics platform such as: 1) a third-generation graph database for analysis, machine learning, and explainable artificial intelligence by way of TigerGraph, and / or 2) enterprise knowledge graph, ML, and AI by way of ReactiveCore.
For a high-level understanding, we will briefly explore these subjects.
The Value of Temporal Data, Transactionally and Analytically
Reasonable Conclusions about HTAP by way of TLD
The technological innovations represented by the BEAM ecosystem and third-generation graph databases allow for the possibility of building enterprise systems that simultaneously account for operational and analytic concerns. We look forward to taking this fast-data, big-data, HTAP, Temporal Linked Data® journey with you.
This weblog is about auto-generating a high-throughput, low-latency, resilient, reliable, scale-out, infrastructure-saturating microservice application, with realtime projections for graph analytics, based solely on a set of bespoke RDF data models. The history of all writes is preserved alongside their respective aggregates, providing for a temporal representation of all changes: application time travel for free.
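By way of illustration only, a bespoke RDF data model of the kind read by the generator might look like the following Turtle sketch, with an aggregate's fields expressed as OWL Datatype Properties and a concrete link to another aggregate expressed as an OWL Object Property; the example namespace, classes, and property names are invented for this sketch.

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.com/model#> .

# Illustrative only: two aggregates, one datatype property, and a
# concrete link between aggregates by way of an object property.
ex:Order      a owl:Class .
ex:Customer   a owl:Class .

ex:orderTotal a owl:DatatypeProperty ;
              rdfs:domain ex:Order ;
              rdfs:range  xsd:decimal .

ex:placedBy   a owl:ObjectProperty ;
              rdfs:domain ex:Order ;
              rdfs:range  ex:Customer .
```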
CMB Radiation View of the Universe’s Original State, Courtesy NASA / WMAP Science Team
We call this clustered transactional capability Temporal Linked Data® (TLD). Runtime and persistence are provided through the BEAM ecosystem by way of Docker containers and container orchestration.
TLD generates well-written Elixir on world-class BEAM web frameworks. Auto-generating the backend solution is accomplished by reading a set of RDF data models that represent aggregates of OWL Datatype Properties and that contain concrete links between aggregates by way of OWL Object Properties. These self-describing RDF models enable us to generate: 1) router endpoints, 2) a RESTful API with JSON payloads, 3) Elixir data modules, 4) OTP GenServer and Elixir process registry modules by way of servers and workers in a well-supervised distributed process hierarchy, and 5) change-data projections into scale-out big-data graph analytics platforms. The result is a highly concurrent, highly reliable enterprise backend.
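As a rough illustration of what the generated process layer could look like, here is a hedged Elixir sketch of an aggregate worker addressed through a process registry and supervised within a small tree; the module names, API, and supervision layout are assumptions made for this post, not BRSG's actual generated code.

```elixir
defmodule TLD.Example.OrderWorker do
  use GenServer

  # One process per aggregate instance, addressed through a process registry.
  def start_link(order_id), do: GenServer.start_link(__MODULE__, order_id, name: via(order_id))

  defp via(order_id), do: {:via, Registry, {TLD.Example.Registry, {:order, order_id}}}

  def put(order_id, changes), do: GenServer.call(via(order_id), {:put, changes})
  def get(order_id), do: GenServer.call(via(order_id), :get)

  @impl true
  def init(order_id), do: {:ok, %{id: order_id, state: %{}, history: []}}

  @impl true
  def handle_call({:put, changes}, _from, agg) do
    version = %{state: agg.state, changed_at: DateTime.utc_now()}
    agg = %{agg | state: Map.merge(agg.state, changes), history: [version | agg.history]}
    # A generated projection step could forward this change to the analytics platform here.
    {:reply, :ok, agg}
  end

  def handle_call(:get, _from, agg), do: {:reply, agg, agg}
end

defmodule TLD.Example.Supervisor do
  use Supervisor

  def start_link(_arg), do: Supervisor.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok) do
    children = [
      {Registry, keys: :unique, name: TLD.Example.Registry},
      {DynamicSupervisor, strategy: :one_for_one, name: TLD.Example.WorkerSupervisor}
    ]

    Supervisor.init(children, strategy: :one_for_one)
  end
end
```

A worker for a given aggregate instance could then be started on demand, for example with DynamicSupervisor.start_child(TLD.Example.WorkerSupervisor, {TLD.Example.OrderWorker, "order-42"}).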
Iteratively Deployed TLD Microservices with Projections to Big Data Graph Analytics
The above sketch depicts autonomous TLD microservices asynchronously projecting writes to a common, scale-out, big-data graph analytics platform. TLD services are intended to be delivered early and often, over time building up heterogeneous data for unique insights.
This is an auto-generated “graph throughout” solution architecture that is best described as a specific type of Hexagonal or Ports-and-Adapters architecture where, by default, the transactional graph data structures flow through to the analytic graph structures to provide a realtime 360° view of the business, for The Business.
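The following is a minimal Elixir sketch of that ports-and-adapters idea, assuming a hypothetical TLD.Example.AnalyticsPort behaviour; a production adapter would call the TigerGraph or ReactiveCore client APIs, which are not shown here.

```elixir
defmodule TLD.Example.AnalyticsPort do
  @moduledoc "Port: what the transactional side expects from any analytics backend."
  @callback project(change :: map()) :: :ok | {:error, term()}
end

defmodule TLD.Example.LoggingAdapter do
  @moduledoc "Stand-in adapter; a real one would upsert vertices and edges in the graph platform."
  @behaviour TLD.Example.AnalyticsPort

  @impl TLD.Example.AnalyticsPort
  def project(change) do
    # Fire-and-forget so the transactional write path is never blocked.
    Task.start(fn -> IO.inspect(change, label: "projected change") end)
    :ok
  end
end
```

Swapping adapters leaves the transactional core untouched, which is the point of the hexagonal style.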
The BEAM ecosystem is built from the ground up as the antithesis of command-and-control, delivering all of the high-throughput, low-latency, resilient, scale-up, scale-out, reliable, and elastic technology attributes one would expect of a world-class distributed platform. The BEAM VM implements the Actor Model as very lightweight, supervised processes that communicate asynchronously through message passing, resulting in highly reliable concurrency. BEAM VMs naturally cluster to form a compute grid to accomplish work. The VM collects garbage in the most efficient way possible, at the “actor” or “process” level, in alignment with its own architecture.
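As a small, hedged illustration of that model, the Elixir snippet below spawns a large number of isolated processes that communicate with their parent only by asynchronous message passing; the module name and process count are arbitrary.

```elixir
defmodule BeamDemo do
  @doc "Spawn `count` lightweight processes and collect one async message from each."
  def run(count \\ 100_000) do
    parent = self()

    # Spawning tens of thousands of isolated processes is routine on the BEAM.
    for i <- 1..count do
      spawn(fn -> send(parent, {:done, i}) end)
    end

    # Collect the asynchronous replies; each process has its own heap and is
    # garbage-collected independently.
    for _ <- 1..count do
      receive do
        {:done, _i} -> :ok
      end
    end

    :ok
  end
end
```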
BEAM’s original mission was to provide for highly reliable, remote, unattended telecom devices. BEAM releases contain only the software that is actually used, providing for a very small footprint. Taken in combination, these traits let BEAM scale smoothly from IoT/IoTT devices up to multi-server deployments.
From a business perspective, the BEAM ecosystem saturates its infrastructure, getting the most work per dollar spent on physical hardware, virtual machines, containers, and cloud platforms. It is naturally elastic, with the ability to expand and contract to accommodate the workload with the least amount of compute resource possible by saturating available cores. For those writing the checks for cloud subscriptions, this translates into real dollar savings.
Much of BEAM’s power is available through OTP, the framework that ships with the platform, delivering, among other capabilities, a distributed registry, process supervision, and asynchronous atomic processes. BEAM/OTP also contains two database options within the platform itself. Popular languages that compile to BEAM bytecode include Erlang and Elixir.
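The snippet below is a brief, hedged look at that built-in storage, assuming the two options referred to are ETS and Mnesia: ETS for fast in-memory tables, Mnesia for transactional tables that can be replicated across clustered BEAM nodes.

```elixir
# ETS: in-memory term storage that ships with the VM.
:ets.new(:orders_cache, [:named_table, :set, :public])
:ets.insert(:orders_cache, {"order-42", %{total: 99.95}})
[{_key, order}] = :ets.lookup(:orders_cache, "order-42")
IO.inspect(order, label: "cached order")

# Mnesia: the distributed, transactional database included with OTP.
:mnesia.create_schema([node()])
:mnesia.start()
```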
Noteworthy vendor Erlang Solutions supports native BEAM message-oriented middleware by way of RabbitMQ, BEAM key-value and time-series persistence through Riak KV and Riak TS, XMPP by way of MongooseIM, popular conferences, and professional services. The frameworks for accomplishing work are world class. Much more could be said, especially regarding the extraordinary talent in this community.
Robert Virding, co-inventor of Erlang and Principal Language Expert at Erlang Solutions, gives an insightful overview of BEAM’s lightweight massive concurrency, asynchronous communication, process isolation, error handling, continuous evolution of the system, and soft real-time in this 2014 Code Sync talk:
Irina Guberman provides a running example of Erlang’s fault tolerance in her talk “Unique resiliency of the Erlang VM, the BEAM and Erlang OTP” from Code BEAM SF 2020:
BRSG has chosen the BEAM ecosystem in combination with third-generation graph data platforms to deliver its own HTAP capability and looks forward to sharing its insights along the way. Please contact us if we can be of service: info@brsg.io, or call 303.309.6240.