100% BETTER-than-BYZANTINE FAULT TOLERANCE for Databases
© by IT/OT Chain & Cloud Australia Pty Ltd 2022-2025
You might call it ‘100% FAULT INTOLERANCE for Databases’ (Semper FID)
We took a body of work by the Bucardo project as the Point of Departure for our Bucordo Project, since Bucardo is an open-source, multi-master, PostgreSQL-based replication system written in the Perl programming language (see the Bucardo website). PostgreSQL does not natively provide any way to run all-master replicas.
However, after discovering BFT-SMaRt, we pivoted to the Java language to obtain the benefits of that project (e.g. Byzantine Fault Tolerance and database-scale Smart Contracts). Following those developments, including the completion of a working Bucordo System, we progressed to our latest Project, “BFT-DHT Bucorde”. This is based on the Bucordo developments, extended now to incorporate a Twinned network, meshing with Bucordo’s BFT-SMaRt, and based on jvm-libp2p’s use of Petar Maymounkov’s and David Mazieres’ Kademlia Distributed Hash Table networking protocol. Our aim is to eliminate use of the DNS System for locating target nodes: a user will “discover” their target Middleware and Database Servers, or other known peers, via their NodeIds. It also means Security is very much enhanced. We call this “ADNS” (Absence of DNS).
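By way of illustration only, the sketch below uses go-libp2p and its Kademlia DHT to stand in for the jvm-libp2p stack we actually use; the peer ID is a placeholder, and the package paths are those of the Go implementation, not ours.

```go
package main

import (
	"context"
	"fmt"

	"github.com/libp2p/go-libp2p"
	dht "github.com/libp2p/go-libp2p-kad-dht"
	"github.com/libp2p/go-libp2p/core/peer"
)

func main() {
	ctx := context.Background()

	// Start a libp2p host; transports are encrypted and authenticated by default.
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer h.Close()

	// Join the Kademlia DHT, which locates peers by NodeId rather than DNS names.
	kad, err := dht.New(ctx, h)
	if err != nil {
		panic(err)
	}
	if err := kad.Bootstrap(ctx); err != nil {
		panic(err)
	}

	// Placeholder NodeId of a target Middleware/Database server:
	// replace with a real peer ID before running.
	target, err := peer.Decode("12D3KooWExamplePeerID")
	if err != nil {
		panic(err)
	}

	// Discover the peer's addresses purely via the DHT: "ADNS" in action.
	info, err := kad.FindPeer(ctx, target)
	if err != nil {
		panic(err)
	}
	fmt.Println("found peer at:", info.Addrs)
}
```

The point is that the target is addressed by its NodeId alone; no DNS lookup occurs anywhere on the path.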
From wikipedia: “The problem of obtaining Byzantine consensus was conceived and formalized by Robert Shostak, who dubbed it the interactive consistency problem. This work was done in 1978 in the context of the NASA-sponsored SIFT[9] project in the Computer Science Lab at SRI International. SIFT (for Software Implemented Fault Tolerance) was the brain child of John Wensley, and was based on the idea of using multiple general-purpose computers that would communicate through pairwise messaging in order to reach a consensus, even if some of the computers were faulty.”
By the editor: “The problem with Byzantine Fault Tolerance is that it relies on > 66% compliance with Trust by the database node operators (or, in the case of a mined Blockchain, > 66% compliance with Trust by the operators of Mining computers, globally). In the case of database nodes, it is easily possible to overcome this requirement and to commit Fraud. In the case of any popular Blockchain system the task would be expensive, with many Miners involved, but it is still achievable by a well-resourced operation in global terms. We have turned to the Elastos/Ethereum Blockchain to provide a guarantee in the verification and validation of the Ultimate Checking Process we employ, to secure our Database nodes against Operator (or other) ‘external’ Fraud. The process covers, at the same time, any non-malicious malfunction of machines.”
—
DISCLAIMER:
The sense in which the terms “Automated Trust”, 100% FID, etc. are used here should be taken to indicate that Trust in the Owners and Operators of a Database System can be guaranteed, given that the initial Coding and Development process is Transparent to all Business Customers of the developers/SaaS (Software as a Service) providers. This is because the Business Customers (with sound Governance) actually Operate their own Member-Class servers under leases from the SaaS providers. As operators, they are expected to look after their own IT interests, but they are not expected to need to Trust any Operators in the system of Database server nodes. The coded system enforces Trustworthiness strictly and programmatically amongst all operational parties. The old style of Trusting one or several Systems Administrators as Superusers with ultimate control can now be retired. There does, admittedly, still need to be an ultimate owner or Superuser at the root of the “tree”; however, this role will be taken by pairs of individual Directors of ITOTCCA, and would only be enacted on the rare occasions of major alterations to the structure and functioning of the Database System. Almost all operations will be handled by the Database Administrators, sub-contracted under the auspices of each Member-Class consortium, with a minimum set of powers to achieve their duties effectively. It would obviously be against the interests of any Director of ITOTCCA to participate in Fraud or Embezzlement. The culture at ITOTCCA is open, honest and transparent, but watchful.
Nevertheless, the committing of a fraudulent act by a corps of over 33% of the Member-Class “sites” in a Network, acting together, is possible. Such an act would be notified automatically to all stakeholders, including all Member companies of all Member “Classes”, immediately upon the next consistency check against the Elastos Blockchain, scheduled to occur at every 24 Block committals. The Blockchains will have been programmed to record traces continuously, forming an ultimately reliable audit trail. This trail can be compared against the databases by starting from the last clean block on the database and obtaining (effectively) two logs of the Database Operations from that point forward: one recorded on the Elastos blockchain when transactions are submitted, and one notified by the databases after each run of 24 Block committals. If the notified lists of executed and committed (or rolled-back) transactions (together with their Block Orderings) are not internally consistent, as well as consistent when cross-compared with the separately intercepted and pre-registered lists of originally submitted transactions, an Alarm is Broadcast to all Stakeholders, and the SaaS provider’s Contingency Plan is implemented immediately. The SaaS provider’s own Governance obviously requires a sound and complete Contingency Plan for such Breaches, extending to specifying Member Companies’ contractual duties to have and understand their own External Data Inconsistency Contingency Plan. It is also necessary to detect false-positive indications before taking further action.
In all cases of an Alarm being broadcast, a Root Cause Analysis is required, where the SaaS provider is the responsible party, in consultation with all affected Members.
However, some Fraudulent acts committed by employees of Member companies cannot be prevented by the SaaS provider, nor detected (in the absence of an Audit of the Company’s Accounts). That is, the responsibility for the majority of Trust situations with respect to Member company employees falls to the employer, not the SaaS provider. This is because it is quite possible to present an appearance of externally consistent and Trustworthy transactions while indulging in Fraud, embezzlement or exfiltration of data. External Consistency is not the same as Internal Consistency. Internal Auditing and a sound Culture at the company can assist. In addition, a comprehensive activity-monitoring system, led automatically by Artificial Intelligence, is possible, and would seal system security. One example of such a system is the ELK Stack (Elasticsearch, Logstash, Kibana) with Machine Learning. The contribution that the SaaS provider can make here is by providing the immutable and non-repudiable (client-signed) Blockchain and Database logs and records. All transactions (and not just financial ones) are accompanied by the Digital Signature of the client (person), or at least of a device owned and registered by them on the Elastos System. If agreed, we can provide access to the ‘Elastic’ ELK Stack for a low monthly price.
In summary we are guaranteeing “External Consistency”, ie external to Member companies, and by adding the ELK Stack we can extend that guarantee to your entire system. The exception to these guarantees is in extremely rare cases where False Negatives are encountered, that is when no concerns are reported, despite the existence, in reality, of concerns. We would expect to be sued over such occurrences, and we have Product Liability Insurance in place. That is not to say that we expect such an event to ever occur.
—
The following image is a representation of one node (of 13) in the recent state of our development platform. We are now heavily influenced by the prospect of automated trust being built into a database multi-site installation. It appears possible to make a Postgres database behave with integrity and immutability equal to a Blockchain’s, by following certain development procedures, which have been published as plain-English (i.e. no code was published) research-method summaries in a paper by IBM India (see below).
This image represents one node in the developing Automated Trust System. It is fully Industrial-strength and IoT-ready. The following image leaves out the pan-database-server Bucordo Ordering/Replication system.
The IBM paper, originating in IBM India (https://arxiv.org/pdf/1903.01919.pdf), is linked to the following images. It is the inspiration for our efforts to provide an Ultimately Trustworthy Installation in the cloud, shown here at the present stage of development as a multi-node assemblage using Kubernetes in the Cloud. Following the IBM research lead, in production there would generally be one copy of the above installation per member-class, arranged, configured and coded so that these databases interact as Masters in a data-intensive, Blockchain-type assemblage. Accordingly, in our working k3d (k3s in Docker) development model, there are 14 agents (2 clusters of 7) for das_fuhrwerk, plus 13 internetworked member classes as may exist on their own Sample Business Subnet (ideally alongside other member-classes from other Business Subnets in production), e.g. member-class(n,m) with n == business sub-network id and m == member-class id. Within member classes there may be many (or few) actual independent members (Companies). The following images detail the Bucordo Ordering & multi-Master Replication system and the Node-Red-Industrial system in its current development phase.
We actually have a range of 13 Member-Class “sites” (plus “das_fuhrwerk”, representing our company) implemented on a k3d setup, each connected to a development-ready, PgAdmin4-administered PostgreSQL replica of the master database.
We are, finally, using the k3d assemblage to develop the BFT-SMaRt Client/Server Distributed Applications we have been planning for some time. The future is looking bright. We are receiving some assistance with the development of the base Front Ends (in Typescript), built around the Elastos.Essentials (github) package. Development of the Front End is progressing well.
At the current stage, we are looking at diving into the Postgres source code in order to begin development of the parts of the Ordering and Replication system required to complement BFT-SMaRt operations, so that we can effectively implement the code changes specified in the IBM article.
Bucordo Emerges from the CHEIRRS Project
This project was based on LXD, Juju, Charms and Kubernetes, merged with Cyber Republic’s Elastos Smartweb gRPC-based Blockchain and Database Server. Database ORM, reflection (at initialisation) and querying use SQLAlchemy. There has since been a major shift in the foundations of ‘cheirrs’, following our discovery of 2019 research by IBM which, under the concept of “Blockchain meets Database”, removes the need for our customers to invest any more than a modicum of Trust in ITOTCCA. The blockchain end of a transaction will occur first, followed by writing or reading of data to or from Postgres. Our intention is to follow the lead of IBM’s research in India, which has apparently discovered a practical way of making a database as immutable and tamper-proof as a blockchain, for the purposes of Smart Contract Trading and any internetworked transactions. The paper is “Blockchain Meets Database: Design and Implementation of a Blockchain Relational Database”, by Senthil Nathan, Chander Govindarajan, Adarsh Saraf and Praveen Jayachandran (IBM Research India) and Manish Sethi (IBM Industry Platforms USA):
https://arxiv.org/pdf/1903.01919.pdf
It reveals that it is possible to provide Trust in a natural way in networking between Companies which basically distrust each other, by making a network of Postgres Databases behave like a large-data-capacity blockchain. To ITOTCCA, this is revolutionary. Prospective customers will no longer have to trust our company to any extent as far as daily operational security is concerned. Neither do they have to trust other companies. The idea is to allow the machines operated by each Company-Class to keep each other honest, and to automatically report security exceptions to all Stakeholders, including our Board and all member companies affected.
There is a hole in the plan, in that if globally over 33% of Ordering nodes were caused to act fraudulently, the factor of Trust would be destroyed. Nevertheless, elsewhere (on our website at itotchaincloud.com.au/security) we note the contribution of Rong Chen and Elastos in devising a hardware enhancement which Blockchain Miners must be fitted with in order to participate in Mining. This device ensures by design that no corps can act together in a gang of > 33% ownership and/or direction. The idea is currently operational with Elastos. It appears to be possible, therefore, to utilise the Elastos Blockchain itself to provide a Fraud Check service against the database.
These developments have caused ITOTCCA to review our monolithic design, opting now for isolation of Business Networks from each other at the permissions level, and restricting the scope of connectivity to be strictly internal to each separate Business Network (external web connections will of course still be possible, but safely shielded by whitelists under jvm-libp2p; see itotchaincloud.com.au/security). Future enterprise growth in network demand requires networks to be performance-optimised and segmented (isolated) to provide access with performance. The design is now to have a range of cloud servers per Business Network, usually with one “site” per Member Class. The optimum value for the total number of member-class sites per Cluster (possibly containing several Business sub-nets, or, on the contrary, with one Business Network spanning more than one cluster) is not known at this stage, although the apparent limit here, revealed by developing with BFT-SMaRt, is 12 member-classes for customers + 1 (for our Das_Fuhrwerk schema in each installation).
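As a worked note on that limit: BFT-SMaRt follows the standard Byzantine quorum bound of n >= 3f + 1 replicas to tolerate f faulty ones. The sketch below (our own arithmetic, not BFT-SMaRt code) shows why 13 sites is a comfortable configuration: it tolerates 4 Byzantine nodes, and a colluding corps would need 5 or more (> 33%) to threaten consensus.

```go
package main

import "fmt"

// maxFaulty returns the largest number of Byzantine replicas f that a
// cluster of n replicas can tolerate under the standard bound n >= 3f + 1.
func maxFaulty(n int) int {
	return (n - 1) / 3
}

func main() {
	// 12 customer member-classes + 1 das_fuhrwerk site = 13 replicas.
	n := 13
	f := maxFaulty(n)
	fmt.Printf("n = %d tolerates f = %d Byzantine replicas\n", n, f)
	// A colluding corps of f+1 = 5 nodes (about 38%, i.e. > 33%) could
	// threaten the guarantee, which is what the Ultimate Check defends against.
	fmt.Printf("corps threshold: %d nodes (%.0f%% of the cluster)\n",
		f+1, 100*float64(f+1)/float64(n))
}
```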
Each member-class (represented by its own schema and dApp) would ideally take responsibility for its own virtual hardware. This requires a Governance framework to be established in each Business Sub-Net. The virtual servers are all to be linked (across all n sub-networks in an installation, i) between member-class nodes within a Master/Master replication system. All member-classes carry identical copies of the entire installation’s databases, with one strictly private Business Channel per member (company), enforced by the hardware/software in the cloud, including by network design and by the RBAC (Role Based Access Control) permissions system. Here, the machines and networks are designed with the goal of ensuring clean play all the time, following the blockchain model.
The following series of 5 diagrams may clarify some details of the Trust System being developed:
[Diagram 1 of 5]
[Diagram 2 of 5]
The previous diagram represents a “Serialised” approach to Transaction execution, ordering and committal, whilst the following diagram shows a “Parallel” approach, where Transaction Execution, Ordering and Committal occur simultaneously with other stages of the same transaction-ordering processes.
IBM’s initial research reported a minor preference in efficiency for Parallel Processing, however our chosen transaction ordering and replication system appears to work more naturally in a serial mode.
[Diagram 3 of 5]
[Diagram 4 of 5]
[Diagram 5 of 5]
You may notice that we talk in terms of Member-Classes rather than IBM’s ‘Members’ at the virtual-node level. It seems more efficient to include all Members involved in similar activities at one Member-Class DApp and Node, to enable secure sharing of computing resources. Note that here, a Member is a Company, while a Member-Class is a group of Members with similar objectives or (IT) Missions. Members in a Member-Class share IT costs, have many activities in common, belong to one Business Sub-Network (comprised of other, business-related Member-Classes), and need to establish a common Governance System for their virtual Node.
In the above IBM article, it is envisaged that a section of non-internetworked transactions would be outside the network. This implies that a company’s own employees, and their connecting devices, can be trusted more than the internetworked transactional parties. We believe this to be highly questionable.
Enter the Elastos DID Sidechain, Elastos Carrier2 and the Elastos ELA/ESC Sidechain.
To this end, and in the spirit of the Elimination of Risk where feasible, we are striving to implement a Hybrid model with 2 layers. We intend to offer a system which, for all financial, or otherwise critical, transactions or actions, records traces on the Elastos Blockchain.
This is responsive, AI-led production, backed by a secure set of immutable audit trails, and based on Trust paid for by our investment in Elastos/BitCoin Merged Mining (as we pay our transaction fees), which also provides a built-in insurance guarantee against a global fiat-currency collapse. When you add to that the revolutionary ultra communications security enabled by an upgraded Kademlia system (with jvm-libp2p), the argument becomes more compelling.
The hole in the would-be iron-clad system is the factor of trust demanded by the providers and administrators of the Centralised Cloud Database system, in a setup we call Hybrid Web2/3, which computational and network efficiencies simply mandate. That is to say, current capacities of Networks, Devices & Blockchains are not sufficient to enable storage of Enterprise quantities of Data on-chain. We continue to require databases.
Our solution to the question of trust is to follow the IBM India lead, and develop a mutually assured Trust system involving all database servers, across all business subnetworks (i.e. supply chains) in an installation, which is designed to keep all players honest, and which is ultimately guaranteed by our copyright use of the Elastos Blockchain. Our methods are referred to by the package name “ChubbyChecker”. We use the Elastos Blockchain because it enables us to lay off the relatively high risk of fraud on the database networks onto the very low-risk blockchain, by recording traces of the transactions: (1) as they are intercepted and recorded as individual transaction traces at the middleware servers, upon submission (Pre-Registration); (2) as they are recorded after ordering into a Block, at the middleware servers; and (3) as they are separately notified by the databases at every Block Commit (Post-Registration). We calculate the Merkle Root of the Merkle Trees thus formed and write the Root value to the chain, so that when the 24th block in a cycle is committed, we can cross-check the records on the chain for external and internal consistency, including a comparison of state-change hashes across all database nodes within installations. Any discrepancy causes a warning to be issued to all affected companies/organisations, and affected production is halted while an investigation is conducted. Unless the discrepancy represents a “false positive”, the root cause is determined and appropriate action taken before production is resumed. All database operators understand that this automated surveillance is taking place constantly, so malicious interference is unlikely, though not impossible; the threat from Hacking persists, and these cross-checks insure against those external hackers.
Additionally, the system enables us to deal with simple machine malfunctions, which need not be malicious.
~ ~ ~ ~
Our solution to this question of trust, is to follow the IBM India lead, and develop a mutually assured Zero Trust system involving all database servers in blockchain-style Transaction-replication business-networks, which are designed to keep all players honest programmatically.
In an overall sense, the procedure is guaranteed by our practice of forming and recording Merkle Trees from Blocks of Transaction traces, followed by calculating the Merkle Roots of the Trees and recording them, one Merkle Root per ‘Slab’ of 24 Blocks of 1024 Transaction Traces, onto the Elastos Blockchain. This is part of the operation of the ChubbyChecker package.
The process anchors the data in a way that is immutable and re-checkable, so we can check for the presence of corruption. If inconsistency is found upon checking (which occurs preventatively on every occasion of recording a new Root, since the previous Root’s value is bound into the current Root’s computation and can be re-verified against it), alarms are automatically multicast to affected members. A Root Cause Analytic Procedure follows, led by our company, pending an “all-clear” and restart of interrupted production.
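A minimal sketch of the Slab-Root computation, assuming SHA-256 leaves over serialized transaction traces and binding the previous Slab’s Root in as the first leaf (so the link between Slabs can be re-verified); the real ChubbyChecker tree layout may differ:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// merkleRoot folds a list of leaf hashes into a single root, duplicating the
// last node when a level is odd, as in Bitcoin-style Merkle trees.
func merkleRoot(leaves [][32]byte) [32]byte {
	if len(leaves) == 0 {
		return sha256.Sum256(nil)
	}
	level := leaves
	for len(level) > 1 {
		if len(level)%2 == 1 {
			level = append(level, level[len(level)-1]) // duplicate odd tail
		}
		next := make([][32]byte, 0, len(level)/2)
		for i := 0; i < len(level); i += 2 {
			pair := append(level[i][:], level[i+1][:]...)
			next = append(next, sha256.Sum256(pair))
		}
		level = next
	}
	return level[0]
}

func main() {
	// One Slab = 24 Blocks x 1024 Transaction Traces (sketched with 3 traces).
	traces := []string{"trace-1", "trace-2", "trace-3"}

	// Chain the Slabs: the previous Slab's Root becomes the first leaf,
	// so the link between consecutive Slabs can be re-verified later.
	prevRoot := sha256.Sum256([]byte("previous-slab-root"))
	leaves := [][32]byte{prevRoot}
	for _, t := range traces {
		leaves = append(leaves, sha256.Sum256([]byte(t)))
	}

	root := merkleRoot(leaves)
	fmt.Println("Slab Merkle Root:", hex.EncodeToString(root[:]))
}
```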
~ ~ ~ ~
[Logo montage: BFT-SMaRt (“Byzantine State Machine Replication for the Masses”), jvm-libp2p network transport security, the Elastos Token Sidechain and Elastos Blockchain, IT/OT Chain & Cloud, sensors and actuators, and development/production tooling, with the mottos “Honor Ut Filiae”, “Treaty Yeah”, “Honour Not Reward”, “Walk As Daughters” and “Faith, Unity, Discipline”.]
A Blockchain such as Elastos ELA/ESC represents an immutable source of Truth, alongside an Elastos Distributed Identity Blockchain (the DID Sidechain). The latter will mesh with the Postgres “DataChain” (or “BlockBase”) to provide indisputable Identification of players across the entire system (so-called Non-Repudiability of transactions). The Elastos blockchains form a global network, alongside which our databases operate as permissioned blockchains. Naturally the bulk Corporate Data will belong on the new databases, but there is a role to be played by the flexibility and mobility of the Elastos system, especially when we consider the strengths of Elastos Essentials and dApps, jvm-libp2p’s security and the DID Sidechain together. So we still intend to utilise our own adaptation of the Elastos-Smartweb-Service github repo and docker image to join the DataChain systems to the Blockchain systems, by acting as a very smart reverse-proxy webserver, database server and blockchain server, all at once. Additionally, we have completed development of the financial/critical transaction registration and checking servers, for provision of Fraud Insurance against the databases. The idea is to enforce a “no corps > 33%” policy. Here a corps represents a body attempting to cause fraudulent transaction(s) to be perpetrated by directing more than 33% of the Ordering Servers to act together. Naturally, more than one person, acting criminally, may be involved in a corps. The registration phase is followed by the “checking” phase, where the reported results of Database Block Committals are compared to the records registered on Elastos at the times of execution (and then committing or rolling back) of financial transactions. This enforces honest execution and committing/rolling-back of transactions, since the Elastos records are made both immutably and automatically, with the integrity of the system being assured and guaranteed. Alarms are broadcast to all companies on any offended installation (offences remain possible, though policed at 100%, with alarms broadcast upon every offence detected), followed by the implementation of Contingency Plans. The Database Operators know this, so although we remain vigilant, we expect no breaches. This Ultimate Checking mechanism occurs many times daily, 24/7.
As you would expect, no database operators, or members of our own company, are privy to any encryption keys.
What companies stand to actually gain from our processes comes down to the way the Virtual Machines on the database nodes are designed to automatically keep each other honest, with the backup of ensuring utter honesty via our proprietary Ultimate Checking System. To explain: our System also operates automatically on those same virtual nodes, policing, at 100%, any “escapes” from the main Trust-Enforcing System (designed originally by IBM and, by our choice, using the University of Lisbon’s BFT-SMaRt package, but only as effective as any Byzantine Fault Tolerant system can be). With criminal collusion of operators and/or hackers, it would be possible to “override” the IBM/BFT-SMaRt security design. By performing our own checks against trace records of all original financial and otherwise critical transactions (as defined by you), made on the Elastos Blockchain, we prevent any such “escapes” from being perpetrated without knowledge of the crime(s) amongst all stakeholders. Such crimes will be automatically notified by alarms broadcast on the secure Elastos Carrier System, to all those affected, within minutes of being committed. Thus, as long as all Payment Gateways to finance facilities in Suspicious or Rogue States remain blacklisted by default at the Cloud Site, the system is secure, since database operators will not have access to alter black- or whitelists; that activity is in the domain of the customer company only. We add to this the impenetrable network transport security provided by jvm-libp2p, which bypasses the traditional Domain Name System (DNS), to emerge with a guarantee of “Total External Security” of all your database and data-transport networks, outside your company walls.
As explained on the Agreements page, and at the head of this page, we cannot do more than this as far as providing guarantees, since it always remains possible to defraud a company from within.
—————————————————————————–
More than 12 Member Classes in a Business Sub-Network
In the case where, for a particular business sub-network n = Np, the total of customer Member Classes exceeds 12 (the apparent BFT-SMaRt limit), we employ Load Balancers to allow intercommunication between Installations, say Iq .. Iq+k .. Iq+K [0 <= k < K+1]. In this way the total number of Member Classes can be scaled up beyond 12, indefinitely (as long as the networking allows this). The system works by programming the Database installations for the relevant Iq+k such that Iq+k notifies the load balancers of all other Iq+k+x clusters of all transactions incident on itself, thus forming a “notification Broadcast”. Each originating Member-Class Server must create only one copy of its own transactions per other cluster’s load balancer. This ensures that each client-originated transaction is copied only once per original transaction onto each remaining cluster in the Business Network Group, and then naturally ordered and replicated (by the local BFT-SMaRt packages) within each Joined Cluster.
Here each Cluster load balancer Server is treated as if it were an independent source of transactions within the relevant cluster.
Notice that the process involving the decision by the first BFT-SMaRt server, in any Cluster, to cut the next Transaction Block, does not affect the validity of the asynchronous outcomes on other Clusters. This is because the Transaction-Orderings within Blocks are proprietary to the individual Clusters, and not interdependent for validity. This is not to claim that False Positives will not be generated by some edge cases in the processes involved with Joined Installations. We are intending to adapt IBM’s Serializable Snapshot Isolation-Aware Heuristic to deal with these cases.
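A minimal sketch of the notification Broadcast just described, with all names illustrative: each originating server forwards a transaction exactly once to each other cluster’s load balancer, and the load balancer deduplicates by TxUUID, so a retry cannot double-order a transaction.

```go
package main

import "fmt"

// Transaction carries the originating TxUUID used for deduplication.
type Transaction struct {
	TxUUID  string
	Payload string
}

// LoadBalancer stands in for another cluster's ingress; it ignores any
// TxUUID it has already accepted, so each original transaction enters
// each Joined Cluster exactly once.
type LoadBalancer struct {
	Cluster string
	seen    map[string]bool
}

func (lb *LoadBalancer) Accept(tx Transaction) {
	if lb.seen == nil {
		lb.seen = map[string]bool{}
	}
	if lb.seen[tx.TxUUID] {
		return // duplicate: already handed to the local BFT-SMaRt ordering
	}
	lb.seen[tx.TxUUID] = true
	fmt.Printf("%s: ordering tx %s locally\n", lb.Cluster, tx.TxUUID)
}

// broadcast notifies every *other* cluster's load balancer once per transaction.
func broadcast(tx Transaction, home string, peers []*LoadBalancer) {
	for _, lb := range peers {
		if lb.Cluster == home {
			continue // never re-send to the originating cluster
		}
		lb.Accept(tx)
	}
}

func main() {
	clusters := []*LoadBalancer{{Cluster: "Iq"}, {Cluster: "Iq+1"}, {Cluster: "Iq+2"}}
	tx := Transaction{TxUUID: "uuid-0001", Payload: "INSERT ..."}
	broadcast(tx, "Iq", clusters)
	broadcast(tx, "Iq", clusters) // a retry is safely deduplicated
}
```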
—————————————————————————–
You may have noticed the references to “No Operator Read Access” and “No Operator Write Access”. These mean that no company’s data can be accessed for READ or WRITE purposes by an operator. We were working to employ “Homomorphic” “Client-Side Encryption” for all customers. However, even simple 1-second plain-text operations can apparently take more than one week to complete if homomorphic encryption methods are used. So we are returning to a regime where WRITE access by non-users (e.g. Operators) to user tables is totally disabled, and there will be no need to notify Company Officers upon the logging of any READ access to a customer’s table(s), since all private data will be encrypted (non-homomorphically) with keys possessed by customer companies alone.
Our systems do, however, require access to some unencrypted data fields in order for database security checking to function properly; non-homomorphic client-side encryption applies to all fields that do not enter into data-security computations. Fortunately, the data access required by our security functions excludes most of your data: only a very tiny subset is required by our security systems, and none of that appears confidential in any way (at least to us, at this point), being more in the realm of system-technical data. Remember that our GOOD agreement gives your data pride of place (unlike with Microsoft and Google, for example, who are currently scrambling to be able to analyse your data for their own purposes, driven by greed and the amorality of Accountancy, without decrypting it, somehow, and in fact anyhow if it can be done legally; this explains the intense research being done into methods allowing a practical equivalent of homomorphic encryption).
gRPC protocols (using HTTP/2) replace the older-style REST APIs for communicating requests and data inputs from the client, and responses from the blockchain and/or database back to the client; all this occurs through the smart-web forwarding proxy server. Here the gRPC protocols are implemented in TypeScript (at the clients’ devices) and in Java (in the Cloud). The smart-web server has Carrier2 installed onboard, guaranteeing security. Carrier2 is also installed via plugin on the Ionic dApp clients (which are developed hierarchically inside Elastos.Essentials), as well as manually on the IoT Edge client/servers (running ‘node-red’). Traditional Web2 technology is accommodated by the use of secure (HTTPS) REST APIs (e.g. Banking, Email).
The Internet of Things (IoT) system revolves around the IoT node-red servers on the Edge Client/Servers being in communication with the node-red server residing in the cloud, which communicates with the local member-class database. The database receives notifications from the node-red server and passes relevant IoT events on to the blockchains, also communicating via the smart-web service with the IoT Admin dApp.
This system also has the capacity for the databases to emit orders as JSON streams, if the orders (for example) require Operational Technology Domain events to occur automatically (e.g. manufacturing or production control/initialisation), and to send an order summary (or some other form of production control) as a JSON dictionary-styled object to the Node-Red server, where it is interpreted using node.js. The code is converted to a JavaScript object and operated on by Node-Red, penetrating into the OT domain safely (possibly involving the Ctrl-X movement automation platform).
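For example (a hedged sketch in Go, assuming a Node-RED flow that exposes an “http in” node at /orders on Node-RED’s default port 1880; the host name and field names are illustrative, not the production schema):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Order is a hypothetical shape for the JSON dictionary-styled object the
// database emits towards Node-RED; field names are illustrative only.
type Order struct {
	OrderID   string `json:"order_id"`
	Job       string `json:"job"`
	Quantity  int    `json:"quantity"`
	Operation string `json:"operation"` // e.g. "manufacture", "test"
}

func main() {
	order := Order{OrderID: "ORD-0001", Job: "Job 5", Quantity: 3, Operation: "manufacture"}
	payload, err := json.Marshal(order)
	if err != nil {
		panic(err)
	}
	// Assumed: a Node-RED flow with an "http in" node listening at /orders.
	resp, err := http.Post("http://node-red.example.local:1880/orders",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("Node-RED responded:", resp.Status)
}
```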
The actual “Things” generating messages and signals are connected at the sites by Raspberry Pi “edge” clients or similar (e.g. one of Canonical’s Partners offers “Industrial Pi” with Ubuntu Certification, ready for the Ubuntu “Core” reduced-footprint Operating System). The administration of the IoT system occurs on the IoT dApps, connected by node-red as well as java-grpc to the server(s), and by the onboard IOTA client to the “Tangle” for event validation (as for the node-red cloud server and the edge server). The IoT Admin dApp can be split effectively to allow networks to be under your own control and responsibility.
Both notification and actuation are possible, however with actuation, the response time of the system needs to be considered, as a cloud-based server may not satisfy the sub-millisecond requirements of some machinery. Edge-based servers and mechatronics would be needed to handle such control problems. Systems such as this are communicating Notifications to the Smart-Web Servers for database and blockchain recording purposes (including Data Analysis and AI operations) and to allow IoT Administrators to act as appropriate. As mentioned, the Postgres Database will be able to direct an Enterprise Data Item received which requires IoT (Operational Technology) actuation (eg a Sales Order for manufacturing) to the Node-Red server to be introduced into a production line. This is Flexible Manufacturing.
In summary, the electronic control required is provided either by the IoT devices themselves, or via components like Industrial Pi servers and PLC’s (Programmable Logic Controllers), locally, for groups of devices, but some types of OT process initiation, and IT data collection, can occur through the Enterprise IT system. All communications are shielded securely on networks by Elastos Carrier, requiring the Carrier-Native and its Nodejs and Cordova “wrapper” packages to allow node.js and typescript apps to access Carrier.
ITOTCCA chooses to implement the industrial version of node-red, “node-red-industrial”. Both these packages enable the visual programming of connections between IoT devices, edge client/servers and the main IoT server in the cloud, including the specification of node.js functions and system calls to be generated upon events, all varieties of networked intercommunications, ordering of connections, etc. The main differences between node-red and node-red-industrial concern Industry 4.0 readiness, with the industrial version implementing such features as an interface for Bosch-Rexroth’s ctrl-X automation-of-movement Operational Technology system, for factories of the future.
Usually IoT notifications are dealt with on a “Report-by-Exception” basis, at the cloud level, however a continuous set of signals is often required at the edge level, to maintain control. For AI-driven data analysis much more data can be collected.
The technology of the fee-free IOTA “Trust Layer”, being developed for IoT, is already at version 2.0 and has plans to implement Smart Contracts (presumably fee-less also). ITOTCCA is incorporating IOTA as our provided Trust Layer of choice for IoT operations. IOTA adds a layer of reference against which all IoT actions can be validated. The absence of fees is made possible by having each transaction validate 2 others before completion; in this way there is no “Mining” involved as such. IOTA is designed to support Industry 4.0 Standards, and is currently popular in Europe (due to the German-originated Industrie 4.0 standards, and IOTA’s headquarters being in Berlin; language translation is natively provided in German, French and Italian, as well as English). The IOTA response cycle-time is measured in seconds.
Simplified examples of production flows involving Node-Red-Industrial are shown below. Here the Head company initiates some Jobs, and we show a range of possible control and communication flows within Supplier companies in the second and third shots. Job 1 is a product test, while Job 5 entails the manufacture of 3 sub-assemblies, by 2 different supplier companies. Job 2 comprises a somewhat more complex set of Electro-Mechanical Unit Tests.
[Three Node-RED flow screenshots: Head-company job initiation, followed by two supplier-company control and communication flows.]
We utilise one “oseer” schema per member-class, so there is one schema for each of the n x m member-classes across all sub-networks, and one for each of the F(i) non-internetworked future dApps, as well as one for each of the A, B and C dApps (Community Housing, RealEstate and convey-IT, respectively). The “oseer” schemata provide Business Process Control for each Member-Class dApp. Our schema called ‘das_fuhrwerk’ is the Head Schema, for top level customer on-boarding, usage tracking and payments, and to provide our own company’s Enterprise dApp for our accounting, planning and administration purposes.
Although “top-level” to us, in this automatic Trust scenario ‘das_fuhrwerk’ is equal with every other member-class on every sub-network as far as appropriate privileges are concerned, and is operated under the business-process supervision of its own “oseer” schema. The scope of control of the dApp connected to the ‘das_fuhrwerk’ system is identical to every other member-class, with the exception of its ability to get reports on user (employee) numbers in each month, on some sub-networks, for billing purposes.
The real basis of separation of Business Channels, is achieved via the Permissions systems including RBAC in Kubernetes, the Postgres User/Role system and the Ionic permissions system. These systems allow fine grained control of access to menus, data, etc. As the system grows, a single Business Network running separately-permissioned Business Sub-Nets, would be separated, eventually, into multiple Business Networks (with id == i), containing sub-nets (id == n), with member classes (id == m). A node would therefore have a full id of “Node(i,n,m)” with individual (“Member” or Company) business channels denoted as j on that node. A single Company’s full id would therefore be “Member(i,n,m,j) (of Member-Class (i,n,m))”. Take note that one business sub-net (“n”) is equivalent to one business system, say a single supply-chain (for example a supermarket supply chain or a petrochemical supply chain).
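In code terms, the addressing scheme can be pictured like this (illustrative Go types only, not the Bucordo sources):

```go
package main

import "fmt"

// NodeID identifies a member-class node: Node(i,n,m).
type NodeID struct {
	BusinessNetwork int // i
	SubNet          int // n: one business system, e.g. a single supply chain
	MemberClass     int // m
}

// MemberID identifies one company's private business channel on a node:
// Member(i,n,m,j).
type MemberID struct {
	NodeID
	Member int // j
}

func (n NodeID) String() string {
	return fmt.Sprintf("Node(%d,%d,%d)", n.BusinessNetwork, n.SubNet, n.MemberClass)
}

func (m MemberID) String() string {
	return fmt.Sprintf("Member(%d,%d,%d,%d)",
		m.BusinessNetwork, m.SubNet, m.MemberClass, m.Member)
}

func main() {
	m := MemberID{NodeID: NodeID{BusinessNetwork: 1, SubNet: 2, MemberClass: 3}, Member: 4}
	fmt.Println(m.NodeID, m) // Node(1,2,3) Member(1,2,3,4)
}
```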
In an Ionic dApp, such as we use for the Front End GUI of any Elastos dApp, (either React or Angular-based), you will find manifest.json.
This file is where you can whitelist websites (including the cloud database/blockchains and server site). All websites are blacklisted by default, until you whitelist the sites required by your dApp. This enables the jvm-libp2p system to do its magic and connect in a private and ultra-secure way to websites, with no websockets exposed.
(see the Elastos Whitepaper at https://www.elastos.org/downloads/elastos_whitepaper_en.pdf).
Our CHEIRRS dApp Front End (Ionic-alone version) looks like this at the current stage of development:
[Screenshot: CHEIRRS dApp Front End]
Our dApp for general internetworked supply and trading business situations looks like this (The General); with the source code (such as it is), at https://bitbucket.org/itotcca/the-general/src/main/README.md
[Screenshot: The General dApp]
Something of the potential of this project is seen here, in the Docker version we are also developing, mostly because we can scale this on a desktop computer far more than using Kubernetes.
[Screenshot: the Docker version]
Further analysis of main Bucordo ideas:
Required Bucordo:
One Bucordo node per member or member-class, each with a substrate database server, the node running the BFT-SMaRt Java package; each node replicates from its own substrate server to all other Bucordo nodes, which then each complete the replication, via their own BFT-SMaRt applications, to their own substrate servers, in a 6-stage process. Communication is achieved via Notifications. (Replicating to a Bucordo node involves only a notification.)
Trust in operators is policed programmatically as follows:
- The first stage is to cause ordering then execution of submitted transactions in Blocks. This is the initial ordering/execution/replication process.
- The second stage is to commit or abort the same executed transactions, using IBM’s SSI (Serialisable Snapshot Isolation) heuristic to determine which transactions need to be aborted (rolled-back).
- Substrate Masters (“lagerhaus”) send their own State-Change Hashes to their own additional system tables that store brief details of Transactions and Transaction_Blocks. These Hashes must be identical across nodes; otherwise, an alarm is broadcast to all Stakeholders. A Contingency Plan is implemented if an Alarm State exists.
- This State-Change Hash is added to the next Block, currently being formed, early in stage 2 above.
- The penultimate stage involves a Checkpointing process, which the IBM article indicates should be performed at less than the rate of once per Block-Committal (for efficient performance and optimum outcomes, mainly to minimise false positives), on the Substrate Masters, with an outcome of Rolling Back any remaining inconsistent Transactions. Inconsistent transactions at this stage raise a Warning, which is broadcast to all stakeholders, with Status to be confirmed at the next Ultimate Check, and Contingency Plans are implemented. The detection of False Positives is an essential part of the process at this stage.
- Before finalising the cycle, a Consistency Check is run against the Elastos Blockchain’s records of Transaction Traces. If Stage 5 above were completed without incident, but the Ordering services had acted in collusion with the cooperation of > 33% of Sites, or external hackers had been involved, in order to defraud the system, this situation would still not have been detected. To achieve the detection of such a situation, Transaction Traces are recorded as objects, or “Assets”, on the Elastos Blockchain before being executed on the databases. As each Block of Transactions is executed, replicated and then committed (or aborted), after “home” Transactions are recorded, further Assets are added on the neighbouring nodes and, Block-by-Block, their Traces are registered on the Elastos chain. This results in comprehensive lists of Pre-registered Transaction Trace files (1024 individual Traces x 24 Blocks, i.e. 24 x 2 files/node x N(i), the number of nodes in Installation [i]) containing original Transaction Traces formed into Blocks, with identical Trace Orderings, as the other nodes in an installation replicate and order the original transactions.
Anti-Fraud checking will be driven by reviewing the records of the Assets (Transaction Traces) registered on the immutable Elastos Blockchain, with the intention of investigating the status of external and internal consistency of the Transaction Traces.
The content of these objects will be influenced by the status of the recorded files, with fields for “executed”, and either “rolled-back” or “committed”, “amount”, Client details, “TxUUID”, “BlSeq” (k), etc.
Notice that, across nodes, for a series of Assets with identical Original TxUUIDs, either |{executed tx}| = |{committed tx}| OR |{executed tx}| = |{rolled-back tx}|, or inconsistency exists. In all cases the length of the series with identical original TxUUIDs must equal the number of nodes in the installation being examined (a minimal sketch of this check appears after this outline). Unique transaction traces are registered with unique TxUUIDs (driven by the Tx(UUID)s plus the Node Ids generated by our own applications at Pre-Registration) at the different stages of their lives (executed, possibly rolled-back, probably committed), to form a series of records of the history of the uniquely identified transaction traces.
There are also possible “False Positive” indications. Please refer to the section below for a comprehensive list of Internal Consistency Checks. Overall the cross-checking relies on the pre-registered traces to enforce external consistency.
This completes the 6-stage production cycle, incorporating Fraud checks.
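The sketch promised above: a minimal Go rendering of the external-consistency invariant for one original TxUUID, with illustrative names only.

```go
package main

import "fmt"

// TraceStatus summarises one node's Asset records for a single original TxUUID.
type TraceStatus struct {
	Node       string
	Executed   bool
	Committed  bool
	RolledBack bool
}

// consistent applies the invariant described above for one original TxUUID:
// the series must contain exactly one record per node (length N(i)), and
// |{executed}| must equal |{committed}| or |{rolled-back}|.
func consistent(series []TraceStatus, nodesInInstallation int) bool {
	if len(series) != nodesInInstallation {
		return false
	}
	var exec, commit, abort int
	for _, s := range series {
		if s.Executed {
			exec++
		}
		if s.Committed {
			commit++
		}
		if s.RolledBack {
			abort++
		}
	}
	return exec == commit || exec == abort
}

func main() {
	series := []TraceStatus{
		{Node: "n1", Executed: true, Committed: true},
		{Node: "n2", Executed: true, Committed: true},
		{Node: "n3", Executed: true, Committed: true},
	}
	fmt.Println("externally consistent:", consistent(series, 3)) // true
}
```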
- When the databases are rolled out, they will have a structure of schemata, tables and columns identical to each other, but with an extra 6 new tables per installation (postgres_system.public.certs, postgres_system.public.blockstore, postgres_system.public.blocks_transactions, postgres_system.public.ledger_preregister, postgres_system.public.ledger_postregister_exec and postgres_system.public.ledger_postregister_commit), located in a postgres_system.public database/schema in the cluster. There will also be new “update_ledger_postregister_exec” and “update_ledger_postregister_commit” triggers (one of each per ordinary table throughout the databases) and trigger functions (one of each per ordinary schema in the databases), and the new “update_block_count” trigger and function (fired by the new postgres_system.public.ledger_postregister_commit table) targeting the new postgres_system.public.blockstore system table in the postgres database. The records in the ledger_postregister_exec and ledger_postregister_commit tables are updated by the firing of the “update_ledger_postregister_exec” and “update_ledger_postregister_commit” triggers. On the ledger_postregister_commit table, the “update_block_count” trigger fires only once upon the committing of every T unique transactions. Note that the Tx(UUID)s are created at Pre-Registration (by the receiving load balancers for the VMs) and inserted into each query subsequently processed, leading to automatic inclusion of UUIDs upon replication.
- The transactions are identified by their multiply-copied originating TxUUIDs combined with the node ID upon which the current TxUUID is being replicated. As the ordinary schemata fire their own “update_ledger_postregister_exec” and “update_ledger_postregister_commit” triggers upon executing and committing the ordered transactions, unique TxUUIDs are counted on postgres_system.public.ledger_postregister_commit; thus there will be N(i) copies of each TxUUID (one per node in the installation) in each of the two tables ledger_postregister_exec and ledger_postregister_commit. When ledger_postregister_commit has counted T unique transactions, “update_block_count” is fired and postgres_system.public.blockstore.BlSeq is incremented, along with updates of the other fields in postgres_system.public.blockstore, on behalf of all databases in the installation, and postgres_system.public.blockstore is locked until BFT-SMaRt has committed the ordered Block. The information included in these records is drawn from ledger_postregister_commit’s own database records. Currently, T = 1024.
- As Blocks are committed, the postgres_system.public.blockstore table keeps a cyclic count of committed Blocks in the BlSeq field, updated by firings of the update_block_count trigger on the new postgres_system.public.ledger_postregister_commit table, with an enum operating on blockstore which runs from 0 to K-1. When that BlSeq enum reaches a count of K-1, the enum resets, and a Notification is broadcast to the Chubby-Post-Register and ChubbyChecker containers announcing the end of an Ultimate-Registration Phase (the number of Blocks committed having reached K), thus signalling the commencement of the next ChubbyChecker round of checks. Currently, K = 24.
- With regard to the method of recording the pre-registered transaction states: the load balancer onboard each virtual machine intercepts each transaction and writes the required detail, encrypted, to the Elastos Blockchain as a file, also recording the returned Elastos FileID in a JSON file which is accessible to the ChubbyRegister container attached to the ‘home’ transaction database for each transaction. Upon Transaction Execution, and again upon Transaction Commitment/Roll-back, the lists of (a) executed, (b) committed and (c) aborted Transactions are written as a Block of details to Elastos, sourced from the orderer nodes rather than the databases (thus implying that each Orderer Node has an Elastos Application Id, and that the current Block has passed the Consistency checks of Process 6). The associated ChubbyRegister container reads the Elastos files after Commitment of the current Block of transactions, and creates database requests from the file contents, using the list of transactions by UUID contained in the Block (as recorded in the Elastos records of the current Committed Block’s content), assigning correct BlSeq and block_id values, and writing the Pre-Registered transaction details to the postgres_system.public.ledger_preregister table on the postgres database for the installation. (OpCodeExec and OpCodeCommit will all have the value “false”.) The foregoing actions occur independently on each node, resulting in databases which have their Pre-Registration details completely updated and ready to be cross-checked, thus securely defining the boundaries of one Block of Pre-Registered Transaction Traces, a posteriori.
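A minimal sketch of the block-counting wiring described above. The table, trigger and column names follow this page’s description; the PL/pgSQL body, the tx_uuid column and the connection string are our illustrative reconstruction (assuming PostgreSQL 11+ and the lib/pq driver), not the shipped code.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // PostgreSQL driver
)

// Illustrative reconstruction of the update_block_count trigger: count
// committed transactions and advance the cyclic BlSeq when a Block of T
// transactions is complete, notifying listeners when K Blocks have passed.
const updateBlockCountDDL = `
CREATE OR REPLACE FUNCTION public.update_block_count() RETURNS trigger AS $$
DECLARE
    t CONSTANT bigint := 1024; -- T: transactions per Block
    k CONSTANT int    := 24;   -- K: Blocks per Ultimate Registration Phase
    uniq bigint;
BEGIN
    SELECT count(DISTINCT tx_uuid) INTO uniq FROM public.ledger_postregister_commit;
    IF uniq % t = 0 THEN
        -- Cut a Block: advance the cyclic sequence 0 .. K-1.
        UPDATE public.blockstore SET "BlSeq" = ("BlSeq" + 1) % k;
        -- When the enum wraps, announce the end of an Ultimate-Registration
        -- Phase to the Chubby-Post-Register and ChubbyChecker containers.
        IF (SELECT "BlSeq" FROM public.blockstore) = 0 THEN
            NOTIFY ultimate_registration_phase_complete;
        END IF;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER update_block_count
AFTER INSERT ON public.ledger_postregister_commit
FOR EACH ROW EXECUTE FUNCTION public.update_block_count();
`

func main() {
	// The six system tables live in the postgres_system database, so we
	// connect there and address them as public.<table>.
	db, err := sql.Open("postgres", "dbname=postgres_system sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if _, err := db.Exec(updateBlockCountDDL); err != nil {
		log.Fatal(err)
	}
	log.Println("update_block_count trigger installed")
}
```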
Following the lead of IBM, we are looking towards the BFT-SMaRt Byzantine Fault Tolerant Transaction Replication & Ordering Service, which is to be installed on each Peer Bucordo node, accepting input from the “gRPC-servlet” module on the Smartweb server. The Bucordo Ordering peers (BFT-SMaRt Client & Server) co-reside with the gRPC servlet, the Elastos Carrier/DID system and the Chubby-Post-Register and ChubbyChecker containers in several “pods” on a virtual machine, altogether comprising one “node” when the database is included; there are 6 to 8 nodes per VM, and 4 VMs on our development workstation.
“Byzantine State Machine Replication for the Masses” (BFT-SMaRt)
We have incorporated a total of 29 checks (0 through 28) in ‘ChubbyChecker’.
At the completion of Registration of the Kth Block (ie at the end of each Ultimate Registration Phase), we begin the following set of 29 Checks:
Check ALL the following conditions are MET, else ALARM STATE:
(Note that currently the number of Blocks in a single Ultimate Registration/Checking Phase, K = 24, and there are designed to be 1024 Transactions per Block.)
- Check_1. Node Identities: Verify that the nodes involved in the consensus process are the expected, authorized nodes.
- Check_2. Across all transactions, in Post-Registration-execute and in Post-Registration-commit/abort phases, block sequence numbers must be continuous.
- Check_3. In the Pre-Registration files, ostensibly continuous among the K Blocks, the Traces cannot be divided into Blocks a priori. However, by writing a block marker file to the etcd/DHT system when each block is denoted (i.e. decided by machine consensus), recording the TxUUIDs contained in the block, and writing an encrypted file with that Smart Contract’s Id locally (i.e. to the VM or “installation”), ChubbyChecker can assign traces from the Pre-registration Stage to blocks, and check Block continuity a posteriori. Using this method, on each node, the total of Pre-registered (submitted) Transactions in one Ultimate Checking Phase must equal the total of TxUUIDs executed (in the Post-Registration-execute phase), AND must equal the total of TxUUIDs committed plus the total of TxUUIDs aborted (in the Post-Registration-commit/abort phase), across all Post-Registered Blocks in the same Checking Phase.
- Check_4. Across all nodes, and for both Post-Registration-execute and Post-Registration-commit/abort phases, for a set of transactions in BlockSeq (k) with TxUUID = z: ([Total of TxUUIDs with opCodeExec == “true”] == [Total of TxUUIDs with opCodeCommit == “true”] OR [Total of TxUUIDs with opCodeExec == “true”] == [Total of TxUUIDs with opCodeAbort == “true”]) AND len{TxUUID = z} = N(i), the Number of nodes in the installation [i] being examined, for all (k, z) in [i]. (Transactions in k are determined a posteriori from phase (c), as in Check_3.)
- Check_5. Block Hash Integrity: Validate that the block hash matches the expected value, confirming that the block has not been tampered with.
- Check_6. Transaction Hash Integrity: Verify the integrity of individual transactions within the block by checking their hashes.
- Check_7. State Consistency: Ensure that the state after each block execution is consistent across all nodes, reflecting correct and deterministic execution.
- Check_8. Commit Consistency: Confirm that the blocks committed by different nodes are identical, with no divergence in the blockchain.
- Check_9. Byzantine Fault Detection: Analyze any discrepancies in node responses to detect possible Byzantine behavior.
- Check_10. Message Authenticity: Verify that all messages exchanged between nodes during the consensus process are authentic and correctly signed.
- Check_11. Network Partition Detection: Detect any network partitions or communication issues that might have influenced the consensus process.
- Check_12. Latency and Timing Consistency: Ensure that the timing of block commits and transaction execution is within expected bounds, identifying any anomalies that could indicate manipulation or delays.
- Check_13. Redundant Storage Consistency: Verify that redundant copies of transaction data and state information are consistent across nodes.
- Check_14. Recovery Point Integrity: Ensure that the recovery points or checkpoints created during the process are accurate and can be used to restore the system state if needed.
- Check_15. Across all checked transactions (Pre-Registration-submit, Post-Registration-execute and Post-Registration-commit/abort phases), each signature is traceable to a current client from the current installation’s range of member-class servers’ (i.e. nodes’) clients.
- Check_16. Across all checked transactions, the ‘entnum’ field in the transaction’s contents, when compared to the client’s registered ‘entnum’, must match.
- Check_17. Across all checked transactions, the ‘schema’ field in the transaction’s contents, when compared to the client’s available ‘schemata’, must match one item. These are fully qualified names, i.e. database.schema.table.
- Check_18. The “amount” field in all financial transactions (and, where amounts exist, in any other transaction) must be constant across preregistration (submission), execution and committal/roll-back.
- Check_19. On each node, TxUUIDs are generated at Pre-Registration, so in each of the k BlockSeq numbers, for (a). Pre-Registration-execute phase, (b). Post-Registration-execute phase and (c). Post-Registration-commit/abort phase, each TxUUID occurs once exactly, and the TxUUIDs in (a) are “1 to 1” AND “onto” the sets for (b) and (c). See Check_3 above for method of distinguishing blocks.
- Check_20. On each node, across phases (b) and (c), [Total of TxUUIDs with opCodeExec == “true” (which should be all TxUUIDs)] == [Total of TxUUIDs with opCodeCommit == “true”] + [Total of TxUUIDs with opCodeAbort == “true”].
- Check_21. On each node, the total number of structs in the array of structs, spanning each BlockSeq (k), is equal to the agreed number (eg 1024) of transactions per Block, and this holds for all of (a). Pre-Registration-execute, (b). Post-Registration-execute, and (c). Post-Registration-commit/abort phases. Here the number of transactions in each BlockSeq for phases (a) and (b) is determined a posteriori by reference to phase (c), as in Check_3.
- Check_22. Across all nodes, at the Post-Registration-commit/abort phase, for a full set of transactions in BlockSeq (k): the ordering of transactions must be constant for all member class servers (nodes), and for all k under examination.
- Check_23. The value of the previous Transaction Trace Merkle Root is retrieved from the Smart Contract and compared to the value recorded on PostGIS. The values must be identical.
- Check_24. We then rebuild the Merkle Tree for the previous slab using the transaction trace database’s history, and compare the rebuilt root with the one retrieved from the blockchain. If the rebuilt root differs from the blockchain root, it could indicate tampering in the underlying transaction data. This approach verifies not only the stored root but also the integrity of the transactions that contributed to it.
- Check_25. We validate that every transaction contributing to the current Merkle Tree has a corresponding entry in the metadata store. We ensure that these entries match the expected data (e.g. transaction IDs, hash values).
- Check_26. Instead of relying solely on the Merkle Root, we include hashes of individual transactions in the leaves of the Merkle Tree. This ensures that even if the Merkle Root is consistent, individually tampered transactions can be identified by checking their hashes.
- Check_27. We use periodic snapshots of the database, stored securely in an independent location, to cross-validate the database state. Snapshots are used to detect discrepancies that might otherwise go unnoticed due to tampering with live data. Thus we take database snapshots at the end of each slab; these snapshots are securely stored using immutable storage, and we use them to rebuild and validate historical Merkle Trees.
- Check_28. Proof-of-Consistency algorithms (like those used in blockchain systems) are used to ensure that:
- The current Merkle Root is consistent with the previously recorded root.
- No invalid transactions have been added between slabs.
- Check_0. Across all nodes, after each block of transaction-commit processes, all the State-Change Hashes recorded by all replicas (i.e. nodes) must be identical.
For full Security Compliance, this Check, along with Checks _1 to _28, can only be performed after all proposed database alterations have been completed.
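A minimal sketch of Check_0, assuming each replica reports a hex-encoded State-Change Hash for the Block just committed; a production version would compare against a quorum rather than an arbitrary reference node.

```go
package main

import "fmt"

// check0 verifies that every replica reported the same State-Change Hash.
// It returns the list of diverging nodes relative to an arbitrary reference.
func check0(stateChangeHashes map[string]string) (ok bool, diverging []string) {
	var reference string
	for _, h := range stateChangeHashes {
		reference = h // pick any node's hash as the reference
		break
	}
	for node, h := range stateChangeHashes {
		if h != reference {
			diverging = append(diverging, node)
		}
	}
	return len(diverging) == 0, diverging
}

func main() {
	hashes := map[string]string{
		"member_class_1": "9f2c...",
		"member_class_2": "9f2c...",
		"das_fuhrwerk":   "9f2c...",
	}
	if ok, bad := check0(hashes); !ok {
		fmt.Println("ALARM STATE: state-change hash divergence at", bad)
	} else {
		fmt.Println("Check_0 passed: all replicas agree")
	}
}
```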
As soon as any of these checks returns a positive Fault, an Alarm is multicast, whereupon the databases in the offended installation are shut down gracefully, IoT operations for the relevant installation(s) are overridden and set to “Operation without Database”, and then the affected databases are restarted in ‘Investigation’ mode, with ordinary users and operators excluded.
All the preceding checks have been implemented as Go functions within the ChubbyChecker package, although they cannot operate with the full cooperation of the database internals until our Postgres source-code alterations are complete.
The “Das_Fuhrwerk” server(s) in the offended installation(s) is the basis from which we conduct investigations (it has an ‘investigations’ table). It falls to Executive Directors and the CEO, CFO and CIO to conduct the investigations. Database, ChubbyChecker and ChubbyRegister log files are examined. The alarm will have been issued with a Code attached. The meaning of the code accompanies the announcement, and will correspond to one or more of the checks outlined above. It is not possible, obviously, to predict the course and duration of the Root Cause Analysis.
The idea is to utilise the initial buffered command line input prompts, in main.go, for trusted staff to issue Status Update broadcasts as the investigation proceeds. All data entered into the Investigations Table is reviewed by ITOTCCA directors and is made available to Officers of Companies involved in the offended Installation(s), to check transparently the logic of conclusions drawn and evidence provided. Note that the first line of examination attempts to define whether the fault constitutes a “False Positive”.
Normally, only 1 installation would be in an “offended” state at any time, however we do not rely on this. Multicasts are restricted to officers of companies within offending/offended installations.