Enterprise Networks, with an Example

Kubernetes and YAML Files

From Assemblage to Machine

 

[Figure 1: Kubernetes installation diagram]

The following is an example .yaml file for composing a single Kubernetes cluster on the single Master Node of a Minikube installation.

“YAML” originally stood for “Yet Another Markup Language”; it is now a recursive acronym for “YAML Ain’t Markup Language”.
This example will give you a development system only, once combined with your own database and your own Elastos/Trinity-Browser/Ionic front-end DApps integrated with that database. There are serious security issues with this arrangement if it were exposed to the internet, so you should research the networking requirements for securing a production system. We also highly recommend you comprehend the following article before proceeding to production: Running Postgresql on Kubernetes. The name of the yaml file itself is arbitrary.

[Interestingly, the Elastos Foundation is helping to save the future of Ethereum and its revolutionary smart contract blockchain, since the volume of data stored on Ethereum’s single mainchain has been threatening to choke the Ethereum system.

Elastos (along with other blockchain providers in their own systems) has opted to provide an ‘out’ for Ethereum by setting up Elastos Sidechains to handle Ethereum Smart Contracts on one of the Elastos nets. The Elastos system has virtually unlimited scalability due to its non-mainchain design, arranging the chains as branches which may be multiplied indefinitely. Beside the Ethereum Sidechain is a NEO smart contract Sidechain, now also a sustainable option for Smart Contract Programmers on Elastos. Our own DApps will use the Ethereum Smart Contract system. See item 8 in the above diagram.]

Running “kubectl apply -f assemblage.yaml” instantiates the machine from an assemblage similar to (though somewhat simpler than) Figure 2, below.

The above diagram, Figure 1, shows a non-inter-networked, single-member-class system. Other machines involve internetworking and multiple member classes.

Some Views of a More General System

You could generalise this system to a case where, say, there were 3 ‘member-classes’ (see The General page). All member classes belong to the same Business Network; however, there are essentially 3 different, as yet unspecified, roles or ‘missions’ amongst the participants in the Network. We split the system into 3 deployment-groups – one for each member-class (role or mission) – plus the blockchain deployment.

The main difference between the 3 auth-db-app-memclass-x groups is in the structure of tables and columns in the different database schemata these member classes require .. the database is almost everything .. it’s where your work begins if you follow this mode of development. In this case, we use postgres schema-level classification (default “public”) to create one schema per member class, plus one IoT schema per member class and an Oseer schema, all on a single database.
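
As a rough sketch of that multiplication (all names here are invented for illustration, and assume a docker postgres container named ‘pg’ as in the workflow below), the schemata might be created like so:

# A minimal sketch: one schema per member class, one per IoT class, plus Oseer.
docker exec -i pg psql -U postgres your-db-name <<'SQL'
CREATE SCHEMA memclass_0;
CREATE SCHEMA memclass_1;
CREATE SCHEMA memclass_2;
CREATE SCHEMA iot_0;
CREATE SCHEMA iot_1;
CREATE SCHEMA iot_2;
CREATE SCHEMA oseer;
SQL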

You should initially develop the database in docker (look up docker postgres images), not kubernetes. One links a PgAdmin4 container (in a “sudo docker run ..” statement) to a running docker postgres container’s network, logging in on PgAdmin4 (in a web browser) to view and work with the actual postgres database in the other container. As noted on the previous page, the central trigger function for updating your master (general) ledger is not a minor task to complete correctly: transactions may be added at any real time and date (upon actual entry), but with a Transaction Date that is unrelated. Every transaction previously recorded (in the ledger accounts affected) whose Transaction Date falls after the newly inserted record’s Transaction Date must be updated with the new amount credited or debited to the relevant accounts, and this process must ripple up through the transactions on the affected accounts until the most recent transaction in each account is reached and updated.
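
A minimal sketch of that docker arrangement (container, network and credential values are placeholders, not prescribed by this article):

# Run postgres and PgAdmin4 on a shared user-defined network.
sudo docker network create pg-net
sudo docker run -d --name pg --network pg-net \
  -e POSTGRES_PASSWORD=change-me postgres:alpine
sudo docker run -d --name pgadmin --network pg-net -p 8080:80 \
  -e PGADMIN_DEFAULT_EMAIL=you@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=change-me \
  dpage/pgadmin4
# Browse to http://localhost:8080, log in with the email/password above,
# and register a server with host 'pg', port 5432, user 'postgres'.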

You add tables, columns and primary keys initially. The PL/pgSQL extension is used to code procedural functions as trigger functions on some tables: the insertion, update or deletion of a record in the table fires a trigger on the database, which runs the associated function. This is useful for automating sequences of data processing, and effectively opens the entire database to your coding and procedural needs. It is much more powerful than plain SQL. An understanding of COBOL or a similar business procedural language (with ‘complete’ language capacity) helps here.
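
For a flavour of the mechanism only – the table and column here are invented, and the real ledger ripple-update described above is far more involved – a trigger function is wired up like this:

# Minimal PL/pgSQL trigger sketch (hypothetical table 'ledger_entry').
docker exec -i pg psql -U postgres your-db-name <<'SQL'
CREATE OR REPLACE FUNCTION stamp_entry() RETURNS trigger AS $$
BEGIN
  -- record the real entry time, distinct from the Transaction Date column
  NEW.entered_at := now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER ledger_entry_stamp
  BEFORE INSERT ON ledger_entry
  FOR EACH ROW EXECUTE FUNCTION stamp_entry();
SQL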

You also need a complete map of Associations between Tables, where a “foreign key” field in one table links to the primary key of a record (whose id equals the field contents in the calling table) in an associated table, so that data in the associated table’s record becomes available to a DApp request that hits the calling table first. Be aware that the process of mapping Associations is not trivial either, and needs to be completed as thoroughly and carefully as possible. As your project progresses you will add to the map by defining foreign key fields in tables and pointing them, in the definition, at primary keys (in ONE-ONE, ONE-MANY and MANY-MANY relationships, where the MANY-MANY case requires a join table). Note that each primary key in the database must be composed of a single field, except in join tables, which have a composite primary key made from the 2 primary key fields of the tables to be joined.
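
A sketch of both association styles, with invented tables – a ONE-MANY foreign key, then a MANY-MANY join table carrying the composite primary key:

# Hypothetical tables: an 'invoice' belongs to a 'member'; 'invoice' and
# 'product' are joined MANY-MANY through 'invoice_product'.
docker exec -i pg psql -U postgres your-db-name <<'SQL'
ALTER TABLE invoice
  ADD CONSTRAINT invoice_member_fk
  FOREIGN KEY (member_id) REFERENCES member (id);

CREATE TABLE invoice_product (
  invoice_id integer REFERENCES invoice (id),
  product_id integer REFERENCES product (id),
  PRIMARY KEY (invoice_id, product_id)  -- composite key: join tables only
);
SQL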

As mentioned elsewhere, first-, second- and third-level database normalisation is recommended. It may be a headache; however, without normalisation it is virtually guaranteed that your database will not function as you want.

It should be fairly straightforward, extending the simple case, to decipher how to set up a kubernetes installation for any number of member classes.

Make sure you list the Service spec first, before its Deployment or StatefulSet spec, as pods require their own service to be in place in order to become contactable (see the “kind” fields in the yaml file).

The Haskell server may be configured for multiple database schemata. Note: there is a 4th member-class, plus IoT and Overseer/Administration schemata and DApps, which are explained below. We require one replicated, configured and programmed Redis pod per schema (ten in the yaml below: one per member-class schema, one per member-class IoT schema, one for the IoT Oseer and one for the Oseer schema).

In the machine, every transaction will originate from a DApp session (possibly an IoT DApp) and will hit the Blockchain first, to ensure the integrity of the system. Subsequently the databases (the Redis in-memory datastore and the Postgres structured, persistent RDBMS) are updated with the bulk data from the transaction; not much data is actually stored on the blockchain. The connections between the device running the DApp in the field and the Blockchain/Database cloud installation are enclosed in Elastos’ P2P Carrier, ensuring security and neither requiring nor allowing the use of web-sockets to connect. The P2P Carrier system relies on an encrypted translation of traditional web addresses to node IDs in Carrier: in actual operation there are no “web-sockets” of the traditional insecure variety used everywhere else (outside of Elastos) on the internet. Encrypted node IDs are securely recorded on a blockchain, tamper-proof. A translation of a traditional web address to a node ID is not permanent; it is created afresh at each connection request.

A more concrete example to flesh out this scheme: imagine a supermarket supply-chain system for fresh food which values the reliability, traceability and convenience of blockchain transactions.
The supermarket company would constitute one member class by itself. The many distribution and transport companies would constitute a second member class, and the very many-membered farm/garden/orchard/hothouse/smallgoods/abattoirs/seafood/poultry etc. food-producer class would constitute the third.

The farmers and primary producers would require their own standardised DApp – “memclass-3-dapp” – (general enough to accommodate all primary producers’ needs as well as the requirements of the supermarket and distribution networks for that DApp – especially regarding id, quality, origin and timing evidence).

The transport and distribution networks would require their own broad DApp – “memclass-1-dapp” – to cover the scheduling and tracking as well as quality assurance of fresh goods. It could also cover maintenance of vehicles, communication, driving regulations and reporting, and most things required by a transport company. The supermarket and primary producers would have an interest in the workings of this DApp to ensure and protect their own interests.

Finally, the supermarket would have a top-level retailer’s DApp – “memclass-0-dapp” – to handle shipping and all supply & quality problems for fresh produce. This DApp would need to be comprehensive enough to deal with all supply issues for/from any branch or store site, yet be centralised still in the cloud installation database. While communications over phone or text are obviously available, the details which might often be requested between and within companies, are, in a system such as this, largely available securely & automatically to all concerned parties. The system does require adequate input of data to function properly.

To match these different DApps, we create schemata on the common database: one per member class, one IoT schema per member class, and an Oseer schema. The tables and other database objects for the member class owning a schema are contained within that schema (similar to a directory containing other directories and files).

Naturally payments for goods and services would occur on the blockchain and on the databases (and especially in the real-world bank accounts) of the respective members (companies).

Economist Sir Donald Trudge warns that the World, let alone the US, can never repay its debt, and flaws in so-called Modern Monetary Theory call its future sustainability into question (see Sustainability of MMT). In the eventuality of a catastrophic global fiat-currency crash, the Bitcoin/Elastos/Ethereum/Neo token systems could easily and conveniently replace a fiat currency system and bypass the banking system; there is a well-established market in these electronic coins. At such a catastrophic juncture, any suppliers who were not already set up to accept Bitcoin payments would waste no time in changing over, so one could envisage a converted economic payments system in as quick a time as necessary. Convincing governments, and some employers, workers and consumers, to convert wages, benefits and other payments to Bitcoin may be more of a problem; it would be in the interests of each of these groups to do so, however, and this would become increasingly apparent in such times. An important consideration around this time would be the development of extremely simple, secure, single-purpose Bitcoin wallets, to allow some elderly and disabled people who do not use mobile phones to use the Bitcoin system.

The payments system can nevertheless be linked to an existing software installation and leased as a service for members with existing Enterprise IT systems; or the system (with something like the chubba
Block ‘n’ Tackle operating) could be leased as a standalone, catastrophe-proof and comprehensive multi-Enterprise Accounting Package as well.

Within the global database, each member would have their own unique business channel number – an id field identifying the company/member uniquely – and each record in all their schemata, including the IoT and Oseer schemata, carries that business channel id field, in order to separate the members’ data securely. The need for a global properties-and-control schema, in addition to the others, is satisfied by the Oseer class of Schema and DApp.
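
As a sketch of that separation (schema, table and column names invented), each table carries and indexes the channel id:

# Hypothetical: attach the member's business channel id to one table.
docker exec -i pg psql -U postgres your-db-name <<'SQL'
ALTER TABLE memclass_0.ledger_entry
  ADD COLUMN business_channel_id integer NOT NULL DEFAULT 0;
CREATE INDEX ledger_entry_channel_idx
  ON memclass_0.ledger_entry (business_channel_id);
SQL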

In a real scheme there may be a need for a 4th tier/member class (mem-class-2, fitting into the arrangement so far) for the fruit, vegetable, meat, fish and poultry processing markets and plant: abattoirs, smallgoods factories, fish markets, poultry processors, fruit and vegetable markets, etc.

Because electronic sensors, recording and actuation devices will be used, it would also be wise to introduce an IoT Admin DApp and Schema covering all member classes. This layer can process and filter lower-level enterprise IoT data, which can be read directly from the incoming (centralised in the cloud server) data as technical-level IoT information. Correct functioning, Quality and Regulatory Compliance are the main concerns here.

There is also a need for an Administration and Overseer DApp and Schema layer which can perform customer administration and control tasks such as database registration, customer onboarding tasks, general higher level admin, automated business process master-control, etc.

If a national network were involved, one might have to copy the structure onto separate clusters across the country, integrating centrally by continuous message queuing to the headquarters cluster. The queue is enclosed by Elastos Carrier, which guarantees the security of all Elastos communications on the web.

A unitised installation is better than a monolithic one. Whereas Minikube allows only a single node on the cluster, a real installation would be spread across as many clusters as there were separate (say, national) sites, so that taking down the entire system is never necessary. There are also redundancy benefits when multiple nodes can be employed. In such a case a developer might choose to develop directly with kubeadm, kubectl and kubelet (which work together naturally – however, also check out microk8s, with or without Juju charms and multipass) instead of Minikube, so that multiple nodes and clusters may be created.

The assemblage.yaml file on this page would be suitable to apply to a development node for the purposes of working on a more general business-networked case (in the Elastos system). You need as many schemata as there are top-level Backend Apps, and as many DApps as there are member classes, plus an IoT layer (one Schema and DApp) and Overseer layers (one per network, plus a Head Oseer). The only additional architectural requirement is to create and develop multiple schemata within the database, and to configure the webserver to handle this arrangement. The Elastos DApps are coded in the Elastos.Essentials/Ionic development system (see previous page).

This works on Minikube with a single node.

You will have to research and set up your own multi-node cluster using kubeadm etc. After further experience you may decide instead to try microk8s.io as a simpler multi-node platform that still conforms to kubernetes specs and produces “upstream”, cloud-ready, installable kubernetes software. (Don’t install microk8s on the host for multiple nodes – use “multipass” on the host to create workable virtual machines, and install microk8s on the virtual machines, with one planned as the Master Node, which you name ‘primary’.) Microk8s and Multipass are products of Canonical, publisher of the Ubuntu Linux distribution.
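
A rough sketch of that route (VM names and sizes are arbitrary, and flags vary between multipass versions – check ‘multipass launch --help’ and the microk8s clustering documentation):

# Create two Ubuntu VMs; 'primary' is the planned Master Node.
multipass launch --name primary --mem 8G --disk 50G
multipass launch --name worker1 --mem 4G --disk 50G
# Install microk8s inside each VM.
multipass exec primary -- sudo snap install microk8s --classic
multipass exec worker1 -- sudo snap install microk8s --classic
# 'add-node' prints a join command to be run inside each worker VM.
multipass exec primary -- sudo microk8s add-node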

assemblage.yaml

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  labels: 
    type: local
  name: pgsql-voluma
spec: 
  accessModes: 
    - ReadWriteOnce
  capacity: 
    storage: 5Gi
  hostPath: 
    path: "/mnt/data/postgresql"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  labels: 
    type: local
  name: pgsql-file-voluma
spec: 
  accessModes: 
    - ReadWriteOnce
  capacity: 
    storage: 100Mi
  hostPath: 
    path: "/mnt/data/postgresql-file"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  labels: 
    type: local
  name: redis-oseer-voluma
spec: 
  accessModes: 
    - ReadWriteOnce
  capacity: 
    storage: 3.9Gi
  hostPath: 
    path: "/mnt/data/redis-oseer"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  labels: 
    type: local
  name: redis-iot-oseer-voluma
spec: 
  accessModes: 
    - ReadWriteOnce
  capacity: 
    storage: 3.85Gi
  hostPath: 
    path: "/mnt/data/redis-iot-oseer"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  labels: 
    type: local
  name: redis-iot-0-voluma
spec: 
  accessModes: 
    - ReadWriteOnce
  capacity: 
    storage: 3.8Gi
  hostPath: 
    path: "/mnt/data/redis-iot-0"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  labels: 
    type: local
  name: redis-iot-1-voluma
spec: 
  accessModes: 
    - ReadWriteOnce
  capacity: 
    storage: 3.7Gi
  hostPath: 
    path: "/mnt/data/redis-iot-1"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  labels: 
    type: local
  name: redis-iot-2-voluma
spec: 
  accessModes: 
    - ReadWriteOnce
  capacity: 
    storage: 3.6Gi
  hostPath: 
    path: "/mnt/data/redis-iot-2"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  labels: 
    type: local
  name: redis-iot-3-voluma
spec: 
  accessModes: 
    - ReadWriteOnce
  capacity: 
    storage: 3.5Gi
  hostPath: 
    path: "/mnt/data/redis-iot-3"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  labels: 
    type: local
  name: red-mclass-0-voluma
spec: 
  accessModes: 
    - ReadWriteOnce
  capacity: 
    storage: 3.4Gi
  hostPath: 
    path: "/mnt/data/red-mclass-0"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  labels: 
    type: local
  name: red-mclass-1-voluma
spec: 
  accessModes: 
    - ReadWriteOnce
  capacity: 
    storage: 3.3Gi
  hostPath: 
    path: "/mnt/data/red-mclass-1"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  labels: 
    type: local
  name: red-mclass-2-voluma
spec: 
  accessModes: 
    - ReadWriteOnce
  capacity: 
    storage: 3.2Gi
  hostPath: 
    path: "/mnt/data/red-mclass-2"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolume
metadata: 
  labels: 
    type: local
  name: red-mclass-3-voluma
spec: 
  accessModes: 
    - ReadWriteOnce
  capacity: 
    storage: 3.1Gi
  hostPath: 
    path: "/mnt/data/red-mclass-3"
  storageClassName: local-storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgsql-app-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgsql-app-file-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-oseer-app-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3.9Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-iot-oseer-app-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3.85Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-iot-0-app-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3.8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-iot-1-app-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3.7Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-iot-2-app-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3.6Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-iot-3-app-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3.5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: red-mclass-0-app-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3.4Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: red-mclass-1-app-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3.3Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: red-mclass-2-app-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3.2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: red-mclass-3-app-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3.1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  ports:
  - port: 5432
    name: your-app
  clusterIP: None
  selector:
    app: postgres
---
apiVersion: v1
kind: Service
metadata:
  name: haskell
  labels:
    app: haskell
spec:
  type: LoadBalancer
  selector:
    app: haskell
  ports:
    - name: webserve-https
      protocol: TCP
      port: 3000
      targetPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: redis-oseer
  labels:
    app: redis-oseer
spec:
  type: LoadBalancer
  selector:
    app: redis-oseer
  ports:
    - name: redis-oseer
      protocol: TCP
      port: 6379
      targetPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-iot-oseer
  labels:
    app: redis-iot-oseer
spec:
  type: LoadBalancer
  selector:
    app: redis-iot-oseer
  ports:
    - name: redis-iot-oseer
      protocol: TCP
      port: 6379
      targetPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-iot-0
  labels:
    app: redis-iot-0
spec:
  type: LoadBalancer
  selector:
    app: redis-iot-0
  ports:
    - name: redis-iot-0
      protocol: TCP
      port: 6379
      targetPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-iot-1
  labels:
    app: redis-iot-1
spec:
  type: LoadBalancer
  selector:
    app: redis-iot-1
  ports:
    - name: redis-iot-1
      protocol: TCP
      port: 6379
      targetPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-iot-2
  labels:
    app: redis-iot-2
spec:
  type: LoadBalancer
  selector:
    app: redis-iot-2
  ports:
    - name: redis-iot-2
      protocol: TCP
      port: 6379
      targetPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-iot-3
  labels:
    app: redis-iot-3
spec:
  type: LoadBalancer
  selector:
    app: redis-iot-3
  ports:
    - name: redis-iot-3
      protocol: TCP
      port: 6379
      targetPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: red-mclass-0
  labels:
    app: red-mclass-0
spec:
  type: LoadBalancer
  selector:
    app: red-mclass-0
  ports:
    - name: red-mclass-0
      protocol: TCP
      port: 6379
      targetPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: red-mclass-1
  labels:
    app: red-mclass-1
spec:
  type: LoadBalancer
  selector:
    app: red-mclass-1
  ports:
    - name: red-mclass-1
      protocol: TCP
      port: 6379
      targetPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: red-mclass-2
  labels:
    app: red-mclass-2
spec:
  type: LoadBalancer
  selector:
    app: red-mclass-2
  ports:
    - name: red-mclass-2
      protocol: TCP
      port: 6379
      targetPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: red-mclass-3
  labels:
    app: red-mclass-3
spec:
  type: LoadBalancer
  selector:
    app: red-mclass-3
  ports:
    - name: red-mclass-3
      protocol: TCP
      port: 6379
      targetPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: elastos-blockchains
  labels:
    app: elastos-blockchains
spec:
  type: LoadBalancer
  selector:
    app: elastos-blockchains
  ports:
    - name: mainchain-1
      protocol: TCP
      port: 21334
      targetPort: 21334
    - name: mainchain-2-1
      protocol: TCP
      port: 20333
      targetPort: 20333
    - name: mainchain-2-2
      protocol: TCP
      port: 20334
      targetPort: 20334
    - name: mainchain-2-3
      protocol: TCP
      port: 20335
      targetPort: 20335
    - name: mainchain-2-4
      protocol: TCP
      port: 20336
      targetPort: 20336
    - name: mainchain-2-5
      protocol: TCP
      port: 20337
      targetPort: 20337
    - name: mainchain-2-6
      protocol: TCP
      port: 20338
      targetPort: 20338
    - name: mainchain-2-7
      protocol: TCP
      port: 20339
      targetPort: 20339
    - name: mainchain-3
      protocol: TCP
      port: 21336
      targetPort: 21336
    - name: didchain-1
      protocol: TCP
      port: 21604
      targetPort: 21604
    - name: didchain-2-1
      protocol: TCP
      port: 20604
      targetPort: 20604
    - name: didchain-2-2
      protocol: TCP
      port: 20605
      targetPort: 20605
    - name: didchain-2-3
      protocol: TCP
      port: 20606
      targetPort: 20606
    - name: didchain-2-4
      protocol: TCP
      port: 20607
      targetPort: 20607
    - name: didchain-2-5
      protocol: TCP
      port: 20608
      targetPort: 20608
    - name: didchain-3
      protocol: TCP
      port: 21606
      targetPort: 21606
    - name: tokenchain-1
      protocol: TCP
      port: 21614
      targetPort: 21614
    - name: tokenchain-2-1
      protocol: TCP
      port: 20614
      targetPort: 20614
    - name: tokenchain-2-2
      protocol: TCP
      port: 20615
      targetPort: 20615
    - name: tokenchain-2-3
      protocol: TCP
      port: 20616
      targetPort: 20616
    - name: tokenchain-2-4
      protocol: TCP
      port: 20617
      targetPort: 20617
    - name: tokenchain-2-5
      protocol: TCP
      port: 20618
      targetPort: 20618
    - name: tokenchain-3
      protocol: TCP
      port: 21616
      targetPort: 21616
    - name: ethchain-1
      protocol: TCP
      port: 20635
      targetPort: 20635
    - name: ethchain-2
      protocol: TCP
      port: 20638
      targetPort: 20638
    - name: ethchain-3
      protocol: TCP
      port: 21634
      targetPort: 21634
    - name: ethchain-4
      protocol: TCP
      port: 21636
      targetPort: 21636
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: your-app
spec:
  selector:
    matchLabels:
      app: postgres
  serviceName: "postgres"
  replicas: 2
  template:
    metadata:
      labels:
        app: postgres
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: postgres
        imagePullPolicy: IfNotPresent
        image: postgres:alpine
        ports:
        - containerPort: 5432
          name: your-app
        volumeMounts:
        - name: pgsql-voluma
          mountPath: /var/lib/postgresql/data/
        - name: pgsql-file-voluma
          mountPath: /data/your-db-backup.sql
          subPath: your-db-backup.sql
        env:
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: your-postgres-key-kf6d9tgt49
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: your-postgres-key-kf6d9tgt49
              key: password
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              name: your-postgres-key-kf6d9tgt49
              key: database
      volumes:
        - name: pgsql-voluma          
          persistentVolumeClaim:
            claimName: pgsql-app-claim
        - name: pgsql-file-voluma         
          persistentVolumeClaim:
            claimName: pgsql-app-file-claim
      imagePullSecrets:
        - name: your-docker-key
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haskell
spec:
  replicas: 2
  selector:
    matchLabels:
      app: haskell
  template:
    metadata:
      labels:
        app: haskell
    spec:
      containers:
        - name: postgrest-app
          imagePullPolicy: IfNotPresent
          image: postgrest/postgrest:your-tag  # re-tagged locally, as described below
          env:
            - name: PGRST_DB_URI  # the PostgREST image reads PGRST_-prefixed env vars
              valueFrom:
                secretKeyRef:
                  name: your-haskell-key-8b2hhmbm77
                  key: db-uri
            - name: PGRST_DB_SCHEMA
              valueFrom:
                secretKeyRef:
                  name: your-haskell-key-8b2hhmbm77
                  key: db-schema
            - name: PGRST_DB_ANON_ROLE
              valueFrom:
                secretKeyRef:
                  name: your-haskell-key-8b2hhmbm77
                  key: db-anon-role
          ports:
          - containerPort: 3000
            name: webserve-https
      imagePullSecrets:
        - name: your-docker-key
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-oseer
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis-oseer
  template:
    metadata:
      labels:
        app: redis-oseer
    spec:
      containers:
        - name: redis-oseer-app
          imagePullPolicy: IfNotPresent
          image: redis:alpine
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
          - containerPort: 6379
            name: redis-oseer
          volumeMounts:
            - mountPath: /data
              name: redis-oseer-voluma
      volumes:
        - name: redis-oseer-voluma          
          persistentVolumeClaim:
            claimName: redis-oseer-app-claim
      imagePullSecrets:
        - name: your-docker-key
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-iot-oseer
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis-iot-oseer
  template:
    metadata:
      labels:
        app: redis-iot-oseer
    spec:
      containers:
        - name: redis-iot-oseer-app
          imagePullPolicy: IfNotPresent
          image: redis:alpine
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
          - containerPort: 6379
            name: redis-iot-oseer
          volumeMounts:
            - mountPath: /data
              name: redis-iot-oseer-voluma
      volumes:
        - name: redis-iot-oseer-voluma          
          persistentVolumeClaim:
            claimName: redis-iot-oseer-app-claim
      imagePullSecrets:
        - name: your-docker-key
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-iot-0
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis-iot-0
  template:
    metadata:
      labels:
        app: redis-iot-0
    spec:
      containers:
        - name: redis-iot-0-app
          imagePullPolicy: IfNotPresent
          image: redis:alpine
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
          - containerPort: 6379
            name: redis-iot-0
          volumeMounts:
            - mountPath: /data
              name: redis-iot-0-voluma
      volumes:
        - name: redis-iot-0-voluma          
          persistentVolumeClaim:
            claimName: redis-iot-0-app-claim
      imagePullSecrets:
        - name: your-docker-key
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-iot-1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis-iot-1
  template:
    metadata:
      labels:
        app: redis-iot-1
    spec:
      containers:
        - name: redis-iot-1-app
          imagePullPolicy: IfNotPresent
          image: redis:alpine
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
          - containerPort: 6379
            name: redis-iot-1
          volumeMounts:
            - mountPath: /data
              name: redis-iot-1-voluma
      volumes:
        - name: redis-iot-1-voluma          
          persistentVolumeClaim:
            claimName: redis-iot-1-app-claim
      imagePullSecrets:
        - name: your-docker-key
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-iot-2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis-iot-2
  template:
    metadata:
      labels:
        app: redis-iot-2
    spec:
      containers:
        - name: redis-iot-2-app
          imagePullPolicy: IfNotPresent
          image: redis:alpine
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
          - containerPort: 6379
            name: redis-iot-2
          volumeMounts:
            - mountPath: /data
              name: redis-iot-2-voluma
      volumes:
        - name: redis-iot-2-voluma          
          persistentVolumeClaim:
            claimName: redis-iot-2-app-claim
      imagePullSecrets:
        - name: your-docker-key
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-iot-3
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis-iot-3
  template:
    metadata:
      labels:
        app: redis-iot-3
    spec:
      containers:
        - name: redis-iot-3-app
          imagePullPolicy: IfNotPresent
          image: redis:alpine
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
          - containerPort: 6379
            name: redis-iot-3
          volumeMounts:
            - mountPath: /data
              name: redis-iot-3-voluma
      volumes:
        - name: redis-iot-3-voluma          
          persistentVolumeClaim:
            claimName: redis-iot-3-app-claim
      imagePullSecrets:
        - name: your-docker-key
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: red-mclass-0
spec:
  replicas: 2
  selector:
    matchLabels:
      app: red-mclass-0
  template:
    metadata:
      labels:
        app: red-mclass-0
    spec:
      containers:
        - name: red-mclass-0-app
          imagePullPolicy: IfNotPresent
          image: redis:alpine
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
          - containerPort: 6379
            name: red-mclass-0
          volumeMounts:
            - mountPath: /data
              name: red-mclass-0-voluma
      volumes:
        - name: red-mclass-0-voluma          
          persistentVolumeClaim:
            claimName: red-mclass-0-app-claim
      imagePullSecrets:
        - name: your-docker-key
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: red-mclass-1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: red-mclass-1
  template:
    metadata:
      labels:
        app: red-mclass-1
    spec:
      containers:
        - name: red-mclass-1-app
          imagePullPolicy: IfNotPresent
          image: redis:alpine
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
          - containerPort: 6379
            name: red-mclass-1
          volumeMounts:
            - mountPath: /data
              name: red-mclass-1-voluma
      volumes:
        - name: red-mclass-1-voluma          
          persistentVolumeClaim:
            claimName: red-mclass-1-app-claim
      imagePullSecrets:
        - name: your-docker-key
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: red-mclass-2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: red-mclass-2
  template:
    metadata:
      labels:
        app: red-mclass-2
    spec:
      containers:
        - name: red-mclass-2-app
          imagePullPolicy: IfNotPresent
          image: redis:alpine
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
          - containerPort: 6379
            name: red-mclass-2
          volumeMounts:
            - mountPath: /data
              name: red-mclass-2-voluma
      volumes:
        - name: red-mclass-2-voluma          
          persistentVolumeClaim:
            claimName: red-mclass-2-app-claim
      imagePullSecrets:
        - name: your-docker-key
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: red-mclass-3
spec:
  replicas: 2
  selector:
    matchLabels:
      app: red-mclass-3
  template:
    metadata:
      labels:
        app: red-mclass-3
    spec:
      containers:
        - name: red-mclass-3-app
          imagePullPolicy: IfNotPresent
          image: redis:alpine
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
          - containerPort: 6379
            name: red-mclass-3
          volumeMounts:
            - mountPath: /data
              name: red-mclass-3-voluma
      volumes:
        - name: red-mclass-3-voluma          
          persistentVolumeClaim:
            claimName: red-mclass-3-app-claim
      imagePullSecrets:
        - name: your-docker-key
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastos-blockchains
spec:
  replicas: 2
  selector:
    matchLabels:
      app: elastos-blockchains
  template:
    metadata:
      labels:
        app: elastos-blockchains
    spec:
      containers:
        - name: mainchain
          imagePullPolicy: IfNotPresent
          image: cyberrepublic/elastos-mainchain-node:v0.3.7
          ports:
          - containerPort: 21334
            name: mainchain-1
          - containerPort: 20333
            name: mainchain-2-1
          - containerPort: 20334
            name: mainchain-2-2
          - containerPort: 20335
            name: mainchain-2-3
          - containerPort: 20336
            name: mainchain-2-4
          - containerPort: 20337
            name: mainchain-2-5
          - containerPort: 20338
            name: mainchain-2-6
          - containerPort: 20339
            name: mainchain-2-7
          - containerPort: 21336
            name: mainchain-3
        - name: sidechaindid
          imagePullPolicy: IfNotPresent
          image: cyberrepublic/elastos-sidechain-did-node:v0.1.2
          ports:
          - containerPort: 21604
            name: didchain-1 
          - containerPort: 20604
            name: didchain-2-1
          - containerPort: 20605
            name: didchain-2-2
          - containerPort: 20606
            name: didchain-2-3
          - containerPort: 20607
            name: didchain-2-4
          - containerPort: 20608
            name: didchain-2-5
          - containerPort: 21606
            name: didchain-3
        - name: sidechaintoken
          imagePullPolicy: IfNotPresent
          image: cyberrepublic/elastos-sidechain-token-node:v0.1.2
          ports:
          - containerPort: 21614
            name: tokenchain-1
          - containerPort: 20614
            name: tokenchain-2-1
          - containerPort: 20615
            name: tokenchain-2-2
          - containerPort: 20616
            name: tokenchain-2-3
          - containerPort: 20617
            name: tokenchain-2-4
          - containerPort: 20618
            name: tokenchain-2-5
          - containerPort: 21616
            name: tokenchain-3
        - name: sidechaineth
          imagePullPolicy: IfNotPresent
          image: cyberrepublic/elastos-sidechain-eth-node:latest
          ports:
          - containerPort: 20635
            name: ethchain-1
          - containerPort: 20638
            name: ethchain-2
          - containerPort: 21634
            name: ethchain-3
          - containerPort: 21636
            name: ethchain-4
            

The redis:alpine, postgres:alpine and postgrest/postgrest images are publicly available on the docker registry, together with the 4 Elastos blockchain images (available initially via the Elastos Tools & Environment link (see below), or directly by copying the details here into your yaml – after docker-pulling the images and caching them in minikube).

You make each image you require available locally by ‘docker pull’-ing it. You then issue ‘minikube cache add (repo_name/)image-name:tag’, then ‘minikube cache reload’. Minikube must be running first. See below.

You need to repeat this for each image you use, to make it available locally and reduce network usage. Public repo images such as cyberrepublic’s Elastos images and the redis and postgres server images should not be altered when committing, so their public repo image names are retained. Any “latest” image tags should be re-tagged in docker first [docker tag (repo_name/)image-name:latest (repo_name/)image-name:your-tag], with the caution that the Elastos system appears not to work unless you cache the “sidechain-eth” image as “latest”.

It is also necessary to have your own valid database image built, with a primary key defined for each table and no composite primary keys – with the exception that each join table requires exactly one composite primary key, made from the primary keys of the 2 tables being joined (and no other fields). You define every column in every table so they all may work cooperatively. Hardly trivial!

You would also need to ‘associate’ different pairs of tables by creating foreign keys (in source tables) pointing to the primary keys of the target tables.

You should be familiar with the processes of ‘Database Normalisation’ to ensure your databases can do what you intend. At first glance, Database Normalisation can appear more complex than necessary for the relatively straightforward needs of a typical Enterprise Database; however, coverage of First, Second and Third Normal Form is recommended.

Note that you need to create a Docker account to obtain access to the docker image system and to have your own repos.

It is also recommended to use the google search engine liberally; remember to copy and paste error messages, as well as typing well-phrased questions, directly into the engine (ie. the google search website – eg www.google.com.au).

Activity & Command Sequence:

Install Docker and, for flexibility in development, consider installing Docker-Compose (not necessary here). Install minikube and kubectl. You require 32GB RAM and a 250GB SSD. A second hard disk on board helps keep your files safe in times of disaster: you can perform all work from a second ‘dead’ disk safely, after mounting the disk. Your repos will be lost if your main disk falls over for whatever reason, so remember to at least keep .sql backups of your database on the second disk. These backups become the source for restoring the database structure into your stateful set (into each pod individually, after you have developed the database in pgadmin4 and docker). In fact, it pays to keep all your working files on the HDD for safety and security.
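
For example (paths and names are placeholders), a backup from the docker development container straight onto the mounted second disk might be:

# Dump the development database to the HDD.
mkdir -p /mnt/sdb1/backups
docker exec pg pg_dump -U postgres your-db-name > /mnt/sdb1/backups/your-db-backup.sql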

You need to create the postgres (with login, password and superuser), web_anon (cannot log in) and client_registry_user-x (with login and password) roles correctly, although the database will initially be owned on Kubernetes by root, with no other roles present at first. Study the Postgresql documentation – we all had to – and realise that Alpine Linux is a minimal environment, a little different from Ubuntu: eg the command to add the nano package is ‘apk add nano’. Note: it is simpler, and often avoids problems, if you set up your host computer with the user ‘postgres’, running minikube as that user. If you were to purchase a twin-disk Extreme Gaming desktop computer (laptops are also possible), with 32GB RAM and one 250GB SSD plus an HDD, you would be happy. You need to turn swap memory off, and automount the HDD:

 (on Ubuntu)
 1.
sudo swapoff -a
 
 (- skip to step 3 if second disk is already mounted)

 2. Look up the device representing the second disk and find the "uuid", and add a line to your /etc/fstab as follows:

sudo fdisk -l : lists your disk drives. Note the drive label.

sudo blkid : lists uuid's of each disk drive and block storage device - locate the relevant uuid.

sudo mkdir -p /mnt/sdb1  (for example)

 Edit the fstab file

sudo nano /etc/fstab

 add the following line at the bottom of the file:
UUID=your-UUID /mnt/sdb1    auto nosuid,nodev,nofail,x-gvfs-show 0 0

 Save and exit editor. Then:

sudo mount -a  

 : and if, and only if, there are no errors
 reported,
 you are set to go.
 Otherwise you need to do it until you get it right, 
 because your boot process may suffer.

 3.

 get kubectl

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

 make it executable

chmod +x ./kubectl

sudo mv ./kubectl /usr/local/bin/kubectl

 check it

kubectl version --client

 install minikube

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

chmod +x minikube

sudo mkdir -p /usr/local/bin/

sudo install minikube /usr/local/bin/

minikube start

 if error - fix error (eg sudoer usage) and/or 

minikube stop
minikube delete
minikube start

 (until minikube starts with no errors. You may simply 
 repeat 'minikube start' continuously, in fact after any 
 successful 'minikube delete')

kubectl create secret docker-registry your-docker-auth-secret-key-name --docker-username=your-docker-username --docker-password=your-docker-password --docker-email=your-email

 (You will also need to have set up a kustomization.yaml - 

nano kustomization.yaml

 for your postgres and haskell secrets and issue

kubectl apply -k .

 on your own host computer, in the directory containing 
 the file. Google 'kustomization.yaml'.)
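
 A minimal sketch of such a kustomization.yaml - all values are
 placeholders. Note that kustomize appends a content hash to each
 generated secret name, which is why names like
 your-postgres-key-kf6d9tgt49 appear in assemblage.yaml; check the
 generated names with 'kubectl get secrets' and edit assemblage.yaml
 to match:

cat > kustomization.yaml <<'EOF'
secretGenerator:
- name: your-postgres-key
  literals:
  - username=postgres
  - password=change-me
  - database=your-db-name
- name: your-haskell-key
  literals:
  - db-uri=postgres://postgres:change-me@postgres:5432/your-db-name
  - db-schema=public
  - db-anon-role=web_anon
EOF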

minikube ssh
 to get into minikube host and create Volumes.

sudo mkdir -p /mnt/data/postgresql
sudo mkdir -p /mnt/data/postgresql-file
sudo mkdir -p /mnt/data/redis-oseer
sudo mkdir -p /mnt/data/redis-iot-oseer
sudo mkdir -p /mnt/data/redis-iot-0
sudo mkdir -p /mnt/data/redis-iot-1
sudo mkdir -p /mnt/data/redis-iot-2
sudo mkdir -p /mnt/data/redis-iot-3
sudo mkdir -p /mnt/data/red-mclass-0
sudo mkdir -p /mnt/data/red-mclass-1
sudo mkdir -p /mnt/data/red-mclass-2
sudo mkdir -p /mnt/data/red-mclass-3
exit

 Copy necessary file to minikube host:
 (NOTE: Your docker IP address may differ from this - check it with 'minikube ip'.)
scp -i ~/.minikube/machines/minikube/id_rsa -r path/to/your-db-backup.sql docker@172.17.0.3:/home/docker

minikube ssh
 to get into minikube host and copy files to host volume 
 directories.
cd /home/docker
sudo cp your-db-backup.sql /mnt/data/postgresql-file/
exit

 Assuming you have done:

sudo usermod -aG docker $USER
newgrp docker

 Test your status as ordinary user and see "Hello World" if correct.

docker run hello-world

 Repeat above if error.

docker pull postgres:alpine
docker pull redis:alpine
docker pull postgrest/postgrest

 and

docker tag postgrest/postgrest:latest postgrest/postgrest:your-tag
 
 and

docker pull cyberrepublic/elastos-mainchain-node:v0.3.7
docker pull cyberrepublic/elastos-sidechain-did-node:v0.1.2
docker pull cyberrepublic/elastos-sidechain-token-node:v0.1.2
docker pull cyberrepublic/elastos-sidechain-eth-node:latest

 Next, 

minikube cache add postgres:alpine
...
...

 and thus for each image you just pulled..., then

minikube cache reload
minikube cache list

 and you should see your images.

kubectl apply -f path/to/assemblage.yaml
 - Note that you must edit the above assemblage.yaml to alter the
 database backup file name, the database Stateful Set and Port names 
 (don't forget to edit the Service spec Port name for the database 
 at the same time), and the 3 key names wherever they occur (docker 
 key, postgres and haskell keys) to your own values.

 Examine results of deployment:

minikube dashboard
 (Ctrl-C to close. Also remember to "minikube stop" followed by "minikube delete" if you need to start again.)

 When the lights are all green (except non-system or headless
 Services, which will show as down): (new terminal)
 
minikube tunnel

 to stand up IP Address Services so that you can access the pods
 from your home/office/school network and the Elastos Trinity/Ionic
 DApps. You would then need to set up the database roles in the
 containers, restore the databases in each container, then edit
 pg_hba.conf to specifically allow access from the redis replica set 
 and the haskell webserver replica set. 

 After all items on the dashboard are running and showing "green",
 you enter the primary your-database-0 container (in a new terminal)

kubectl exec -it your-database-0 -- sh

 - you will enter as 'root', and you issue 

psql your-db-name 

 and create the postgres role (as superuser, with login and password
 matching your database's), web_anon (cannot log in) and a
 client_registry_oseer role (with login and password) as required for
 your database.
 There need to be 9 other client-users to cover the other schemata
 (including iot-class-oseer, iot-class-0-user through -3-user, and memclass-0-user through memclass-3-user).
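
 Equivalently, you could create the roles from the host in one pass -
 a sketch only: passwords are placeholders, and hyphenated role names
 must be double-quoted in SQL.

kubectl exec -i your-database-0 -- psql your-db-name <<'SQL'
CREATE ROLE postgres WITH SUPERUSER LOGIN PASSWORD 'change-me';
CREATE ROLE web_anon NOLOGIN;
CREATE ROLE client_registry_oseer WITH LOGIN PASSWORD 'change-me';
CREATE ROLE "memclass-0-user" WITH LOGIN PASSWORD 'change-me';
-- ...and likewise for memclass-1..3, iot-class-oseer and iot-class-0..3
SQL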
 Then edit (get "nano" with "apk add nano" until it works!) the pg_hba.conf in your
 
/var/lib/postgresql/data

 directory so as to grant host-based access to the
 webserver (haskell) and all redis deployments. 

 Repeat this procedure in the your-database-1 container, with the
 exception that the system file pg_hba.conf will already have been
 edited, as a replica. Make sure that you at least commence with
 'create role postgres with superuser;'. If the database at
 your-database-1 protests that some items already exist when you
 attempt to create other roles and, later, restore that database
 container, you may simply delete the instance in the dashboard, and
 it will be recreated, having already copied the new your-database-0
 database details. Simple! Exit your-database-1 after completing, and
 re-enter your-database-0, then restore your database
 from sql backup with

psql your-db-name < data/your-db-backup.sql

 and remember you may delete the your-database-1 container
  (either using kubectl, or in the dashboard)
 in order to obtain a shiny new replica of your-database-0.
 Exit your-database-0, enter your-database-1,
 and restore the database in that container. See the above note in case
 of error - ie simply delete the container and wait for kubernetes to
 recreate a replica of the your-database-0 container, already set up.

 You will have to configure and program the Redis servers
 independently within their running containers.  Happy researching.

 You can see details of the deployment pods (Replica Sets) and
 database pods (Stateful Set) with:

kubectl get pods
 and 
kubectl describe pods

IoT development in Elastos currently occurs in Python and uses a system called Hyperconnect. Hyperconnect requires a running Elastos Carrier Node on the host (which you compile from source), and the java command to finally run Hyperconnect must be issued as root (sudo). Hyperconnect provides Technical Administration and Control Panels; however, given the current state of Hyperconnect’s development, the system appears really only intended for something no more complex than a home IoT system, with very simple and limited gui-only reporting. As such, ITCSA has turned towards an IoT system based on traditional Centralised Server Architecture.

In your Ionic DApp (not Hyperconnect), in the assets folder (in both react and angular versions), after project initialisation and compilation you will see a file called manifest.json. You need to enable connections to your database installation there; see the sample manifest.json below. Note that in the real set-up the manifest should exclude the redis entries which do not apply to that particular DApp – ie only one redis entry per DApp/Schema. This enables the manifest.json of the user’s DApp to route users to the correct redis server for their schema, as the redis group of servers has no more convenient way of discerning users. Users’ schemata are distinguished on the webserver/database by the user’s ‘search_path’ in postgres (see above). This also means defining the search_path (schema name) for every new user at registration time, so that the postgres server may route each request to the intended schema. This is also why a user may only access one schema: ‘public’ if the search_path is unspecified, or the schema specified in search_path.
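
For example (role and schema names hypothetical), a user’s home schema is pinned once, at registration:

# Route all of this user's queries to their own schema by default.
kubectl exec -i your-database-0 -- psql your-db-name <<'SQL'
ALTER ROLE "memclass-0-user" SET search_path = memclass_0;
SQL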

Currently, setting up DApp login on Elastos appears error-prone. You can develop in either Angular or React (both Ionic), but neither has a functioning model published for Elastos logins. However, we are looking into Tuum Technologies as an existing trusted provider of DID DApp login templates. Registrations/logins on Elastos could then occur inside the DApp, linked behind the scenes to any of the available ways of obtaining and verifying an Elastos DID. You may continue developing a front end with no login, or even with a ‘Mickey Mouse’ login. ‘Claytons’ also produce a very good login!!

{
  "id": "au.com.acompany.ourdapp",
  "version": "0.0.1",
  "version_code": 1,
  "name": "Our DApp",
  "short_name": "Our DApp",
  "short_description": "A Result of People's Open-Sourced Dreams!",
  "description": "Fully Networked Inter-Enterprise Trading and Supply Chain DApp for your Company Networks: Equally, a model for any General Networked Inter-Enterprise System",
  "start_url": "index.html",
  "type": "app",
  "start_visible": "hide",
  "category": "finance",
  "icons": [
    {
      "src": "assets/images/logo.png",
      "sizes": "512x512",
      "type": "image/png"
    },
    {
      "src": "assets/images/logo.png",
      "sizes": "128x128",
      "type": "image/png"
    }
  ],
  "author": {
    "name": "Joe E. Citizen",
    "email": "joe@acompany.com.au",
    "website": "www.acompany.com.au"
  },
  "default_locale": "en",
  "plugins": [
    "Device",
    "NetworkStatus",
    "File"
  ],
  "urls": [
    "haskell:webserve-https/*",
    "redis-oseer:redis-oseer/*",
    "red-mclass-0:red-mclass-0/*",
    "red-mclass-1:red-mclass-1/*",
    "red-mclass-2:red-mclass-2/*",
    "red-mclass-3:red-mclass-3/*",
    "redis-iot-oseer:redis-iot-oseer/*",
    "redis-iot-0:redis-iot-0/*",
    "redis-iot-1:redis-iot-1/*",
    "redis-iot-2:redis-iot-2/*",
    "redis-iot-3:redis-iot-3/*",
    "elastos-blockchains:mainchain-1/*",
    "elastos-blockchains:mainchain-2-1/*",
    "elastos-blockchains:mainchain-2-2/*",
    "elastos-blockchains:mainchain-2-3/*",
    "elastos-blockchains:mainchain-2-4/*",
    "elastos-blockchains:mainchain-2-5/*",
    "elastos-blockchains:mainchain-2-6/*",
    "elastos-blockchains:mainchain-2-7/*",
    "elastos-blockchains:mainchain-3/*",
    "elastos-blockchains:didchain-1/*",
    "elastos-blockchains:didchain-2-1/*",
    "elastos-blockchains:didchain-2-2/*",
    "elastos-blockchains:didchain-2-3/*",
    "elastos-blockchains:didchain-2-4/*",
    "elastos-blockchains:didchain-2-5/*",
    "elastos-blockchains:didchain-3/*",
    "elastos-blockchains:tokenchain-1/*",
    "elastos-blockchains:tokenchain-2-1/*",
    "elastos-blockchains:tokenchain-2-2/*",
    "elastos-blockchains:tokenchain-2-3/*",
    "elastos-blockchains:tokenchain-2-4/*",
    "elastos-blockchains:tokenchain-2-5/*",
    "elastos-blockchains:tokenchain-3/*",
    "elastos-blockchains:ethchain-1/*",
    "elastos-blockchains:ethchain-2/*",
    "elastos-blockchains:ethchain-3/*",
    "elastos-blockchains:ethchain-4/*"
  ],
  "background_color": "#4e8ef7",
  "theme_display": "show",
  "theme_color": "#4e8ef7"
}

You will have to add here, in manifest.json, every website and webservice your customers need to connect with – ie 'whitelist' them – eg. banks; an entry such as "https://www.examplebank.com.au/*" (illustrative only) would be appended to the urls array. No other websites ('blacklisted') will be accessible to your DApps by any means.


You can copy a basic set of Enterprise financial and accounting tables and functions from a base schema into every schema you need: keep them in the default 'public' schema in postgres, dump that schema, and edit the dump to search-and-replace "public" with the new schema name. You then restore the database (using psql as usual) and the new schema is added alongside the unaltered public schema. Each new schema then needs its journal/ledger and other fields refactored to suit the Member Class (or IoT DApp) concerned. Note that an IoT report of an event becomes a transaction at the cloud level, when the incoming centralised data is forwarded to a blockchain transaction and then into the main system, although it involves no consideration other than data and trust. One would probably use a multi-party, multi-ledger system to record IoT transactions corresponding to the stakeholders in an event.
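A minimal sketch of that clone cycle, assuming the new schema is to be called mclass_1 in database yourdb (names illustrative; review the edited dump by hand before restoring, since "public" can occur in contexts other than the schema name):

# dump only the public schema
pg_dump -U postgres --schema=public yourdb > public_schema.sql
# crude rename – inspect the result before trusting it
sed 's/\bpublic\b/mclass_1/g' public_schema.sql > mclass_1_schema.sql
# create the target schema, then restore into it
psql -U postgres -d yourdb -c "CREATE SCHEMA mclass_1;"
psql -U postgres -d yourdb -f mclass_1_schema.sql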

You could also consider structuring the public schema to handle any erroneous requests to that schema (none are expected).

Redis will help speed transactions if set up and configured well. This requires one replicated instance of Redis per Member Class Schema (here, 4 replicated Redis Pods), plus as many again for the IoT networks, and one more for the Oseer Set. The method used to differentiate between the schemas at runtime involves postgres 'search paths' associated with each user's home schema. A user must be unique across the entire Redis/Postgres system and able to access only one Redis server; create other users for other schemas if necessary. It is probably not a good idea to connect to your Redis system as root or postgres except when configuring and developing – we are taking bets on whether every root or postgres connection from a front-end GUI would end up in the default public schema with this Redis system operating. The method used to route user requests to the correct Redis server for their schema is described above, just before the sample manifest.json file. Considerable effort would need to be spent on programming the Redis key-value cache datastores, which is beyond the scope of this article; for a high-load environment, though, the effort will be repaid in performance improvements.

Note: For a more unitised set-up, the IoT functions might be separated from the non-IoT functions, creating a second, IoT-only node with its own postgres database alongside a simplified main node. You would need to develop using kubectl, kubeadm, and kubelet (or microk8s with multipass), not Minikube; a minimal sketch of the microk8s route is given below, followed by the schematics for the 2 nodes shown as separate installations. One would still only require a single Master Node and a single Cluster.
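A minimal sketch of bootstrapping such a cluster with microk8s (commands per the upstream snap packaging; the add-ons chosen are illustrative):

# install and start microk8s on the host
sudo snap install microk8s --classic
microk8s status --wait-ready
# enable the basics a database-backed deployment needs
microk8s enable dns storage
# then apply your assemblage as before
microk8s kubectl apply -f assemblage.yaml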

[Schematic: the non-IoT (main) node – General Networks]

There are N+1 independent General Networks, labeled 0, 1, .., n, n+1, .., N. The nth network has M(n)+1 member-classes, labeled (n,0), (n,1), .., (n,m), (n,m+1), .., (n,M(n)). There are 3 'extra' member-classes/schemas covering the non-inter-networked, Real Estate Property-Based DApps/Schemas (a-CHEIRRS, b-ChubbaMorris and c-convey-IT). Provision is made for f(i) (= F(i)/2 – see fig.) future non-internetworked schemas (ie one overseer and one main schema each); the Redis servers for these overseers and mains are hidden. There would be j(i) members in each of these single-member-class DApps, possibly with distinct (tailored) DApps. Within the CHEIRRS member-class, and possibly within future DApps, each member has their own tailored DApp, despite there being only a single schema per member-class globally. In real production and development, each network occupies its own node-pair (non-IoT and IoT); in addition, the Head Overseer system sits on the Master Node. The labeling and numbering systems here and below represent the view of operations of the Head Overseer.
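In set notation (a restatement of the labeling above, not something taken from the figure), the internetworked member-classes are indexed by:

\[
\{\,(n,m) : 0 \le n \le N,\ 0 \le m \le M(n)\,\}, \qquad \text{a total of } \sum_{n=0}^{N} \bigl(M(n)+1\bigr) \text{ member-classes.}
\]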

[Schematic: the IoT node – General IoT Networks]

There are N+2 independent General IoT Networks, labeled 0, 1, .., n, n+1, .., N and A. The nth network (corresponding to the non-IoT nth network) has M(n)+1 member-classes, labeled (n,0), (n,1), .., (n,m), (n,m+1), .., (n,M(n)), as for the non-IoT node. Network A belongs to the CHEIRRS Schema and DApps, and foresees a need for IoT device networking, recording and control in Social and Affordable Housing. This network has members 0 -> M(A) (ie M(A)+1 'sub'-classes, since there is only one Schema A and only a single member-class A, but M(A)+1 DApps); the members are labeled (A,0), (A,1), .., (A,m), (A,m+1), .., (A,M(A)). We have allowed for g (= (Σn G(n))/2 – see fig.) future necessary IoT network pairs for G future non-internetworked Systems, including overseers, whose DApp numbers depend on the number of distinct member DApps (l) served by each network (k) in these single-member-class Networks. As above, there would actually be a separate 'iot' node for each network (where required).
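In the same notation (again a restatement), network A adds the indices (A,0), .., (A,M(A)), and the allowance for future IoT network pairs reads:

\[
g \;=\; \frac{1}{2}\sum_{n} G(n).
\]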
With the memclass-x DApps, development occurs on the host, following the Elastos Developer Documentation and the Elastos Development Tools and Environment. You will need node/npm (and possibly yarn as well, for dependency debugging – DO NOT do 'sudo apt install yarn': the wrong version results! Download the latest version instead), ionic, and the trinity-cli from Elastos (plus an understanding of Ionic development).
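A minimal sketch of preparing that toolchain (the Ionic CLI package name is the standard npm one; the trinity-cli install itself should follow the Elastos Developer Documentation):

# install yarn via npm rather than apt (see warning above)
npm install -g yarn
# install the Ionic CLI
npm install -g @ionic/cli
# with the trinity-cli installed per the Elastos docs, scaffold a project, eg:
ionic start ourdapp blank --type=angular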

You should understand that the devices running the memclass-x and iot-class-x DApps also run the blockchains themselves. Although other Distributed and Centralised BlockChain Technologies exist (eg Hyperledger), with Elastos it is the security advantage that attracts us. Please refer to BlockChains.

A further and easier approach to setting up a Kubernetes Back-End (once you have understood the Juju/Charm technology) can be found at (to start) Juju, Charms and Kubernetes; a minimal bootstrap sketch follows.
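A minimal sketch of that route, assuming a local LXD cloud and the upstream charmed-kubernetes bundle (check the current Juju documentation for versions):

# install juju and bootstrap a controller on the local LXD cloud
sudo snap install juju --classic
juju bootstrap localhost
# deploy the Charmed Kubernetes bundle and watch it come up
juju deploy charmed-kubernetes
juju status --watch 5s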

Here is a figurative overview of our current capacities:

[Figure: itotcca capacity]

We have a github site, CHEIRRS, which includes the path to follow to develop this way.

The following image is a representation of the recent state of our platform. The above “cheirrs” repository reflects this layout in code.

[Figure: kubeeverything – the recent state of the platform]

Final advice: Download Microsoft’s Visual Studio Code Editor for Ubuntu/Debian. Cheirrs!