On Computer Network Communication
There were Networks before the World Wide Web! (Please refer to History Of Computer Communication Networks (wiki).)
The following link is also a Wikipedia article: History Of The Internet (wiki).
I should have just called it “web”.
Packets of Hypertext
transmitted across a web
ordered by the TCP protocol
with destinations mapped onto
the Domain Name System.
Tim Berners-Lee,
:- inventor of the World Wide Web -:
“HyperText is a way to link and access
information of various kinds as a web
of nodes in which the user can browse at
will. It provides a single user-interface
to large classes of information (reports,
notes, data-bases, computer documentation
and on-line help). We propose a simple
scheme incorporating servers already
available at CERN… A program which
provides access to the hypertext world
we call a browser… “
Tim Berners-Lee , R. Cailliau.
12 November 1990, CERN
“I just had to take the hypertext idea
and connect it to the TCP
and domain name system ideas
and—ta-da!—the World Wide Web” [33]

The history of communication on Earth is extensive. Every living thing has means of communication. Trees, insects, molluscs, echinoderms, flowers, the entirety of the plant and animal kingdoms communicate between and within individuals and communities of organisms, as they, and the cells which compose them, organise to feed, reproduce and defend themselves.
While some communication in nature involves the transfer of matter, some heat transfer, some optical and auditory events, and some touch, a great deal of it is chemically based (genetic codes; hormonal codes; pheromone codes) or electro-chemically based (brain, nerves, sensory organs and muscle). Humans distinguish themselves by having developed telecommunication systems, from simple messages in the form of smoke signals, ambulatory verbal messages and message sticks, semaphores and written messages, to modern electronic systems such as telegraph, telephone, radio, radar, sonar, television, facsimile, and now, the internet.
Our capacity to utilise electronic networks for communication has its conceptual roots in Europe, at Königsberg, in the 18th century.
There is much barely-penetrable mathematical network theory, based around the foundations of network analysis as embodied in the Seven Bridges of Königsberg problem, resolved by Leonhard Euler in 1735, when he proved that no such walk exists (Euler Circuits and Walks).
Euler Paths and Cycles are concerned with crossing every edge in a “graph” exactly once, without repeating. The vertices may be visited more than once (thus modelling the problem of crossing every bridge in Königsberg exactly once on a single ‘walk’).
By contrast, a Hamilton Path is concerned with crossing every vertex in a “graph” exactly once without repeating. The edges may be crossed more than once. (in this link both Euler and Hamilton approaches to graphs are explained).
In an Euler Cycle, the Path ends where the Path began. In a Hamilton Cycle, the Path, likewise, ends where it began, such that the initial and final vertices are identical (the only allowable repeated vertices in a Hamilton Cycle or Path)
Notice that in Euler Paths we are looking at crossing (following) edges. In Hamilton Paths it’s vertices we are looking at.
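To make Euler’s criterion concrete, here is a minimal Java sketch (illustrative only; the vertex labels A, B for the river banks and C, D for the islands are my own) which checks the even-degree condition on the Königsberg bridge multigraph:

```java
import java.util.*;

// Minimal sketch: Euler's condition for an Euler circuit in a connected multigraph
// (every vertex must have even degree). The Königsberg bridge graph fails the test.
public class EulerCheck {

    // degree of each vertex, counting parallel edges (bridges) separately
    static Map<String, Integer> degrees(List<String[]> edges) {
        Map<String, Integer> deg = new HashMap<>();
        for (String[] e : edges) {
            deg.merge(e[0], 1, Integer::sum);
            deg.merge(e[1], 1, Integer::sum);
        }
        return deg;
    }

    static boolean hasEulerCircuit(List<String[]> edges) {
        // connectivity is assumed here for brevity; the Königsberg graph is connected
        return degrees(edges).values().stream().allMatch(d -> d % 2 == 0);
    }

    public static void main(String[] args) {
        // Four land masses (A, B = river banks, C, D = islands) and seven bridges
        List<String[]> koenigsberg = List.of(
            new String[]{"A", "C"}, new String[]{"A", "C"},
            new String[]{"A", "D"},
            new String[]{"B", "C"}, new String[]{"B", "C"},
            new String[]{"B", "D"},
            new String[]{"C", "D"});
        System.out.println("Degrees: " + degrees(koenigsberg));          // A=3, B=3, C=5, D=3
        System.out.println("Euler circuit possible: " + hasEulerCircuit(koenigsberg)); // false
    }
}
```

Because all four land masses have odd degree, neither an Euler circuit nor an Euler path over all seven bridges exists, which is exactly Euler’s conclusion.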
The so-called Hamilton Cycles are clearly complementary to the Bridges of Königsberg problem in network theory (see how the edge-based approach would suit an analysis of paths for a postal network where each street must be visited once, whereas a vertex-based approach suits a travelling salesman problem, where only individual addresses need to be visited).
Although often attributed to William Rowan Hamilton, Hamiltonian cycles in polyhedra had been studied a year earlier by Thomas Kirkman, who, in particular, gave an example of a polyhedron without Hamiltonian cycles.[1] Even earlier, Hamiltonian cycles and paths in the knight’s graph of the chessboard, the knight’s tour, had been studied in the 9th century in Indian mathematics by Rudrata, and around the same time in Islamic mathematics by al-Adli ar-Rumi. In 18th century Europe, knight’s tours were published by Abraham de Moivre and Leonhard Euler.[2] (See Hamilton Cycles and Paths)
So, all human and computer-based communication involves the sharing of coded messages between many individuals and devices, which might be connected in a polyhedron-shaped electrical network surrounding the Earth. Modern computerised networks have therefore had to evolve from simple point-to-point connections, which ran “bitstreams” between two points, into current “packet-switching” networks. These convey packets of signals, in coded forms, according to a protocol (TCP/IP) which is expected or “understood” by every node (or “vertex”, if mapped as a graph) through which a packet passes as it makes its own journey to its own destination. The packets of a full transmission arrive asynchronously and are reassembled in the correct order, and the contents are then decoded (HTML/JavaScript/CSS) and “read”, displayed or understood at the receiving end, which is identified as an address on the Domain Name System (map).
It is obviously a far cry from an analysis of “walks” (Euler Walks and Cycles) in a city divided by a river, with seven bridges and two islands (Königsberg), through the converse properties of graphs (Hamilton Paths and Cycles), to a worldwide web of optoelectronic, electronic and wireless “packet-switching” networks, interlinked to successfully support the modern “hypertext”-based HTML browsers, and so much more.
A history of “Computers as Machines“ needs to be considered alongside the history of Electrical and Electronic Networks themselves.
VINTON CERF, ROBERT KAHN, AND TCP/IP
“In the early 1970s, packet-switched networks began to proliferate, with the ARPAnet—the precursor of the Internet—being just one of many networks. Each of these networks had its own protocol. Two researchers, Vinton Cerf and Robert Kahn, recognized the importance of interconnecting these networks and invented a cross-network protocol called TCP/IP, which stands for Transmission Control Protocol/Internet Protocol. Although Cerf and Kahn began by seeing the protocol as a single entity, it was later split into its two parts, TCP and IP, which operated separately. Cerf and Kahn published a paper on TCP/IP in May 1974 in IEEE Transactions on Communications Technology [Cerf 1974].

The TCP/IP protocol, which is the bread and butter of today’s Internet, was devised before PCs, workstations, smartphones, and tablets, before the proliferation of Ethernet, cable, and DSL, WiFi, and other access network technologies, and before the Web, social media, and streaming video. Cerf and Kahn saw the need for a networking protocol that, on the one hand, provides broad support for yet-to-be-defined applications and, on the other hand, allows arbitrary hosts and link-layer protocols to interoperate.

In 2004, Cerf and Kahn received the ACM’s Turing Award, considered the “Nobel Prize of Computing” for “pioneering work on internetworking, including the design and implementation of the Internet’s basic communications protocols, TCP/IP, and for inspired leadership in networking.” from Computer Networking: A Top-Down Approach; Kurose & Ross, 2021
Vint Cerf (left) (By Вени Марковски Veni Markovski – Own work, CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=11513137)
Bob Kahn (right) (By Вени Марковски | Veni Markovski – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=26207416)
However, it should be known that, in the nature of Internet operations, unencrypted data packets may be “sniffed” in transit by sufficiently interested hackers, operators or authorities, and that modern JavaScript allows the collection of many types of network-behavioural and other information about people and devices in the form of “cookies”. Cookies themselves are very powerful: the cookies we implicitly accept onto our devices when we connect to a webserver can communicate information to other, private servers. Despite the use of secure protocols such as SSL/TLS (the “https://” you see in the address bar), and a multitude of other defensive practices, such as anti-virus and anti-malware software, VPNs, firewalls and “tripwires”, the modern World Wide Web remains a very wild and dangerous place, brimming with schemes, strategies and tactics to take advantage of unsuspecting users. For example, “IP spoofing” exploits the fact that the source address in the header section of any data packet may be altered, enabling Denial of Service attacks as well as device masquerading. At a higher systems level, “BGP hijacking” (sometimes referred to as prefix hijacking, route hijacking or IP hijacking) is the illegitimate takeover of groups of IP addresses by corrupting Internet routing tables maintained using the Border Gateway Protocol (BGP),[1][2][3][4][5] which enables masses of information flows to be misdirected to criminal-operated machines. However, see “Security” for an explanation of the method (entirely open-sourced) Rong Chen has used to replace the Domain Name System with his own secured node numbering system for intercommunication, backed by a blockchain for user and micro-component identity generation, storage and process recording.
Actually, the design underlying the system to which I am referring, Rong Chen’s Elastos Carrier 2, was published at New York University in 2002 by Petar Maymounkov and David Mazières: a Distributed Hash Table (DHT) network built on their Kademlia protocol. Such a system does not need or use the Domain Name System, and Carrier 2 uses the Kademlia protocol.
Rong’s system enables secure defence from external and internal threats, while the transactions on the blockchain (in personal or commercial systems) are kept honest automatically by all participants. From another perspective, Rong has devoted the Elastos Project to ensuring that users can be guaranteed of owning their data. That is far from the case in the current cloud-centralised Web2 era (i.e. client devices connecting to server computers in the cloud), with companies from Amazon and Google down to smaller players using JavaScript-based methods to appropriate consumer data in the interests of marketing efforts and other means of making corporate profit, and with more nefarious actors using similar methods to achieve questionable goals, all with data that should be owned and controlled by consumers (since we generate it). You could refer to our “Mobile First Development” page, or to “The Bucordo Project” or the “BlockChains” page, for details of how ‘Trust’ and data ownership are guaranteed in our systems via Elastos and the Bitcoin “Miners”, and some home-grown ingenuity.
Secure Client Connections
with BFT-SMaRt/DHT Integration:
A Banking Example
To take the example of banking, consider the case where a customer needs to visit the website of a bank, log in securely and perform actions on their accounts.

In normal usage, the bank website (before logging in) can only be found via a DNS lookup. To avoid sending insecure information online at login (in case you are being watched by an attacker), you must already have visited your bank’s local branch and asked them to use their own secure internal system to generate a Customer Identifier (usually a number), and possibly later to post you your password or passnumber. This allows you to log in securely at the site found via DNS. Logging in securely assumes a TLS connection (https) and the accompanying encryption of the online message transport. All the foregoing is necessary to connect and perform banking on your own behalf.

An added layer of security could be created by switching to a DHT network, operated by the bank, for future exchanges, and using that network for business. Essentially, the system would rely on a (maintained and updatable) set of bootstrapping Node IDs being transmitted over TLS on the first connection occasion. These nodes would, on the second and subsequent connection occasions, engender a working connection between the client and the bank over a DHT. (A sketch of this first-visit bootstrap follows.)
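As a rough illustration of that first-visit bootstrap, the following Java sketch fetches a node-ID list over ordinary HTTPS and verifies the bank’s signature before trusting it. The endpoint URL, the response layout (signature line followed by the node list) and the out-of-band delivery of the bank’s public key are assumptions made for the sake of the example, not part of any existing bank API:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;
import java.util.Base64;

// Sketch only: first-visit bootstrap. The endpoint URL, response layout
// ("<base64 signature>\n<node list>") and the out-of-band bank key are assumptions.
public class BootstrapClient {

    public static void main(String[] args) throws Exception {
        // Bank verification key obtained out of band (e.g. on a card issued at the branch)
        byte[] bankKeyDer = Base64.getDecoder().decode(args[0]);
        PublicKey bankKey = KeyFactory.getInstance("EC")
                .generatePublic(new X509EncodedKeySpec(bankKeyDer));

        // 1. Ordinary TLS (https) request to the bank's bootstrap endpoint
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("https://bank.example/dht/bootstrap")).GET().build();
        String body = http.send(req, HttpResponse.BodyHandlers.ofString()).body();

        // 2. Split the signed payload: first line = signature, rest = node ID list
        int nl = body.indexOf('\n');
        byte[] sig = Base64.getDecoder().decode(body.substring(0, nl).trim());
        byte[] nodeList = body.substring(nl + 1).getBytes("UTF-8");

        // 3. Verify the bank's signature before trusting any node ID
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(bankKey);
        verifier.update(nodeList);
        if (!verifier.verify(sig)) {
            throw new SecurityException("Bootstrap node list failed signature check");
        }

        // 4. Persist the verified list; later sessions join the DHT from these nodes
        System.out.println("Trusted bootstrap nodes:\n" + new String(nodeList, "UTF-8"));
    }
}
```

The point of the design is that only this very first exchange depends on DNS and the public certificate infrastructure; every later session can start from the locally stored, signature-verified node list.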
Abstract
Traditional online banking systems rely heavily on DNS and TLS
to establish secure connections. However, these systems remain vulnerable
during the initial connection, especially due to DNS-based attacks.
We propose a system where the initial trust bootstrapping
occurs via traditional TLS, but all subsequent interactions are routed via a secure,
bank-operated Distributed Hash Table (DHT) overlay network.
This enhances privacy, decentralizes routing, and ensures forensic traceability.
Problem Statement
When users attempt to connect to their bank’s website,
their browser typically performs a DNS lookup.
This step is exposed to eavesdropping and manipulation, even if the actual login is conducted over TLS.
While TLS protects the data in transit, it assumes that the DNS response was trustworthy.
Furthermore, phishing sites and certificate mismanagement can still exploit this weakness.
Proposed Architecture
Initial Bootstrapping:
The user initiates a connection via a known HTTPS endpoint.
The bank transmits a signed list of DHT Node IDs (e.g., Kademlia or libp2p nodes) over TLS.
Trust Anchoring:
These nodes act as verifiable entry points into the DHT network.
The nodes periodically rotate but are always updated via authenticated TLS sessions.
DHT-Based Operation:
All sensitive or persistent operations post-login (account access, transaction requests)
are routed via the authenticated DHT overlay.
Communications use QUIC or libp2p streams, with identity verification through
cryptographic rehashing of bank-issued IDs (a sketch of this ID derivation follows after this list).
Security and Traceability:
Transaction metadata is stored in immutable records.
BFT-SMaRt or similar consensus ensures Byzantine-resilient replication.
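The phrase “cryptographic rehashing of bank-issued IDs” above admits several readings; one plausible, minimal sketch (the names and input format are my own assumptions, not a specification) is to derive a fixed-width, Kademlia-sized identifier by hashing the bank-issued customer identifier together with a salt:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Illustrative only: derive a 256-bit DHT identifier from a bank-issued customer ID.
// The exact input format (customerId + sessionSalt) is an assumption, not a spec.
public final class DhtIdDerivation {

    static String deriveNodeId(String bankIssuedId, String sessionSalt) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        sha256.update(bankIssuedId.getBytes(StandardCharsets.UTF_8));
        sha256.update((byte) 0x00);                       // domain separator
        sha256.update(sessionSalt.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(sha256.digest()); // 256-bit ID, Kademlia-sized
    }

    public static void main(String[] args) throws Exception {
        // Both parties can recompute and compare this value to verify an identity claim.
        System.out.println(deriveNodeId("customer-00123456", "2024-07-01T10:00Z"));
    }
}
```

Either side can recompute the same digest and compare it against the ID presented on the DHT, so no raw customer identifier ever needs to travel over the overlay.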
Executive Summary
Problem: Online banking currently depends on public DNS,
which creates exposure during the initial lookup phase, even when TLS is used.
Solution: Introduce a DHT-based overlay network post-TLS-authentication,
allowing customers to communicate securely with the bank without relying on DNS.
How it Works:
First visit: Secure TLS session used to receive a list of trusted DHT Node IDs.
Future visits: Communication happens via this DHT, authenticated by cryptographic ID mappings.
Enhances resistance to DNS spoofing, phishing, and eavesdropping.
Benefits:
- Privacy: Removes reliance on DNS.
- Security: DHT overlay is fully authenticated.
- Forensics: Every action can be tied to an immutable record with
timestamped, verifiable metadata.
Use Cases:
- Online banking portals.
- Secure government communications.
- Enterprise intranet or VPN access.
System Design Spec (Simplified)
Components
- Client: Web or mobile banking application.
- Bootstrapping Server: HTTPS endpoint to distribute DHT Node IDs.
- Bank DHT Overlay: Secure libp2p-based routing infrastructure.
- Supernodes: Authority nodes that manage anchoring, timing, and replication.
- Consensus Layer: BFT-SMaRt cluster for trace and record validation.
Flow Diagram
Client -> HTTPS -> Bootstrapping Server
Bootstrapping Server -> [Signed Node ID List]
Client -> Connect to libp2p Network using Node IDs
Client <-> Bank via Secure DHT
Supernode -> Assigns timestamp, blockHeight, slabId
Transaction -> Stored in PostGIS
Merkle root anchored on Ethereum/Elastos
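For the “Merkle root anchored on Ethereum/Elastos” step in the flow above, a minimal sketch of the root computation looks like the following (the leaf encoding and the duplication of an odd trailing leaf are assumptions; a production system would follow the anchoring chain’s exact conventions):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.HexFormat;
import java.util.List;

// Sketch of the "Merkle root anchored on Ethereum/Elastos" step: hash a batch of
// transaction records pairwise up to a single root. Leaf encoding is an assumption.
public class MerkleRoot {

    static byte[] sha256(byte[] a, byte[] b) throws Exception {
        MessageDigest d = MessageDigest.getInstance("SHA-256");
        d.update(a);
        d.update(b);
        return d.digest();
    }

    static byte[] merkleRoot(List<byte[]> leaves) throws Exception {
        List<byte[]> level = new ArrayList<>(leaves);
        while (level.size() > 1) {
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2) {
                byte[] left = level.get(i);
                byte[] right = (i + 1 < level.size()) ? level.get(i + 1) : left; // duplicate odd leaf
                next.add(sha256(left, right));
            }
            level = next;
        }
        return level.get(0);
    }

    public static void main(String[] args) throws Exception {
        MessageDigest d = MessageDigest.getInstance("SHA-256");
        List<byte[]> leaves = List.of("tx-1", "tx-2", "tx-3").stream()
                .map(tx -> d.digest(tx.getBytes(StandardCharsets.UTF_8)))
                .toList();
        // Only this 32-byte root needs to be written to the public chain.
        System.out.println(HexFormat.of().formatHex(merkleRoot(leaves)));
    }
}
```

Only the resulting 32-byte root needs to be written to the public chain; any individual transaction record can later be proven against it with a short inclusion path.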
Trust Model
First connection: Trust bootstrapped using TLS + manual certificate pinning (see the pinning sketch below).
Subsequent sessions: Fully DHT-based with cryptographically validated IDs.
Revocation: Periodic expiration of Node IDs + forced refresh via TLS.
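The “manual certificate pinning” for the first connection could be as simple as the following Java sketch, which compares the SHA-256 fingerprint of the server’s leaf certificate against a value recorded out of band (the host name and pinned value are placeholders):

```java
import javax.net.ssl.HttpsURLConnection;
import java.net.URL;
import java.security.MessageDigest;
import java.security.cert.Certificate;
import java.util.HexFormat;

// Minimal pinning sketch: compare the SHA-256 fingerprint of the server's leaf
// certificate with a value recorded out of band. Host and pinned value are placeholders.
public class PinCheck {

    static final String PINNED_SHA256 = "replace-with-known-fingerprint";

    public static void main(String[] args) throws Exception {
        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://bank.example/").openConnection();
        conn.connect();                                   // performs the TLS handshake
        Certificate leaf = conn.getServerCertificates()[0];
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(leaf.getEncoded());
        String fingerprint = HexFormat.of().formatHex(digest);
        conn.disconnect();

        if (!fingerprint.equalsIgnoreCase(PINNED_SHA256)) {
            throw new SecurityException("Certificate fingerprint mismatch: " + fingerprint);
        }
        System.out.println("Pinned certificate verified: " + fingerprint);
    }
}
```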
Limitations & Risk Considerations
Beware, however: we should not assume that an “Absence of DNS” system, such as Kademlia or other Distributed Hash Table networks, is a panacea for the problems that DNS carries.
Outstanding Issues Even in a DHT-Based System
Initial Trust Anchor Must Be Protected
The very first TLS handshake is still reliant on a secure certificate infrastructure.
If the device or cert store is compromised, bootstrapped node data can be spoofed.
Sybil Attacks and Routing Vulnerability
Without strict node issuance, an attacker could introduce rogue nodes into the DHT.
Controlled overlays mitigate this but require monitoring and rotation enforcement.
Content Poisoning and Misrouting
Cached or stale data can be served unless integrity is verified with signatures.
Versioning and hash validation of metadata is essential.
Node Churn and Reconnection Logic
Clients returning after a long absence may fail to find live bootstrap peers.
Support out-of-band refresh paths or anchor revocation logic.
Latency and Reliability Trade-offs
DHT lookups can be slower than centralized systems.
Add caching, peer clustering, and redundancy where needed.
Legal and Auditing Concerns
Non-deterministic routing paths can complicate compliance, especially for financial data.
Keep auditable trails via anchored transaction metadata and signed routing logs.
Client-Side Security Still a Weak Point
No DHT protects from compromised endpoints.
Employ transaction signing and secure enclaves where possible.
Strategic Outlook:
Federation of Bank-Operated Supernodes
We envision a federated DHT network in which each participating bank
operates its own supernode, managing routing and secure operations
for all its branches. Secondary nodes represent bank branches, divisions,
or internal departments, each connected to their parent bank’s supernode.
Key Advantages:
- Interbank Connectivity: Supernodes from multiple banks form a peer-level overlay for transaction coordination.
- Client Routing: Customers interact with their bank via a secure, pre-authenticated DHT session.
- No Central Governance: Avoids political entanglement by decentralizing control among supernode participants.
## Extended Proposal:
Federated Interbank Supernode Clusters
We propose extending the architecture to support multi-bank cooperation.
Each institution operates its own DHT-based network, anchored by a Supernode
and a set of secondary nodes (branches). These institutional networks
are federated via a secure Supernode overlay to support real-time
interbank interactions and settlements.
### Key Properties
- **Client ↔ Bank**: Encrypted DHT communication bootstrapped from TLS.
- **Branch ↔ Supernode**: High-availability local routing with full traceability.
- **Supernode ↔ Supernode**: Federated interbank DHT coordination layer.
This design ensures:
- Full forensic traceability of client operations.
- Resilience against DNS attacks and central points of failure.
- Cross-bank transaction integrity through BFT-SMaRt consensus.
Future upgrades can support:
- Regional clearinghouses
- CBDC orchestration
- Shared anchoring to public or consortium chains
Visual Architecture:
This setup allows for:
- Secure client-to-bank DHT routing
- Secure inter-branch communication within a bank
- Secure interbank coordination over a supernode mesh
- Fully decentralized governance while maintaining compliance and auditability
[Photographs: Petar Maymounkov | Rong Chen | David Mazières]
Rong Chen’s Elastos Smart Web constitutes a safe world-wide “network operating system” of which devices (with their own operating systems and hardware components) are merely a part.
Rong informs me that the phrase “The Network is the Computer” originated at Sun Microsystems (it is usually credited to Sun’s John Gage).
(Obviously owners of devices retain all the usual capacities and software, but would largely restrict “work” to the BFT-SMaRt/Elastos System.)
However, due to incidents beyond anyone’s control, the development of Carrier 2 has stopped and its documentation was left incomplete, so we have been forced to pivot to “jvm-libp2p”, which is fully operational and well documented. It is, like Carrier, a Kademlia DHT-based system. We are in the process of merging our own code with jvm-libp2p code.
[Linked pages: IT/OT Chain & Cloud | Honor | Ut Filiae | ADNS – Absence of DNS – Enterprise System | ChubbyChecker | Elastos ELA/ESC Blockchain | Blockchain vs Database | Factory 0 | Factory 1 | Factory 2 | Elastos Blockchain]
________________________________________________________________________
A Twin-Networked Enterprise Application

________________________________________________________________________
On Electricity

Some of the key people (Morse, Bell, Maxwell, Hertz, Marconi, Wiener and Shannon) involved in the development of electrical communications are mentioned here: Electrical Communication Networks. For those who followed the Maxwell link, it may be interesting to know that when you bring Einstein’s theory of Special Relativity (1905) into an analysis of an oscillating electron (such as you get in a vertical “rod” antenna driven by a radio transmitter, for example), Maxwell’s equations may be deduced, with the revelation of a specific relation between electricity and magnetism.
[The value of mathematical equations in physics is that they can be used to predict otherwise unforeseeable phenomena yet to be confirmed, and also help encapsulate systems succinctly. Words, diagrams and pictures alone are inadequate to the tasks before physicists and engineers. Words, diagrams and pictures alone may help assemble and operate an electrical machine, for example, but they are insufficient to design one. Physics (and science generally) is phenomenology, and electricity is a phenomenon. Nevertheless equations are not Reality itself, merely representing the current state of scientific hypotheses in a field of enquiry. “All science depends on codes of many types. Reality can always outflank codes. We are only human. Ask any doctor or engineer.” 🙂 Ed.]
Electricity “is” (but is also “more than” .. – see Gödel’s Incompleteness Theorems) all the things we know in the formats of words, diagrams, pictures, experiments, experiences and equations about the phenomenon called ‘Electricity’.
The underlying relativistic relationship means it is only necessary to specify the behaviour of an Electric Field (E) to completely determine the behaviour of what Maxwell had to treat as a separate force, Magnetism (though the two are related by Maxwell’s Equations). The related magnetic field is usually denoted H, with the corresponding flux density denoted B. That’s correct: Electricity and Magnetism are both parts of a single physical phenomenon, when understood in the light of relativity.
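For reference, these are Maxwell’s equations in SI form; the coupling between E and B on the right-hand sides is what the relativistic viewpoint re-expresses as two aspects of a single field:

```latex
% Maxwell's equations (SI units). The coupled E and B fields are what relativity
% unifies into a single electromagnetic field.
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0},
  &\qquad \nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t},\\
\nabla \cdot \mathbf{B} &= 0,
  &\qquad \nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\end{aligned}
```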
Following the “Wu Experiment” (1956), Glashow, Salam and Weinberg later went further, helping physicists to understand that while, at present-day temperatures, Weak Nuclear interactions are distinct from Electromagnetic interactions, at temperatures of around 10¹⁵ K (very close to the Big Bang) the Electromagnetic Force and the Weak Nuclear Force become unified, in the so-called ‘Electroweak Force’.
Aside from this Electroweak Force, the remaining Universal Forces are Gravitation and the Strong Nuclear Force.
Heat energy and Mechanical forces (including Sound, Pressure, Stress and the Work done by a Heat Engine) are actually enabled only by the interaction of these 3 fundamental forces (or really 4, at our temperatures, the Strong Force, the Weak Nuclear Force, the Electromagnetic Force and Gravity), supporting the atoms, crystals and molecules of matter, and giving the appearance of large scale surfaces and bodies of varying strengths and characteristics, etc, or relying on the physical and electrochemical properties of substances in flows and reactions.
If we leave out Gravity, then we have the Strong Force which is confined to an electrically “positively charged” and “heavy” nucleus, binding neutral & positively charged particles very tightly (compared to the lightweight and “negatively charged” electron clouds, relatively distant from the nucleus). Outside of the Strong Force, whose region of influence is restricted to the tiny nuclei at the centre of atoms, all chemical, electrical, electro-chemical and mechanical properties of materials (atoms, ions, molecules and crystals) and inter-material forces are fundamentally electrical in nature. The polar nature of electricity determines a lot when it comes to the form of the equations governing the orbits of the negatively charged electrons around the positive nuclei, which mediate the forces between molecules, atoms and crystals. The Gravitational force is comparatively weak with respect to Electricity on Earth, yet near a Black Hole, Gravity wins .. for a while ..
Astronomical history began with the formation of photons, electrons and quarks during the Big Bang (about 13.8 billion years ago). There was then a phase (mysteriously) where the ‘Higgs Field’ came into effect, conferring mass on quarks and electrons (but not photons). Next came the formation of atoms and ions in galactic clouds, and then in stars, with quarks forming protons and neutrons in atomic nuclei, and their electrons eventually coming to orbit in strict patterns, first uncovered by chemists. Also see the Quantum Chromodynamics of the Strong Nuclear Force, quantifying the way those galactic forces have managed to bring neutrons and protons together in the nuclei of atoms, with attracted electrons bound to them, so that most of the universe’s MASS has “condensed” from the ENERGY involved in the stellar-forced binding of quarks together to form protons and neutrons (i.e. where galactic/stellar gravitational forces bring matter into close enough proximity for the attractive Strong Nuclear Force to come into effect). The fundamental particles associated with the Strong Force are called gluons; the particle associated with conferring mass on quarks and electrons is the Higgs boson. Gravitational waves have now been directly detected, although the “graviton”, the hypothesised fundamental particle associated with Gravity, has not. There are 17 particles in the current Standard Model of Matter; gravitons and phonons (see below) sit outside it.
Einstein’s scientific authority for bringing the new so-called “transformation metric” of a non-Euclidean space (a foundation of Electricity as much as of Relativity) into his theory of (universal-scale) physical reality, the famous factor 1/√(1 − v²/c²), came only after Hertz’s confirmation of the existence of the radio waves that had been predicted originally by Maxwell himself. It also required the putting to rest of the concept of an ‘Aether’ as the medium which transmits light in the universe. Einstein was able to confidently assume that light (electromagnetic radiation) is conveyed directly along rays embedded (as it were) in space-time itself; there is no medium besides. This was conclusively demonstrated in the Michelson-Morley Experiment.
“In physics, Lorentz transformations became [well] known [by Physicists] at the beginning of the 20th century, when it was discovered that they exhibit the symmetry of Maxwell’s equations. Subsequently, they became fundamental to all of physics, because they formed the basis of special relativity in which they exhibit the symmetry of Minkowski spacetime, making the speed of light invariant (as demonstrated in the Michelson-Morley experiment) between different inertial frames. They (the Lorentz Transformation Equations) relate the spacetime coordinates of two arbitrary inertial frames of reference with constant relative speed v. In one frame, the position of an event is given by x,y,z and time t, while in the other frame the same event has coordinates x′,y′,z′ and t′.” (See link above). The work done on what came to be known as Lorentz Transforms was crucial to Einstein’s ideas.
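For concreteness, the standard Lorentz boost along the x-axis with relative speed v, together with the factor quoted earlier, reads:

```latex
% Standard Lorentz boost along the x-axis with relative speed v.
\begin{aligned}
t' &= \gamma\left(t - \frac{v x}{c^2}\right), \qquad
x' = \gamma\,(x - v t), \qquad y' = y, \qquad z' = z,\\
\gamma &= \frac{1}{\sqrt{1 - v^2/c^2}}.
\end{aligned}
```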
Architects, (non-Electrical) Engineers and Builders work in Euclidean Space. Einstein, Lorentz (et al.) broke that mould.
Light and Radio Waves, Photons, Black Bodies, Planck’s Experiment, Einstein’s ‘thought experiments’, Electrons, Quantum Mechanics, Phonons, Mass, Energy, Bosons and Gravitation
All of this opened the door to non-Euclidean spaces, previously mathematical curiosities only, once discrepancies between Newton’s Euclidean foundations and scientific reality began appearing, at first in Electromagnetism (but also in Astronomy and Cosmology). From the early 1900s onward there were no absolute “right angles”, and the parallel lines that never crossed according to Euclid’s Fifth Axiom of Geometry now crossed, more like meridians of longitude on the Earth’s surface than parallels of latitude, except immersed in the cosmos rather than confined to the Earth’s surface. By the way, Einstein simply needed to apply two traditional conservation principles (in relativistic form) to a thought experiment in which a single photon collides with a mass at rest. He equated the energy before the collision (with the mass at rest, all of the system’s kinetic energy is the photon’s) to the system’s energy after the collision, required the mass to be a perfect Black Body (thus absorbing the photon entirely), and also applied the relativistic conservation of momentum. He was then able to demonstrate easily that the total system energy of the Black Body plus absorbed photon, now slightly “boosted” because the photon carried momentum, is E = mc². The result in effect ‘drops out’ of the relativistic conservation principles when you solve the equations representing the two principles simultaneously. The same principles (conservation of momentum and energy) are applied in modern particle colliders, and in Newton’s mechanics there is an ancestral set of such principles. The main point is to remark on the way Relativity has unified Electricity and Magnetism (and, to a certain extent, the Weak Nuclear Force), as well as unifying our concepts of Mass and Energy. Euclid’s Fifth had to fall to make way for this new knowledge.
E & H are examples of dependent “Vector Fields”. At every point in a vector field there exists a force on a body with magnitude and direction ie a vector. Another example of a vector field is Earth’s familiar Gravitational Force Field (approximately pointing to the centre of our planet). Fields of temperature, mass or energy values are examples, on the other hand, of “Scalar” fields. Multi-dimensional “Tensor fields” exist in physics (eg Mechanical Stress in solids and viscous fluids, and space-time curvature Tensors in General Relativity).
.. Just for clarity, Special Relativity (1905) is concerned with reconciling physics to the space-time transformation ‘metric’ revealed by Hertz’s experimental verification of the existence of radio waves, and predictable from Maxwell’s equations before him. A ‘metric’ here refers to a formulaic way we may consistently model relative velocities, and other physical properties, between two observers travelling separately in space-time, in “light” of the fact that there is really nowhere to be taken as a zero-velocity point, and that we may only consistently measure velocities relative to the local velocity of light (radio waves equally). Einstein discovered the metric inherent in Maxwell’s equations, relating to electromagnetism (building on earlier work completed by Hendrik Lorentz), and argued that there can be only one metric in this universe, and that therefore Newton, a man rumoured to have walked out of his first opera, was wrong (in 1687) to assume a Euclidean ‘orthogonal’ space for physical reality, which naturally seemed to separate time from space.
(As a very young boy, around the age of 4, I had heard that radio waves and light waves were both parts of the same phenomenon. I was at my Grandfather’s leather factory one day and I decided to test to see whether I could actually outrun the waves, and starting in one building with the radio there playing something, I attempted to run as fast as I could to “beat” the same waves to the radio in the second building. Needless to say I failed miserably. I decided not to take the initial disappointment too seriously yet, however, as I reasoned the antenna transmitting the signal could have been situated such that I would catch the waves by running the opposite way, ie from the second building back to the first. So, listening very carefully to where the radio was “up to” just before I raced off, I tried to run even faster. I was of course further disappointed that I seemed to have no chance running either way, of outrunning “light”. That night I asked dad how fast light travels. He told me it was 186,000 miles per second. I suddenly understood a lot more about radio and light.)
The metric of Maxwell’s equations revealed skewed axes in 4 dimensions in which the upper limit to universal velocities is the speed of light. By progressing from this point, Einstein was able to unite our concepts of Electricity and Magnetism, as well as uniting Energy and Mass. Our concepts of Energy and Mass were unified when Einstein applied the Conservation of Energy Principle and the Conservation of Momentum Principle (in special-relativistic form) to a collision, in a thought experiment, where a so-called Black Body fully absorbs a colliding photon. By equating the total Energy and Momentum prior to the collision with the total Energy and Momentum after, and utilising the Planck relation for the energy of a photon, E = hf, Albert was able to show with simple algebra and calculus that the total energy of that system is E = mc², where m is the mass of the black body (the photon has zero mass itself) and c is the velocity of light. An essential part of the reasoning is the assignment of momentum to a quantised object from its energy (E = hf), despite its having zero mass: the photon’s momentum, p = hf/c.
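As a compact reference (a summary of the standard relativistic relations, not a reproduction of Einstein’s own derivation):

```latex
% Photon energy and momentum, the relativistic energy-momentum relation,
% and the rest-energy special case.
\begin{aligned}
E_\text{photon} &= h f, \qquad p_\text{photon} = \frac{E_\text{photon}}{c} = \frac{h f}{c}
  \quad\text{(momentum without rest mass)},\\
E^2 &= (p c)^2 + (m c^2)^2
  \quad\text{(relativistic energy--momentum relation)},\\
E_0 &= m c^2 \quad\text{for a body at rest } (p = 0).
\end{aligned}
```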
Planck’s analysis of black-body radiation (1900), a few years after Hertz’s experiment, suggested to Einstein an interpretation in which light (and thus radio waves) is actually composed of “particles” or quanta, later called photons. Einstein published this deduction in 1905, giving birth to a field of research still continuing today, called quantum mechanics. As noted above, Planck’s work helped Einstein deduce the relationship between the energy and mass of a body by giving him the concept of the “photon”, which he introduced to account for Planck’s results and for the photoelectric effect. Due to the incompatibility of the classical wave-based theory of light with a particle-based theory at the time, the effort to find equations linking the new ‘quantum mechanics’ to the classical theory (adequate until Planck’s work) led scientists to a probabilistic theory which Einstein always disowned. Incidentally, Erwin Schrödinger, one of the inventors of quantum mechanics, also believed that a deterministic underlying continuous theory is possible in physics. The possibility that an event could happen simultaneously in many spaces is required for the theory to work. “Strings”?
The answer in any case seems to lie in the success Schrödinger had in 1926 (and Werner Heisenberg at the same time) with an approach that replaced the (classical) value for total field Energy (E) with the (quantum) hf in the (classical) theoretical “Work Function” of a single-frequency (laser) light field. The basic experimental relation E = hf (the energy of a photon equals its frequency f multiplied by Planck’s constant h) is reliable and verifiable by anyone who wants to repeat Planck’s experiment. The result of Schrödinger’s substitution in the classical field equation gave only the quantum field equations for the particular case of a “geometrical optics” light field. Schrödinger had to uncover a Partial Differential Equation (which would be the general Quantum Mechanical Wave Equation) whose solution space allowed these “quantum-mechanical geometrical optics” field equations as solutions, even though they were at that stage particular to a generalised (massless and unbound) photon field, not to other particles such as electrons. The answer he sought had to approach the behaviour of the Classical Wave Equation as the value of Planck’s constant is made to approach zero. (This requirement is a way of mimicking the idea that energy in a classical light field is taken as independent of frequency.) The equations required also had to provide solutions which closely match observed experimental results for other particles when applied as theoretical models of those particles in experiments.
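The equation Schrödinger arrived at, the time-dependent Schrödinger equation for a particle of mass m in a potential V, together with the Planck relation used in the substitution, is:

```latex
% Time-dependent Schrödinger equation (1926); in the formal limit hbar -> 0
% its solutions recover classical behaviour, as required above.
i\hbar \frac{\partial \psi}{\partial t}
  = -\frac{\hbar^2}{2m}\nabla^2 \psi + V(\mathbf{r},t)\,\psi,
\qquad E = h f = \hbar \omega .
```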
In 1926, Schrödinger, and at the same time, independently, Heisenberg, succeeded in finding slightly different versions of the same Equations. The results showed the required tendency towards classical behaviour (say, at more human-sized scales) as Planck’s constant was forced towards zero (in the theory).
The theoretical Schrödinger and Heisenberg treatments also matched actual results from practical energy absorption spectral experiments with Hydrogen, in a model of the Hydrogen Atom, with its single electron as a wave-particle, obeying the new Wave Equation, absorbing energy in stable quantum stages as predicted. This meant the Quantum Wave Equation applied to electrons as well as photons (theoretically).
Then, the wavelike nature of electron beams themselves was experimentally established when Electron diffraction, in fact, was observed (1927) by C.J. Davisson and L.H. Germer in New York and by G.P. Thomson in Aberdeen, Scotland, thus supporting an underlying principle of quantum mechanics, “Wave Particle Duality”.
The Quantum Mechanical Wave Equation also applies to sound/pressure/stress waves in the limit, and thus there exist “phonons”, or stress/pressure particles, since the principles of energy quantisation apply equally to sound/pressure/stress energy. This includes both types of seismic body wave: the primary (compressional) wave, which arrives first, and the secondary (shear) wave, which arrives later; similarly, with supersonic objects, the sonic boom’s compressive front arrives ahead of the trailing disturbance. These waves (all sound/pressure and stress) are carried in the final result by phonons, in rays spreading out from the source(s) at the speed of sound in the medium. (Incidentally, heat energy is also, at the submicroscopic level, a quantised phenomenon, being stored and transferred in the form of phonons no different from sound/pressure/stress waves. The exception to this is radiant heat energy, carried by photons, which then interact with matter to produce heat as phonons. It’s all about vibrating matter with phonons. There is no stress or pressure or sound or non-radiant heat in empty space.)
The full classical treatment of mechanical waves involves the three-dimensional stress tensor T in a (Euclidean) space-and-time continuum. A tensor field in space has 3 perpendicular “normal” or “principal” stress values at a point and 3 “shear” stress values at the same point; there are thus 6 independent stress values per point in space in the tensor field. Shear corresponds to rotation or torsion (the shear wave) and normal refers to tensile or compressive forces (the primary wave). The elements of the stress tensor at each point in the spatial field vary (“vibrate”) in time and space. A “solution” to a particular tensor wave-field differential equation (the particular Classical Wave Equation in the medium or continuum) is required as a “function” specifying the 6 values of stress at every point in the spatial and temporal field. The treatment needed to involve and connect the molecular, atomic and sub-atomic levels (quantum vibrations and phonons) to the higher-level continuum-mechanics treatment (classical stress waves: think dynamic structural or fluid loading and forcing) is more complicated.
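For reference, the scalar form of the classical wave equation and the symmetric Cauchy stress tensor with its 6 independent components (3 normal, 3 shear) described above are:

```latex
% Scalar classical wave equation for a disturbance u travelling at speed c_s,
% and the symmetric stress tensor: 3 normal and 3 shear components per point.
\frac{\partial^2 u}{\partial t^2} = c_s^2 \,\nabla^2 u,
\qquad
\boldsymbol{\sigma} =
\begin{pmatrix}
\sigma_{xx} & \tau_{xy} & \tau_{xz}\\
\tau_{xy} & \sigma_{yy} & \tau_{yz}\\
\tau_{xz} & \tau_{yz} & \sigma_{zz}
\end{pmatrix}.
```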
In the early days of Quantum Mechanics, everything was done in pseudo-Euclidean Spaces (although involving “imaginary” numbers and “complex planes”), however Paul Dirac was influential in pushing back boundaries towards reconciling General Relativity with Quantum Mechanics.
Albert’s Theory of General Relativity (1915) had gone further than his Special Theory, as Special Relativity still rested upon a ‘flat’ or ‘inertial’ (non-accelerating) space-time cosmology, whereas the General Theory concerned itself with further revelations, now about Gravitation: specifically, that it is linearly related to the local magnitude-and-direction-of-curvature (a vector, perpendicular in space-time, to the tangent hyperplane on the curved surface of our universe at the local point to be measured) of ‘our’ space-time inside a “hyper-volume” (possibly a multiverse) of a larger number of dimensions (larger than 4, but otherwise unspecified) which is outside our universe. Much of the reasoning around the nature of Gravity and Acceleration came down to the question as to why should a body’s inertial mass be identical to its gravitational mass? In Newton’s terms: why should the ‘m’ in F = ma (where ‘a’ represents a real temporal rate of change of velocity) be the same as the ‘m’ in F = mg (where g represents a potential, giving a body’s “weight” in a field of Gravity)? There is no doubt that the identical “m”s are reliable facts, so backtracking from truth to cause was what was called for. Another little human feat accomplished by Albert Einstein.
As a result of Einstein’s deliberations and reasoning, he was able to develop equations enabling him to accurately account for the anomalous precession of the orbit of Mercury, which Newtonian theory (in a Euclidean space, with no gravitational influence on the path of light rays) could not explain, and to predict the deflection of light rays passing close to the Sun, due to the curvature of space-time caused by the Sun. See Geodesics in General Relativity.
This actual curvature of space-time is caused by the presence of matter (such as the Earth, the Sun, a pencil or a galaxy), and Einstein gave equations which accurately predict the behaviour of our solar system, as well as of real galaxies, contrary to Newton’s inconsistent predictions. (Although Einstein was never good with pencils: they never weigh enough and they move too slowly. Moreover, it was Newton’s mechanics that enabled the development of the fundamental impulse/momentum equation of rocketry (refer to our page Computers as Machines regarding “the girls” who programmed one of the first computers, not using software as it did not exist, and the solution they produced) and the now ubiquitous Finite Element Method of stress and strain analysis, for the Apollo rockets that took men to the Moon successfully. In addition, any navigator since well before Newton would have been happy to plot the course to the Moon given the technology available in 1969, as it was done in Euclidean space, using regular timekeeping devices and astronomical maps based on observations little different from those of prior centuries.) Incidentally, there was a similar “zeitgeist” moment between Isaac Newton in England and Gottfried Leibniz in Germany (as, for example, between Schrödinger and Heisenberg), where both men appear to have invented the same ‘Calculus’ ideas at similar times, but developed them slightly differently. Actually, Leibniz’ formulation lends itself more readily than Newton’s to Finite Element Analysis, and to many other areas of physics and engineering, as it employs generalised co-ordinates from the outset, whereas Newton’s analysis relied on purely physical quantities, such as distance, time, velocity, acceleration, mass, force, momentum and impulse, work and energy.
Thus we have started by asking about electricity (and magnetism), and a long series of experiments has shown that, on the one hand, electrical phenomena mediate our lives and environment (when you add gravity to the mix) almost entirely through their importance in the various physico-chemical bonding properties of matter (in its various types). However, in outer space, and in the absence of matter, light exists in a classical fashion as well as a quantum fashion, and light is electromagnetic waves. The quantum side of things is represented in light by photons: they are massless but not momentumless. On the classical side, the Wave Equation of classical real partial differential equation theory used to be considered the defining expression of light (and sound) transport; the complex partial differential equation of quantum mechanics stands in contrast to it. In quantum mechanics, allowance is made for waves that also behave like particles, in the Principle of Wave-Particle Duality; in the classical theory there is no such allowance. This does not stop the classical wave equation from being very useful in areas ranging from radio-astronomy to communications. In terms of the engineering of electrical power systems, the quantum mechanical wave equation is useful for the ways in which it explains the quantum flows occurring in generators and motors, which are nevertheless handled quite adequately by the classical theory of electromagnetism as embodied in Maxwell’s Equations. It is worth remembering that the “metric” of space-time implied by Maxwell’s Equations is the foundation of Special Relativity; General Relativity is almost entirely about Gravitation.
It is also worth remembering that despite the way the yanks push Thomas Edison as some sort of foundational influence in the classical analysis and practice of electrical devices, Edison was quite dull mathematically, and it was in fact Nikola Tesla, a Serbian engineer born in what is now Croatia, who invented our modern alternating-current generators and motors, and the entire three-phase electricity production and distribution system in use the world over. Meanwhile Edison laboured with direct-current devices, being unable to grasp the theory necessary to develop alternating-current devices. All Edison’s company, General Electric, could do was to wait until Tesla’s patents expired and then copy his work. Ask about the americanisation of History.
However, much has happened in physics since publication of the General Theory of Relativity in 1915, starting with quantum mechanics: Schrödinger, Heisenberg, Dirac, and Stephen Hawking’s life devoted to reaching past Einstein (working out the properties of Black Holes, whose existence is now confirmed, and even predicting their “evaporation”, with the return of matter and “information” to this universe, though that prediction still awaits observational confirmation). Go search, and remember that although God may not play dice, people may be required to, in physics, because the human intellect needs a way to comprehend wave-particle duality and many other probabilistic phenomena. For example: how could the Higgs boson, the particle thought to be responsible for conferring “mass” on certain sub-atomic objects (such as electrons and quarks), be related to the universal curvature-of-space-time tensor in a generally-relativistic quantum mechanics? How would a Higgs boson be related to gravitons, the hypothesised quanta of the gravitational field, given that gravitational waves themselves (as emitted, for example, by binary pulsars and merging black holes) have now been confirmed? Moreover, how can a universe of fundamental particles be modelled in a completely relativistic manner, given that each particle would need to have its own time and space co-ordinates, and physical properties, relative to every other particle, and these particles and their properties would all need to be enumerated in a final analysis, with motions specified relative to each other, within a non-inertial or curved cosmology? Could armies of quantum computers even get close to completing such an astronomical project? What would we specify as initial and boundary conditions? (See Herbert Goldstein et al.: Classical Mechanics.)