.. and the People who made them possible
There is evidence that people have used objects to assist in computation and arithmetic since as early as 19,000 BCE. The following wiki article is a summary of the known history of computing before 1950, although, as so often, the activities of Alan Turing have been minimised and excerpted.
Notice that the concept of zero has not always existed: although it existed in ancient Egypt, it was not always accepted in many parts of the world, sometimes as a proscription of religion and society. If we consider the English word for “an amount of zero” we might choose “none”. In English usage, the way to express “none” of a collection is to say “There is none of them”, although if the possible expected number were more than one it is also acceptable to say “There are none of them”. Notice that in either case “none” is treated as either singular or plural. This leaves no linguistic space for an “absence” type of grammatical number within the verb “to be”, unless we say “to not be any ..”, “there is not any”, or “there are not any”, where in all cases the Grammatical Number of the verb “to be” is still either singular or plural. Grammatical Number is restricted in English, as in many other languages, to singular or plural. In order to cope with numbers, a language must allow an arithmetic and mathematical system that deals with numbers in their own right, beyond the strictures of Grammar, linguistic and lexical logic, and Religion. Zero’s History
The English mathematician George Boole (1815-1864) invented mathematical, or symbolic, logic and uncovered the algebraic structure of deductive logic, thereby reducing it to a branch of mathematics.
In 1931, Kurt Gödel (1906 – 1978) of Vienna University, Austria, published a paper on a universal formal language based on arithmetic operations. He used it to encode arbitrary formal statements and proofs, and showed that formal systems such as traditional mathematics are either inconsistent in a certain sense, or contain unprovable but true statements.
This result is often called the fundamental result of theoretical computer science (see Computing before 1950, above). Previously, David Hilbert had bravely attempted to place logic and mathematics on a consistent footing; Gödel’s work effectively overturned Hilbert’s entire project.
The result points towards the idea that problems cannot be solved by numerical computations (or, for that matter, by any formal system) in a way that is independent of the program designers’ plans and definitions. There is no general arbiter of “truth” beyond some simple and restricted arithmetical and mathematical systems, and even then only because the “experts” agree on what is “true”. Arithmetic and Logical “truth” are hardwired into the Arithmetic/Logic Unit (ALU) of a computer, just as operational constraints are hardwired into the entire Central Processing Unit, both using “logic gates”.
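As an illustration of how arithmetic “truth” can be built from logic gates, here is a minimal Python sketch (my own toy example, not tied to any particular hardware) of a half-adder composed of an XOR gate and an AND gate:

    # A half-adder built from two "logic gates": XOR produces the sum bit and
    # AND produces the carry bit. Real ALUs chain such adders to do arithmetic.
    def xor_gate(a, b):
        return a ^ b

    def and_gate(a, b):
        return a & b

    def half_adder(a, b):
        """Add two single bits, returning (sum_bit, carry_bit)."""
        return xor_gate(a, b), and_gate(a, b)

    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "->", half_adder(a, b))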
Division of numbers in general would be impossible without a great deal of planning and defining. Division by zero is an example of a concept, or attempted action, which befuddles any mathematical system or computational machine. Dividing by zero has had to be “outlawed” as producing an undefinable answer (“infinity”). But is zero divided by zero equal to one? No! – we are instructed that it may equal zero, or that it may be undefined as well. Further, when you dive into the concepts of limits of sequences and series of numbers, the waters can be very treacherous, appearing to allow zero divided by zero to take many possible definable values, including “unity”.
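A small Python sketch of these points, assuming nothing beyond the standard library: exact division by zero is simply refused, floating-point arithmetic falls back on special values, and different “zero divided by zero”-style limits can head towards quite different answers (one, zero, or no finite value at all):

    import math

    # Exact division by zero is refused outright ...
    try:
        print(1 / 0)
    except ZeroDivisionError as err:
        print("refused:", err)

    # ... while IEEE-754 floating point falls back on special values.
    print(math.inf - math.inf)                    # nan, i.e. "undefined"

    # Three 0/0-style limits as x -> 0, each heading somewhere different:
    for x in (0.1, 0.01, 0.001):
        print(math.sin(x) / x, (x * x) / x, x / (x * x))
    # sin(x)/x -> 1 ("unity"),  x^2/x -> 0,  x/x^2 -> infinity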
These concepts of limits of sequences and series were necessary, in Newton’s and Leibniz’s Calculus opuses, to actually define the “derivatives” and “integrals” of (differentiable, i.e. locally “linear”) functions in terms of other known or defined functions.
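As a minimal illustration of the limiting process behind a derivative (my own toy example, not drawn from the text): the difference quotient of f(x) = x² at x = 3 closes in on the exact derivative 2x = 6 as the step h shrinks towards zero:

    def f(x):
        return x * x

    x = 3.0
    for h in (1.0, 0.1, 0.01, 0.001):
        # The difference quotient (f(x+h) - f(x)) / h tends to f'(3) = 6.
        print(h, (f(x + h) - f(x)) / h)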
If we attempt to subtract a number larger than a first number from that first number, we need to have “negative” numbers defined for us, and rules for their manipulation to be in place, before we can proceed.
The same situation exists when mathematicians decide to define
√(-1) = i (“j” to engineers).
Normally, in “real mathematics” the square root of -1 is undefined (much like the answer when dividing by zero), since any real number, whether negative or positive, gives a non-negative answer when multiplied by itself. So there would appear to be no such thing as a number which, when squared, gives a negative numerical answer. In “complex mathematics”, by defining i = √(-1) (j = √(-1) for engineers), much then becomes possible in the analysis of systems of “linear partial differential equations” (in Calculus and Variational Calculus), underpinned by the mathematics of “Complex Analysis”, which is the basis of much of physics and engineering. (Notice how one of the cube roots of -1 is Real rather than Complex:
∛(-1) = -1, a Real Number,
yet all four values of ∜(-1) are Complex:
∜(-1) = (1+i)/√2,
or = (-1+i)/√2,
or = (1-i)/√2,
or, fourthly, = (-1-i)/√2.)
The “experts” make fundamental decisions for us, to guide us and to ensure internally consistent systems; maybe, in time, we learn to understand why. “Why does 1+1=2?”, children might ask. “Because I said so” seems, at first, to be the only answer some of them receive.
In order to follow the reasoning about “∜(-1)” you need to understand that, just as with the roots of unity, there are 2 square roots of -1 (i and -i), 3 cube roots of -1, 4 fourth roots of -1, and so on. The method of “calculating” them involves the base of natural logarithms “e” (a so-called Transcendental Number), the notion that the analogue of the Real Number Line in Real Analysis is the Complex Plane in Complex Analysis, and Euler’s Formula in complex analysis (“e” stands for “Euler”, although Jacob Bernoulli actually discovered the number while investigating continuously compounded interest). See z⁴ = -1 (z complex).
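As a sketch of that calculation (standard library only): Euler’s Formula gives the n-th roots of -1 as e^{i(π + 2kπ)/n} for k = 0, 1, …, n-1, and for n = 4 this reproduces the four values of ∜(-1) quoted above:

    import cmath
    import math

    def roots_of_minus_one(n):
        """All n-th roots of -1, read off the unit circle via Euler's formula."""
        return [cmath.exp(1j * (math.pi + 2 * math.pi * k) / n) for k in range(n)]

    for z in roots_of_minus_one(4):
        # Each printed z is one of (+/-1 +/- i)/sqrt(2); z**4 comes back to -1
        # (up to floating-point rounding).
        print(z, "  z**4 =", z ** 4)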
The previous generations of “experts” are needed to guide us, since, as you can see, there are inherent subtleties and traps even in performing simple divisions and subtractions of numbers. More complex systems hide more devious traps and inconsistencies, and the potential to obtain “infinity”, “undefined”, and sometimes unexpected “zeros” in results (e.g. a mathematical “function” vanishing at a “zero” and causing related logic to send a register towards infinity, or otherwise making the program inoperable at that point) accounts for many logical conundrums.
Generally, internal inconsistencies, “poles” (“infinities”), some zeros, and attempts to use undefined or forbidden objects as input, or to produce undefined or forbidden results as output, give rise to the need to “throw” errors to be “caught” and handled by the software.
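In Python terms (a minimal sketch, not any particular application’s error-handling policy), such a forbidden operation surfaces as an exception that the software “catches” and deals with:

    import math

    def safe_real_sqrt(x):
        """Return the real square root of x, catching the undefined case x < 0."""
        try:
            return math.sqrt(x)        # raises ValueError for negative input
        except ValueError:
            print("caught: sqrt is undefined for", x, "in the Real numbers")
            return float("nan")        # hand back an explicit "undefined" marker

    print(safe_real_sqrt(2.0))
    print(safe_real_sqrt(-1.0))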
Alan Turing (1912-1954)
is often called the father of modern computing. He was a brilliant mathematician and logician. He developed the idea of the modern computer and of artificial intelligence. During the Second World War he worked for the Allies breaking the enemies’ codes, and it has been estimated that this work shortened the war by two (and up to four) years.
Although he did not manage all this alone (see Bomba, Bletchley Park, Bombe, Colossus), Turing was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing Machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science and artificial intelligence.[10] Despite these accomplishments, he was never fully recognised in his home country during his lifetime, due to his homosexuality, which was then a crime in the UK.
Sadly, in actions which must have been devastating to him, Alan was prosecuted in 1952 for homosexual acts; the Labouchere Amendment had mandated that “gross indecency” was a criminal offence in the UK. He accepted chemical castration ‘treatment’, with DES, as an alternative to prison. Alan Turing died in 1954, 16 days before his 42nd birthday, from cyanide poisoning. An inquest determined his death as a suicide, but it has been noted that the known evidence is also consistent with accidental poisoning.
John von Neumann (1903 – 1957)
In 1945, mathematician
John von Neumann undertook a study of computation that demonstrated that a computer could have a simple, fixed structure, yet be able to execute any kind of computation, given properly programmed control, without the need for hardware modification. Von Neumann contributed a new understanding of how practical fast computers should be organised and built; these ideas, often referred to as the stored-program technique, became fundamental for future generations of high-speed digital computers and were universally adopted. The primary advance was the provision of a special type of machine instruction, the conditional control transfer, which permitted the program sequence to be interrupted and reinitiated at any point (similar to the system suggested by Babbage for his Analytical Engine), together with the storing of all instruction programs, along with the data, in the same memory unit, so that, when desired, instructions could be arithmetically modified in the same way as data. Thus, data and program took the same form.
The von Neumann architecture is a design model for a stored-program digital computer that uses a processing unit and a single separate storage structure to hold both instructions and data. It is named after the mathematician and early computer scientist John von Neumann. Such computers implement a universal Turing machine and have a Sequential Architecture. The terms “von Neumann architecture” and “stored-program computer” are generally also used interchangeably.
A stored-program digital computer is one that keeps its programmed instructions, as well as its data, in read-write, random-access memory (RAM). Stored-program computers (e.g. von Neumann’s work on EDVAC, which led to a proposal by Turing to develop the Automatic Computing Engine – ACE) were an advance over the earliest computers of the 1940s, such as Colossus and ENIAC, which kept program and data separate and were programmed by setting switches and inserting patch leads to route data and control signals between the various functional units. In the vast majority of modern computers, the same memory space is used for both data and program instructions.
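A minimal sketch of the stored-program idea, with a purely hypothetical instruction set (LOAD/ADD/STORE/HALT, not any real machine): program and data live side by side in one memory, the program counter steps through it sequentially, and a STORE could in principle overwrite an instruction just as easily as a data cell:

    # Toy von Neumann-style machine: a single memory holds both the program
    # (instruction tuples) and the data (plain numbers).
    memory = [
        ("LOAD", 5),      # 0: copy memory[5] into the accumulator
        ("ADD", 6),       # 1: add memory[6] to the accumulator
        ("STORE", 7),     # 2: write the accumulator back into memory[7]
        ("HALT", None),   # 3: stop
        ("HALT", None),   # 4: (unused)
        2,                # 5: data
        3,                # 6: data
        0,                # 7: the result will be stored here
    ]

    acc, pc = 0, 0                    # accumulator and program counter
    while True:
        op, arg = memory[pc]          # fetch the next instruction from memory
        pc += 1                       # step forward sequentially in time
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc         # instructions and data share one store
        elif op == "HALT":
            break

    print("result:", memory[7])       # 2 + 3 = 5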
System Design:
Sequential Machine Architecture
A fundamental type of computing device is called a “Combinational” Machine. Basically, such a machine can only be programmed to output one permutation of its inputs per machine cycle: a set of bits or bytes as input will generate a set of bits or bytes of the same “length” in the output, just reordered as per the current instruction. This was essentially the first incarnation of a code-cracking machine at Bletchley Park during World War 2 (though the work began earlier, in Poland, with the Bomba in 1938). It could generate a full set of permutations of a coded message (in German), one per “instruction” (i.e. per permutation of symbols), detecting and removing at the same time a lot of extraneous, a priori ‘impossible’ keys. Effort had to be made to reverse-engineer the German encoding machines for the various communication media of the time, by analysing their behaviour from the outside as ‘Black Boxes’, in an electro-mechanical strategic war of encryption and decryption.
Coding by the Germans basically relied on a scrambling of the letters of a message according to some key, and the key had to be known to decode the transmission. Initially the only way the Allies had of unscrambling these codes was to work progressively through each permutation of the possibilities (the various mappings of the full set of symbols in German coded messages onto itself) until the output from the “Combinational” Machine gave a message which made sense in German. The key for the period was then recognised from the hard evidence of a single decoded message which could be read and which made sense in German: the key was simply given by the ‘instruction’ being executed at the time the readable message was output.
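A toy sketch of that brute-force search (a five-letter alphabet and a simple substitution, nothing like the real Enigma): every permutation of the alphabet is tried as the ‘key’ until the decoded output matches a word we expect to see (a ‘crib’):

    from itertools import permutations

    ALPHABET = "ABCDE"
    ciphertext = "DAECDA"    # the crib below, scrambled with an unknown key
    crib = "DECADE"          # a word we expect to read if the guessed key is right

    def decode(text, key):
        """Undo a substitution key, where key[i] is the cipher letter for ALPHABET[i]."""
        table = {cipher: plain for plain, cipher in zip(ALPHABET, key)}
        return "".join(table[ch] for ch in text)

    # Work through every permutation, much as the Bombe worked through settings,
    # stopping at the first "instruction" (key) that yields a readable message.
    for key in permutations(ALPHABET):
        if decode(ciphertext, key) == crib:
            print("key found:", key)   # ('C', 'B', 'E', 'D', 'A')
            break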
Working through these permutations of the letters in a message took a long time and was labour-intensive, because in its first incarnation (as Turing’s Bombe in 1941) the machine was “programmed” by flicking switches and inserting cables via plugs, by hand, to generate the different combinations, to say nothing of the multitude of human operators and translators (around 10,000 staff at Bletchley Park by war’s end) needed to recognise the successfully decoded messages.
After a history of tens of thousands of years of manual and mechanical computing, the essence of the work of taking a “Turing Machine” from theory into practice was embodied in the next generation: the British GPO’s Colossus in 1943, and ENIAC, completed in 1945 and built by J. Presper Eckert and John W. Mauchly. The design was improved upon by von Neumann, Eckert and Mauchly in EDVAC, which demonstrated the method by which a machine could (by the end of World War 2) store its own set of instructions electrically and step forward sequentially in time from one instruction to the next (while still allowing jumps to different positions in the instruction ‘stack’ on the fly), and, amazingly at the time, load its own “next” instruction from storage (memory) into a register and execute it.
The “inputs” here can be taken as those from the outside world plus the machine’s own program instructions. From a purely machine-level point of view, the outputs are a re-combination of these inputs, but with output-to-input feedforward included, such that outputs from previous cycles can be fed forward alongside future inputs to influence the next cycle, just as previous output influences the current instruction cycle. The result is a sort of Combinational Machine that can step forward in time, with filterable feedforward of outputs into future steps alongside the other inputs (especially the next instruction): a “Programmable Sequential Machine”.
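A hedged sketch of the distinction (invented example): a combinational step is a pure function of its current inputs, while a sequential machine also feeds its previous output/state forward into every later step:

    # Combinational: the output depends only on the inputs of this cycle.
    def combinational(a, b):
        return a ^ b                          # a single XOR "re-combination"

    # Sequential: a stored state is fed forward into each following cycle,
    # so the same input can produce different outputs at different times.
    def sequential(inputs):
        state, outputs = 0, []
        for x in inputs:
            state = combinational(state, x)   # previous output shapes this cycle
            outputs.append(state)
        return outputs

    print(sequential([1, 0, 1, 1]))           # [1, 1, 0, 1] - a running parity bit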
From Hardware to Software:
Although ENIAC was a revolutionary machine, as one of the “girls” herself observed, “it was like the situation with Jet airplanes. The new machine was one thing as Hardware, but was nothing without newly developing pilots to fly it.”
The “pilots” referred to were a group of six elite women, the so-called “computers”. It was these women who, in a sink-or-swim effort towards the end of the war, developed the first ever connections and processes necessary to actually solve computable problems on ENIAC. They were the first programmers. The first ‘sort’ algorithm, for example (and so much more), was developed by them without any tools besides screwdrivers and the like, and certainly no software tools, for “software” did not exist. They used logic diagrams and machine specifications, set switches and developed everything else themselves, including process specifications, such that the engineers found they could leave the “debugging” of ENIAC to the “girls”. They produced the solution for a ballistics trajectory, involving a complete, specific differential-equation solution, which ENIAC ran in only seconds; with a desk calculator of the time it would have taken some 40 hours of human labour, and been prone to errors. Although it took time to develop the program, once developed it could solve for any particular ballistics trajectory simply by changing the initial and boundary conditions (target range, atmospheric conditions, drag characteristics, missile tare weight, etc). The logic diagrams they were given were based on a ‘finite difference’ method. This made it possible to digitise (or ‘discretise’) the otherwise naturally continuous differential equation, so that a computer could closely approximate the solution for the trajectory, in time and space, including the impulse required and the fuel load, using only logical and arithmetic operations on a fine ‘mesh’ network of space and time co-ordinates between source and target.
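A minimal sketch of the finite-difference idea (with made-up constants, nothing like the real firing-table calculations): the continuous equations for a drag-affected trajectory are stepped forward on a discrete time ‘mesh’ using only arithmetic, exactly the kind of operation a digital machine can do:

    # Projectile with a simple speed-dependent drag, advanced in small discrete
    # time steps; dt is the "mesh" spacing of the discretisation.
    g, k, dt = 9.81, 0.001, 0.01        # gravity, illustrative drag factor, step
    x, y = 0.0, 0.0                     # position (m)
    vx, vy = 300.0, 300.0               # initial velocity components (m/s)

    while y >= 0.0:
        speed = (vx * vx + vy * vy) ** 0.5
        ax = -k * speed * vx            # drag opposes the motion
        ay = -g - k * speed * vy        # gravity plus drag
        vx, vy = vx + ax * dt, vy + ay * dt   # finite-difference velocity update
        x, y = x + vx * dt, y + vy * dt       # finite-difference position update

    print("approximate range:", round(x), "m")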
For some time in computing there was no such thing as an Operating System. A program was a single process which entirely controlled (and had to control) everything necessary to get consistent outputs from the entire machine. A lot of behaviour was “hard-wired” into a computer, but everything not hard-wired had to be handled explicitly in the single-process machine code of the computer program (imagined as 1s and 0s in storage addresses, just like the “data” dealt with by that machine code).
Eventually (first in 1956, with General Motors as a customer running IBM hardware), and increasingly thereafter, there was a need to bind together all the memory and storage elements, and all the electronic, electromagnetic, electro-mechanical, optical and audio elements of a system, so that these elements could work properly together, taking the housekeeping duties away from users’ programs and allowing humans to make better use of computers. This is the role of an Operating System, at the most basic level. At this juncture the distinction between Systems Programmers and Applications Programmers was created. Although the computer as a device relied historically on the development of the concepts around a Central Processing Unit based on the requirements of Sequential Architecture, there are many “peripheral” devices that need to be included, to which some repetitive or separable activities can be delegated, e.g. in a “driver”: for example a keyboard, the modern mouse, speakers, a screen, random access memory, a hard disk drive controller, a graphics processor, a sound processor, serial ports, etc. The Operating System “glues” these components together and presents an interface or interfaces to the outside world. It also generates and runs processes on behalf of users and tracks them all. In Unix and similar systems, processes can be bound to memory, devices, ports, pipes, queues, semaphores and sockets, for example.
On top of the operating system are further ‘layers’ of software which, together with the operating system, cover everything a good housekeeping system has to do in a computer, from keeping track of the internal processes which are running, and communicating with the outside world safely, to performing useful computations and communicating with the user. It is worth noting that Sequential Machines (computers) can only perform one complete instruction cycle at a time per processor core, albeit very quickly, allowing rapid “interleaving” of multiple processes. Nevertheless, each instruction brings the machine from one single state to the next single state, only giving the impression of total continuity (usually!). The operating system swaps between executing different processes so that all active processes are covered. This behaviour is called multi-tasking. Linux, MacOSX, Windows, iOS and Android are operating systems. At any one time a processor is either in one state or transitioning from its previous state to its next state. This is the situation that hardware and their resident operating systems deal with continually, at ferocious speed.
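A toy sketch of that interleaving, using Python generators as stand-ins for processes: a tiny round-robin ‘scheduler’ advances each task one step at a time on a single core, giving the appearance that all of them run at once:

    def task(name, steps):
        """A pretend process that yields control back after every step of work."""
        for i in range(steps):
            print(name, "step", i)
            yield                       # hand the (single) processor back

    def round_robin(tasks):
        """A minimal scheduler: run one slice of each runnable task per pass."""
        while tasks:
            current = tasks.pop(0)
            try:
                next(current)           # execute one time-slice of this process
                tasks.append(current)   # still runnable: back of the queue
            except StopIteration:
                pass                    # the process has finished

    round_robin([task("editor", 2), task("compiler", 3)])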
From Big to Small:
Austrian-Hungarian physicist Julius Edgar Lilienfeld first filed a patent on a solid-state device in 1925 which, although it could not be built at the time, has since been shown to be a working design for an electronic amplification device. Apparently independently of this work, in 1947 a team under William Shockley at Bell Labs developed the first of two types of Transistor (the Point-Contact Transistor, followed by the Bipolar Junction Transistor in 1948). The ubiquitous MOSFETs (Metal Oxide Semiconductor Field Effect Transistors), which have since become the most common electronic device in the world, were invented by Atalla and Kahng at Bell Labs, who fabricated the device in November 1959. These MOSFETs became the basis for “chips” of logic-gate systems, doing for later generations of computers what valves had done for the first generation. Even a single transistor (before Integrated Circuits) was already much smaller and consumed less power than a valve.
With the processes discovered for embedding connected parts (micro-transistors) into silicon wafers, and the relentless miniaturisation of those parts, the development of minicomputers, followed by the hobby microcomputers of the 1970s (with “Computers on a Chip” and sometimes even audio cassette tapes acting as memory storage), had begun.
“ The personal computer industry truly began in 1977, with the introduction of three preassembled mass-produced personal computers: Apple Computer, Inc.’s (now Apple Inc.) Apple II, the Tandy Radio Shack TRS-80, and the Commodore Business Machines Personal Electronic Transactor (PET). “
“ This was made possible by major advances in semiconductor technology. In 1959, the silicon integrated circuit (IC) chip was developed by Robert Noyce at Fairchild Semiconductor,[11] and the metal-oxide-semiconductor (MOS) transistor was developed by Mohamed Atalla and Dawon Kahng at Bell Labs.[12] The MOS integrated circuit was commercialized by RCA in 1964,[13] and then the silicon-gate MOS integrated circuit was developed by Federico Faggin at Fairchild in 1968.[14] Faggin later used silicon-gate MOS technology to develop the first single-chip microprocessor, the Intel 4004, in 1971.[15] The first microcomputers, based on microprocessors, were developed during the early 1970s. Widespread commercial availability of microprocessors, from the mid-1970s onwards, made computers cheap enough for small businesses and individuals to own.
In what was later to be called the Mother of All Demos, SRI researcher Douglas Engelbart in 1968 gave a preview of features that would later become staples of personal computers: e-mail, hypertext, word processing, video conferencing, and the mouse. The demonstration required technical support staff and a mainframe time-sharing computer that were far too costly for individual business use at the time.
Early personal computers—generally called microcomputers—were often sold in a kit form and in limited volumes, and were of interest mostly to hobbyists and technicians. Minimal programming was done with toggle switches to enter instructions, and output was provided by front panel lamps. Practical use required adding peripherals such as keyboards, computer displays, disk drives, and printers.
Micral N was the earliest commercial, non-kit microcomputer based on a microprocessor, the Intel 8008. It was built starting in 1972, and a few hundred units were sold. This had been preceded by the Datapoint 2200 in 1970, for which the Intel 8008 had been commissioned, though not accepted for use. The CPU design implemented in the Datapoint 2200 became the basis for x86 architecture[16] used in the original IBM PC and its descendants. “[17] (Wikipedia)
Network as Computer
More recently, with the development of the World Wide Web in 1990 (Tim Berners-Lee), has come the idea that “The Network IS the Computer” (seen, for example, in Java, JavaScript, the ‘Elastos’ System and other Blockchains), recognising the modern state of connectivity and its limitations (and opportunities): it is far slower and less safe to communicate between devices than within one, though the access to external resources more than makes up for this. It is within this market that we operate as SaaS Providers concerned with Networked High Level (Application) Software as opposed to System Level development. ‘SaaS’: Software as a Service, a method of software delivery and licensing in which software is accessed online via a subscription, although parts of SaaS systems may be installed on customers’ devices.
Another Type of Computing
Analogue Computing:
There exists, in Engineering, an alternative to digital computing which is used to predict and model the behaviour of machines or systems in advance of production, i.e. in the design phases. By assembling fluid-flow components (pneumatic or hydraulic) such as pumps, compressors, pipes, valves (often solenoid-operated, triggered by flow of the working fluid, air or hydraulic fluid), venturis, accumulator vessels, and so-called “fluid logic” circuits housed as separate but connected units much like integrated circuits (though much simpler), it is possible to mirror the behaviour of designed systems and to adjust the parameters of the design to investigate stability issues and overall operation. Such systems yield continuous analogues (in fluid-mechanical quantities such as flow rate, pressure, temperature, volume, density, viscosity and frequency) of the design variables, as opposed to the discrete and digitised characters and numbers of a digital sequential machine’s (computer’s) output.
All the above fluid-based analogue work can also be done using electrical or mechanical quantities and components.
In electronic analogue computers, the typical electrical quantities of circuits and components (voltage, current, resistance, capacitance and inductance) are processed in structured assemblages/circuits of Operational Amplifiers to give a representation of (for example) the Transfer Function of a machine, in terms of its Laplace Transforms, determined from the inter-operation of the modelling components as analogues of the system under design. The higher frequencies allowed by the smaller sizes and masses of the components in an electrical analogue computer often mean that it is possible to solve problems much more quickly than with other forms of analogue computer or with a digital sequential computer: long-running solutions on a hydraulic or pneumatic system can be run in much shorter timespans on an electrical analogue computer.
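The flavour of what such an op-amp assemblage computes can be mimicked digitally; here is a hedged sketch (made-up constants) of a first-order system with transfer function G(s) = 1/(τs + 1), roughly the behaviour of a lossy integrator stage, responding to a unit step input:

    # Step response of the first-order lag G(s) = 1/(tau*s + 1), i.e. the
    # differential equation tau*dy/dt + y = u, stepped forward in small increments.
    tau, dt, u = 1.0, 0.01, 1.0         # time constant, step size, unit step input
    y = 0.0
    for step in range(1, 501):          # simulate five time constants
        y += dt * (u - y) / tau         # discrete stand-in for the integrator
        if step % 100 == 0:
            print("t =", round(step * dt, 2), " y =", round(y, 3))
    # y rises exponentially towards 1, as the analogue circuit's output voltage would.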
The essence of “analogue” computing is in the continuous flow of quantities measured with engineering instruments to output results on recorders. As you will see in this reference, the range and history of Analogue Computing are extensive. Analogue Computers.
Virtual Computing:
Technically, Virtualisation was pioneered by IBM and General Electric separately in the early 1960s. Unix was an early example of a certain degree of multi-user virtualisation, in that each user has their own process and configuration spaces, enabling system resources to be shared more efficiently. This is the basic idea of all Virtualisation: to share as much of a system’s resources as possible while isolating users in their own “sandboxes”. It is this virtualisation that provides the foundations for developments such as the Amazon Elastic Compute Cloud, and now all other computing “Clouds”, including desktop virtual machines, such as ITOTCCA uses with Multipass/Microk8s and, more recently, MicroStack/Microk8s on top of the Ubuntu Linux Operating System. The MicroStack package provides a desktop-level development cloud suitable for a (powerful) desktop workstation. Virtualisation. One of the core concepts in modern Virtual Computing is that of the Software Container. Software Containers.
Enter Quantum Computing:
And we did, in 2019, finally arrive at the commercial birth of a (once again) entirely different type of computing opened up by Quantum Computers.
The usual 2-state system in normal computer registers or memory locations, where each bit is represented by “1” or “0”, is augmented in Quantum Computing by “qubits”, which can exist in superpositions of both states at once. Very fast computations may be performed, making formerly intractable problems solvable, but only for certain classes of computational situations.
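A hedged sketch of that augmentation (a state-vector simulation, not real quantum hardware): a classical bit is either 0 or 1, whereas a qubit’s state is a pair of complex amplitudes; applying a Hadamard gate to a qubit that starts as |0⟩ puts it into an equal superposition, so a measurement gives 0 or 1 with probability 1/2 each:

    import math

    # A qubit state is a pair of complex amplitudes (a, b) for |0> and |1>,
    # with |a|^2 + |b|^2 = 1; measurement yields 0 with probability |a|^2.
    ket0 = (1 + 0j, 0 + 0j)

    def hadamard(state):
        """Apply the Hadamard gate H = (1/sqrt 2) * [[1, 1], [1, -1]]."""
        a, b = state
        s = 1 / math.sqrt(2)
        return (s * (a + b), s * (a - b))

    superposed = hadamard(ket0)
    print("amplitudes:", superposed)                                # ~0.707 each
    print("probabilities:", [abs(amp) ** 2 for amp in superposed])  # [0.5, 0.5]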
So, 70 years after the first IBM Sequential Architecture Mainframe offering in 1949, came the unveiling of IBM’s – and the world’s – first commercial Quantum Computer offering in 2019.
The Future, Artificial Intelligence & Quantum Computing:
There is much consternation from all quarters about the threats posed to humanity by AI. The only thing we can observe with certainty is that in any programming project the Governance of Coding is crucial. It will be even more crucial with AI, and with the laws, checks and balances that we will need to bring to bear on machines which will eventually outsmart us in raw intelligence. That does not have to mean they will outsmart us strategically, or from the point of view of a power advantage, unless we allow it. We should be alert to the dangers of potential or existing “centres of power” in control of AI systems, which increases our responsibilities in Oversight and Governance. We need to be planning ahead .. since AI already drives weapons, and Machines are currently making decisions about whether and when to end people’s lives (e.g. employing Drone Technology). Then consider the unknown capacities opened up by linking Quantum Computers with AI Systems, in terms of the “services” a quantum computer might be able to offer a related AI system.