20180306

The New Turing Omnibus by A. K. Dewdney


  • Our lives (whether we work with computers or not) are full of algorithms.
  • A finite automaton is said to accept any sequence of symbols which places it in its final state. The set of all such sequences may be regarded as words in the language of that automaton.
  • The language accepted by a finite automaton can always be written as a regular expression.
  • Finite automata are the simplest kinds of computational model widely studied.
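The acceptance mechanism described above can be sketched as a tiny machine. The automaton here is my own illustrative choice: it accepts binary strings with an even number of 1s, a language given by the regular expression (0*10*1)*0*.

```python
# A finite automaton over {0, 1} accepting strings with an even number of 1s.
# Its language corresponds to the regular expression (0*10*1)*0*.
def accepts(word):
    state = "even"                      # start state, also the final state
    transitions = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }
    for symbol in word:
        state = transitions[(state, symbol)]
    return state == "even"              # accept iff we halt in the final state
```

The set of all strings for which `accepts` returns `True` is exactly the language of this automaton.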
  • Any Boolean function can be defined by means of a truth table, which is merely a convenient way of listing the values of the function for each possible combination of values of its variables.
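As a sketch, such a truth table can be listed by machine; the function f(x, y, z) = (x AND y) OR (NOT z) here is my own illustrative example.

```python
from itertools import product

# An illustrative Boolean function of three variables.
def f(x, y, z):
    return (x and y) or (not z)

# List the function's value for every combination of its variables.
for x, y, z in product([False, True], repeat=3):
    print(int(x), int(y), int(z), "->", int(f(x, y, z)))
```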
  • Of course, every simulation, whether scientifically or industrially inspired, must have a goal, and the goal determines what aspects of the system being modeled are relevant.
  • One of the advantages of computer simulation is that experiments that might take days or even months of real time on actual systems can be carried out by machine in seconds, minutes, or (in the worst case) hours of real time. This is because the flow of time is also simulated--in most cases by a simulation clock, a real variable which keeps track of time.
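The simulation clock described above can be sketched with a priority queue of pending events: instead of waiting in real time, the clock jumps straight to the next scheduled event. The event names and times here are my own illustration.

```python
import heapq

# Pending events, ordered by their scheduled simulation time.
events = []
heapq.heappush(events, (2.5, "arrival"))
heapq.heappush(events, (0.7, "start"))
heapq.heappush(events, (9.0, "departure"))

clock = 0.0
log = []
while events:
    clock, name = heapq.heappop(events)   # advance the simulation clock
    log.append((clock, name))

print(log)   # events processed in time order, however long the simulated gaps
```

Hours of simulated time cost only as much real time as it takes to process the events themselves.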
  • The heart of the Monte Carlo method is the generation of new events by means of the inverse-function technique. Virtually any distribution, whether theoretical in the form of a computable function or empirical in the form of a cumulative histogram, can be used.
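A minimal sketch of the inverse-function technique, using the exponential distribution as my own illustrative choice: inverting its cumulative distribution F(x) = 1 − e^(−λx) gives x = −ln(1 − u)/λ for a uniform u in [0, 1).

```python
import math
import random

# Draw from an exponential distribution with rate lam by inverting its
# cumulative distribution function.
def exponential(lam, rng=random.random):
    u = rng()                        # uniform sample in [0, 1)
    return -math.log(1.0 - u) / lam  # inverse of F(x) = 1 - e^(-lam * x)

random.seed(42)
samples = [exponential(2.0) for _ in range(10000)]
print(sum(samples) / len(samples))   # should lie near the true mean 1/lam = 0.5
```

An empirical cumulative histogram can be inverted the same way, by table lookup instead of a formula.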
  • His [Gödel] now-famous theorem states that in any sound, consistent, formal system containing arithmetic there are true statements that cannot be proved--statements the truth of which we may know by other means but not by any formal, step-by-step decision process.
  • A standard approach to programming a computer to play a given two-person game is to use a "game tree" in which each node represents a possible position in the game and each branch represents a possible move.
  • Linear random number generator:
    • To start the whole process, an initial number, called the seed, is chosen.
    • The current random number N is multiplied by a multiplier K, and an offset C is added to the product.
    • The resulting number is taken modulo M, the modulus.
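The three steps above can be sketched directly. The constants here are not the book's but the well-known "minimal standard" choices (K = 16807, C = 0, M = 2^31 − 1).

```python
# A linear congruential random number generator.
def lcg(seed, K=16807, C=0, M=2**31 - 1):
    n = seed                    # the seed starts the whole process
    while True:
        n = (K * n + C) % M     # multiply by K, add offset C, reduce modulo M
        yield n

gen = lcg(1)
print([next(gen) for _ in range(3)])   # -> [16807, 282475249, 1622650073]
```

The sequence eventually cycles; a good choice of K, C, and M makes the cycle as long as possible.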
  • Since a complex number is actually a pair of real numbers, it represents a point on a plane called, not surprisingly, the complex plane. Not just complex numbers with integer parts but complex numbers with fractional or even irrational parts populate this plane and fill it completely.
  • Program correctness is an issue of increasing importance in a world increasingly reliant on the computer. For some application programs it is not enough to be "sure" that they run correctly; one must know it.
  • One of the most useful data structures ever invented is the tree.
  • The number of places in which two binary words or vectors differ is called the Hamming distance between them. A set of codewords which are all mutually at Hamming distance 'd' or more enables users to detect and correct [errors].
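A minimal sketch of the distance itself; the two words are my own example. A code whose words are mutually at distance d can detect d − 1 bit errors and correct ⌊(d − 1)/2⌋ of them.

```python
# The Hamming distance: the number of positions at which two
# equal-length binary words differ.
def hamming(a, b):
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming("10110", "11100"))   # -> 2
```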
  • At the heart of every computer there is a logic control unit which orchestrates the transfer and manipulation of information between and within thousands of sites where information is stored. Essentially, this unit is an electronic circuit with dozens of input and output lines. It passes incoming information through a network of logic gates, and the particular configuration of gates used determines precisely the function which the logic control unit will perform.
  • The 'pumping lemma' for regular languages tells us that if we select a sufficiently long word from a regular language and "pump" it--as many times as we like--we always get a new word in that language.
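A sketch of pumping in action, with (ab)* as my own example language: splitting the word "abab" as xyz with a nonempty middle section y, every pumped word x y^i z stays in the language.

```python
import re

# The regular language (ab)*.
language = re.compile(r"(ab)*")

x, y, z = "ab", "ab", ""          # split the word "abab" as x y z
for i in range(5):
    pumped = x + y * i + z        # pump the middle section i times
    assert language.fullmatch(pumped) is not None

print("all pumped words accepted")
```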
  • When a program is run on a computer, two of the most important considerations are how long it will take and how much memory it will use.
  • The worst-case time complexity of an algorithm operating on an input of size N is simply the maximum time that the algorithm will spend on any input instance of size N.
  • The use of a number to be stored as the basis for a computation of its storage address underlies the technique known as hashing.
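A minimal sketch of that idea, with a table size and probing rule of my own choosing: the number modulo the table size gives its storage address, and linear probing resolves collisions.

```python
TABLE_SIZE = 11
table = [None] * TABLE_SIZE

# Store n at the address computed from n itself.
def store(n):
    slot = n % TABLE_SIZE                 # the number computes its own address
    while table[slot] is not None:        # collision: probe the next slot
        slot = (slot + 1) % TABLE_SIZE
    table[slot] = n
    return slot

print(store(22), store(33), store(44))   # all hash to slot 0, then probe onward
```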
  • An algorithm with quadratic memory requirements will generally exceed the storage capacity of a computer much sooner than an algorithm with linear space complexity.
  • Studies of the time and space complexity of algorithms tend to concentrate on the question of how fast a given problem can be solved algorithmically. Unless storage requirements are exorbitant, the main question about a given algorithm's efficiency is usually, Can we find a faster algorithm to solve the same problem?
  • If organisms as complicated as plants and animals can evolve in a primordial soup, why can't solutions to a problem evolve in a computer's memory? The short answer is "They can". At least they can if they're given enough time: not eons, but anywhere from seconds to hours.
  • The theory of natural selection states that variability in a natural population (due to mutations and new, sexually produced gene combinations) helps to ensure the population's survival and fitness. When the environment changes, variability in the population may mean that some individuals are better adapted to the new conditions than others. Such individuals form the basis for a new more fit population.
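The two quotes above can be illustrated by a toy evolutionary run (all parameters are my own): a population of bit strings evolves toward all 1s, with fitness measured as the number of 1s, the fitter half surviving each generation, and mutated copies refilling the population.

```python
import random

random.seed(0)
LENGTH, POP = 16, 20

# Mutation: flip one randomly chosen bit.
def mutate(bits):
    i = random.randrange(LENGTH)
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

# A random initial population of bit strings.
population = [tuple(random.randint(0, 1) for _ in range(LENGTH))
              for _ in range(POP)]

for generation in range(200):
    population.sort(key=sum, reverse=True)     # fitter individuals first
    survivors = population[:POP // 2]          # selection keeps the fitter half
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP - POP // 2)]
    if sum(population[0]) == LENGTH:           # a perfectly fit individual
        break

print(generation, max(sum(ind) for ind in population))
```

Because the best individual always survives, fitness never decreases; variability from mutation supplies the improvements selection can act on.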
  • In computer graphics, data analysis, and many other applications, one wants a "natural" curve to connect a number of points. [This is a spline curve.]
  • [A] Polyhedral scene, that is, an assembly of solids each of which is bounded by plane faces.
  • The problem of minimizing logical circuitry is called Boolean minimization, and one of the best techniques for solving small instances of this problem is the Karnaugh map. Such maps provide easily inspected, two-dimensional visualizations of Boolean functions and lead directly to a short formula for the function; the shorter the formula, the simpler the circuit.
  • The most efficient known algorithm for the minimum spanning tree problem happens to be a greedy algorithm. Such algorithms solve optimization problems by optimizing at each step.
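A sketch of one such greedy algorithm, Kruskal's: at each step take the cheapest edge that does not close a cycle. The four-vertex graph here is my own example.

```python
# Kruskal's greedy minimum spanning tree algorithm.
def kruskal(n, edges):
    parent = list(range(n))                 # union-find over the vertices
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    tree = []
    for w, u, v in sorted(edges):           # greedily try edges, cheapest first
        ru, rv = find(u), find(v)
        if ru != rv:                        # the edge joins two components
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]
mst = kruskal(4, edges)
print(sum(w for w, _, _ in mst))   # -> 6, the total weight of the tree
```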
  • The subjects of recursion and fractals go well together. Recursion is the invocation of a computation inside an identical computation that is already in progress. Fractals are shapes that occur inside other, similar shapes.
  • Mathematically speaking, a recursive function is one that uses itself in its own definition. Programming languages are said to be recursive if they allow procedures that call themselves.
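Both halves of the definition above can be sketched in a few lines; the fractal example (counting segments of a Cantor-like set) is my own choice.

```python
# A recursive function: factorial uses itself in its own definition.
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

# A recursive fractal count: at each level of a Cantor-like construction,
# every segment splits into two similar segments one level down.
def cantor_segments(depth):
    return 1 if depth == 0 else 2 * cantor_segments(depth - 1)

print(factorial(5), cantor_segments(4))   # -> 120 16
```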
  • A perceptron is a special sort of neural net which examines and classifies visual patterns presented to it on a grid-like retina. It does this by weighing evidence submitted to it by a number of component devices, each of which looks only at a specified portion of the retina.
  • There is a class of logic circuits concerned mainly with the manipulation of information flow within computers. This class includes encoders, multiplexers, and related devices.
  • We may think of a computer's memory as consisting of thousands of registers, each register (or "word") consisting of many bits.
  • Encoders and decoders are opposites, in a sense. An encoder converts information arriving on 2^n input lines to n output lines, while the decoder does the reverse.
  • Multiplexers and demultiplexers are essentially switches which determine, for each state of a computer, which route data will travel between various components.
  • A demultiplexer or switch has a single input line and, under the direction of its control lines, selects one of its several output lines for transmission of the signal.
  • With encoders and decoders, multiplexers, and demultiplexers, virtually all the communication of information from one part of a computer to another can be managed.
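Two of these devices can be sketched as Boolean functions; the sizes (a 2-to-4 decoder and a 4-to-1 multiplexer) are my own illustrative choices.

```python
# A 2-to-4 decoder: two input lines select exactly one of 2**2 = 4 output lines.
def decoder(a, b):
    return [int((a, b) == (i // 2, i % 2)) for i in range(4)]

# A 4-to-1 multiplexer: the control lines a, b route one of the four
# input lines to the single output.
def multiplexer(inputs, a, b):
    return inputs[2 * a + b]

print(decoder(1, 0), multiplexer([7, 8, 9, 10], 1, 0))   # -> [0, 0, 1, 0] 9
```

A demultiplexer reverses the multiplexer, fanning one input out to a selected output line.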
  • Turing machines are the simplest and most widely used theoretical models of computing. Far too slow and unwieldy ever to be embodied in a real device, these conceptual machines nevertheless seem to capture everything we mean by the term computation. Not only do Turing machines occupy the top level of the Chomsky hierarchy, but also they seem capable of computing any function which is computable by any other conceptual scheme. Turing machines, moreover, are simpler than such schemes--from general recursive functions to random access machines.
  • When one is confronted with a computation problem involving a certain mathematical object X, it is sometimes possible to transform X to another object and to solve a (related) computational problem on the newly transformed object.
  • Individual neurons operate, like their biological counterparts, by a process of summation. Each neuron adds together all the signals that it receives, then transmits the sum to all neurons in the next layer. If it happens to reside in one of the medial layers, it will modify the sum of its signals in a special way before transmitting it.
  • A special, sigmoidal function squeezes the signal non-linearly into the interval from -1 to +1: The larger the sum is, the more closely it will approach +1 or -1, depending on its sign.
  • Sigmoidal functions enable neural networks to respond nonlinearly to their environments.
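The squeezing step can be sketched with the hyperbolic tangent, one common sigmoidal function (the book's exact function may differ): it maps any sum into the interval (−1, +1), approaching the endpoints as the sum grows.

```python
import math

# Squash a neuron's summed input nonlinearly into (-1, +1).
def squash(total):
    return math.tanh(total)

for s in (-10.0, -1.0, 0.0, 1.0, 10.0):
    print(round(squash(s), 4))   # large sums approach -1 or +1 by sign
```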
  • In the RSA cryptosystem, two integers e and n are given as public keys. If a message m is converted to an integer less than n, then m may be encrypted according to the formula c = m^e mod n.
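A toy sketch of the system with deliberately tiny numbers of my own choosing (p = 61, q = 53, so n = 3233, e = 17, and matching private exponent d = 2753); real keys are hundreds of digits long.

```python
n, e, d = 3233, 17, 2753   # toy public keys (n, e) and private exponent d

def encrypt(m):
    return pow(m, e, n)    # c = m^e mod n

def decrypt(c):
    return pow(c, d, n)    # m = c^d mod n

message = 65
cipher = encrypt(message)
print(cipher, decrypt(cipher))   # decryption recovers 65
```

The private exponent d is chosen so that e·d ≡ 1 modulo (p − 1)(q − 1), which is what makes decryption undo encryption.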
  • In the field of logic design, there is a fundamental distinction between combinational circuits and sequential circuits. The former can be represented by a simple black box with input lines on one side and output lines on the other. Inside the black box is a logic circuit composed of gates; every logical path in the circuit leads from an input to an output, and there are no circular logical paths. The latter kind of circuit contains two-state memory elements which can store or "remember" that state over a period of time.
  • In a combinational circuit a given input pattern will always give rise to the same output, but in a sequential circuit this is no longer true: the current content of memory forms an additional input to the logic section, and the output may change from one occasion to the next, even with the same input.
  • Binary numbers form the basis of computer arithmetic because the elements of memory and logic all have two states that can be identified as 0 and 1.
  • The [execution] cycle has two parts. They are called fetch and execute. The fetch cycle gets the next executable instruction in the program currently running and loads it into the instruction register. This cycle itself is written as a special sequence of elementary machine operations called a micro-program. Each line of the micro-program is called a micro-operation and is written in a special replacement notation called register-transfer language.
  • Any message consisting of words can be encoded as a sequence of 0s and 1s. The latter symbols can be transmitted by wire, radio waves, or a variety of other means. In all cases the message is liable to corruption. A 0 can inadvertently be changed to a 1, or vice versa. Whatever the actual source of interference or "noise", as information theorists call it, the tendency of a bit to change can be laid at the doorstep of a creature I call the noise demon. With a certain probability, say p, the noise demon alters each bit being transmitted.
  • One way to fool the demon is to transmit three 0s for each 0 intended. Also, replace a 1 by three 1s.
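The triple-repetition trick above can be sketched directly: each bit is sent three times, and the receiver takes a majority vote over each triple, defeating any single flip the noise demon manages per triple.

```python
from collections import Counter

# Encode: send each bit three times.
def encode(bits):
    return [b for b in bits for _ in range(3)]

# Decode: majority vote within each triple of received bits.
def decode(received):
    triples = [received[i:i + 3] for i in range(0, len(received), 3)]
    return [Counter(t).most_common(1)[0][0] for t in triples]

sent = encode([1, 0, 1])            # [1,1,1, 0,0,0, 1,1,1]
sent[4] = 1                         # the noise demon flips one bit
print(decode(sent))                 # majority vote still recovers [1, 0, 1]
```

The price is threefold transmission; more sophisticated codes buy the same protection with fewer extra bits.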
  • No one has yet devised an algorithm for deciding in polynomial time whether a number is prime.
  • A reasonable stopping rule [for primality testing] has been suggested; quite moderate values of k are sufficient to guarantee that the copying hardware is far more likely to fail than the algorithm!
  • The two major applications of coding techniques are protection and compression. When a message is encoded as a string of 0s and 1s, there are techniques by which the insertion of extra bits in the string to be transmitted enables the receiver of the message to discover which 0s or 1s (if any) are in error. In addition, there are methods for shortening the string so that not as many bits need to be transmitted.
  • The two major spheres of application in coding theory are time and space. Time applications are the traditional ones in which a message is transmitted electronically, over time, and the concern is to protect the message from errors. Space applications involve the protection or compression of data during storage in some electronic medium such as computer memories or magnetic tapes.
  • For every system there is a way in, for every virus there is a defense.
  • Broadly defined, an expert system applies a certain degree of reasoning ability (typically by the use of logic programming) to a set of rules and a data base that comprise what might be called expert knowledge. The user of such a system may ask it questions within the domain of the program's expertise and expect useful answers.
