Dec 16, 2015
If you’ve taken Economics 101, then you have almost certainly come across the Solow growth model, which describes economic production as a function of capital accumulation and labor supply. In its most basic form, the Solow model holds that a steady-state economy experiences no growth in output per worker, and thus no growth in average income; once technology is added to the picture, however, it becomes clear why this is not the case in practice. Rather than quantifying labor supply as a raw number of workers, an augmented version of the Solow model measures labor in units of effective labor: the product of the number of workers and their efficiency, where efficiency describes how productive workers can be at any given level of capital. Growth in labor efficiency has passed through two major epochs over the last century, the second of which is still in its infancy. During the first epoch, at the dawn of the computer age, efficiency growth came in the form of automation, with engineers designing computer programs to perform the simple tasks of unskilled labor. More recently, the availability of Big Data, alongside groundbreaking A.I. technology, has brought about a whole new order of growth.
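The augmented model described above can be sketched numerically. The Cobb-Douglas functional form and every parameter value below are illustrative assumptions, not figures from the article:

```python
# A minimal sketch of the augmented Solow production function,
# Y = K^alpha * (A*L)^(1-alpha), where A*L is "effective labor":
# L workers, each with efficiency A. Values are purely illustrative.
def output(K, L, A, alpha=0.3):
    """Output as a function of capital K, workers L, and labor efficiency A."""
    return K**alpha * (A * L)**(1 - alpha)

# Holding capital and the workforce fixed, doubling labor efficiency A
# raises output even though the headcount has not changed:
y_low = output(K=100.0, L=10.0, A=1.0)
y_high = output(K=100.0, L=10.0, A=2.0)
assert y_high > y_low
```

This is the sense in which technology rescues the model from its no-growth steady state: capital and headcount can stand still while effective labor keeps rising.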
In the beginning, increasing labor efficiency was about automating simple tasks that had previously required unskilled human labor. Consider, for example, the Bombe, the electromechanical device Alan Turing designed to recover the daily cryptographic keys of the German Enigma machine. Before Turing came along, a team of manual laborers was tasked with trying out as many keys as possible each day by hand, attempting to uncover the correct one before the Germans reset their settings at midnight for the following day’s messages. Turing’s machine tested candidate settings automatically, harnessing electromechanical speed to improve upon the hand-trial method. Similarly, factory assembly lines were transformed by the invention of industrial robots: programmable machines that could perform the same assembly-line tasks as factory workers with greater precision and efficiency. The arrival of industrial robots meant that business managers could replace the variable cost of wages with an upfront, fixed capital cost.
The computer revolution brought about a drastic shift in global labor markets. Machines freed the hands of those previously occupied with simple, unskilled tasks, allowing them to move on to other work without a trade-off in labor supply. A new demand arose for workers who could design and program the machines that would augment human labor. A key feature of the computer technologies of this era was that they were rule-based systems: a team of programmers would specify a set of commands, and a machine would execute them at electronic speed. It was the job of engineers to decide upon these rules, handcrafting knowledge items that could be applied to a particular domain of interest. Computer vision engineers, for example, designed software that enabled industrial robots to make decisions based on visual feeds from cameras mounted on their arms. It fell to these engineers to determine which attributes the robots should look for in their visual data, and to build a decision model based on those attributes.
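A toy illustration of the handcrafted, rule-based approach described above; the part-inspection scenario, attribute names, and threshold values are all invented for the example:

```python
# A hypothetical handcrafted rule set of the kind the era produced:
# an engineer chooses which attributes matter (here, area and aspect
# ratio of a detected part) and hand-picks the decision thresholds.
def classify_part(area, aspect_ratio):
    if area < 50:
        return "reject"   # too small: likely debris, per the engineer's rule
    if aspect_ratio > 3.0:
        return "reject"   # too elongated: likely a defective part
    return "accept"

# The system only "knows" what was explicitly written into its rules:
result = classify_part(area=120, aspect_ratio=1.2)  # a well-formed part
```

Every threshold here encodes a piece of an engineer's domain knowledge; changing the factory's product means rewriting the rules by hand.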
After a time, technologists began to wonder: if it is possible for engineers to design systems that replace the work of unskilled laborers, then perhaps it is also possible to design systems that replace the work of engineers. The Solow model uses a variable, g, to denote the growth rate of an economy’s worker efficiency. We can say that engineers help maintain a constant, positive value of g over time, thereby enabling steady long-term growth in output per worker. Might it be possible to achieve steady growth in g itself? If so, what exactly might be required to accomplish this feat?
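The difference between a constant g and a growing g can be made concrete with a small simulation; the growth rates below are arbitrary illustrative values, not estimates:

```python
# Compare labor efficiency A over time when its growth rate g is held
# constant versus when g itself grows (the "second-order" case).
def trajectory(g0, g_growth, periods=50):
    """Return the path of labor efficiency A, starting from A = 1."""
    A, g = 1.0, g0
    path = []
    for _ in range(periods):
        A *= (1 + g)          # efficiency compounds at the current rate g
        g *= (1 + g_growth)   # g_growth = 0 reproduces the standard model
        path.append(A)
    return path

constant_g = trajectory(g0=0.02, g_growth=0.0)   # exponential growth in A
growing_g = trajectory(g0=0.02, g_growth=0.05)   # super-exponential growth
assert growing_g[-1] > constant_g[-1]
```

With a constant g, efficiency grows exponentially; if g itself grows, efficiency compounds on a compounding rate, which is the prize the question above is pointing at.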
A new paradigm for technological development has risen to prominence in the 21st century. Rather than handcrafting knowledge representations, technologists have shifted toward creating algorithms that learn, often from raw perceptual data. For example, instead of engineering visual features to be programmed into a factory robot’s knowledge base, the new approach is to design algorithms that develop such features autonomously through exposure to data. By designing these learning algorithms, we are designing systems that replace the work of engineers. Such pattern-recognition technologies lie at the intersection of computer science, neuroscience, and mathematics.
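To contrast with the handcrafted-rules approach, here is a minimal sketch of a system that fits its own decision rule from labeled examples; the one-dimensional feature, the data, and the brute-force fitting method are invented purely for illustration:

```python
# Instead of an engineer hand-picking a threshold, the system chooses
# the cutoff on a single feature that best separates labeled examples.
def learn_threshold(samples):
    """samples: list of (feature_value, is_positive) pairs."""
    best_t, best_acc = None, -1.0
    for candidate, _ in samples:
        # Accuracy of the rule "positive if feature >= candidate":
        acc = sum((x >= candidate) == label for x, label in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = candidate, acc
    return best_t

# Tiny invented dataset; the threshold is discovered, not hand-specified.
data = [(0.2, False), (0.4, False), (0.6, True), (0.9, True)]
t = learn_threshold(data)
assert all((x >= t) == label for x, label in data)
```

However toy-like, the shift in labor is the point: the engineer's effort moves from writing the rule to designing the procedure that finds it.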
In a free market, capital will continue to flow toward technological systems that improve themselves without the intervention of human labor. This second-order growth, that is, growth in the rate of efficiency growth itself, offers the highest economic return, promising an optimal usage of capital and labor resources. The race is on as researchers around the world continue to hunt for the master learning algorithm.