The productivity paradox refers to the slowdown in productivity growth in the United States in the 1970s and 80s despite rapid development in the field of information technology (IT) over the same period. As highlighted in a widely cited article by Erik Brynjolfsson, productivity growth slowed at the level of the whole U.S. economy, and often within individual sectors that had invested heavily in IT, despite dramatic advances in computer power and increasing investment in IT. Similar trends were seen in many other nations. While the computing capacity of the U.S. increased a hundredfold in the 1970s and 1980s, labor productivity growth slowed from over 3% in the 1960s to roughly 1% in the 1980s. This perceived paradox was popularized in the media by analysts such as Stephen Roach and later Paul Strassmann. The concept is sometimes referred to as the Solow computer paradox in reference to Robert Solow's 1987 quip, "You can see the computer age everywhere but in the productivity statistics." The paradox has been defined as a perceived "discrepancy between measures of investment in information technology and measures of output at the national level."
Many observers disagree that any meaningful "productivity paradox" exists, and others, while acknowledging the disconnect between IT capacity and spending, view it less as a paradox than as a series of unwarranted assumptions about the impact of technology on productivity. In the latter view, the disconnect shows the need to understand, and do a better job of deploying, the technology that becomes available, rather than an arcane paradox that is by its nature difficult to unravel. Some point to historical parallels with the steam engine and with electricity, where the dividends of a productivity-enhancing disruptive technology were reaped only slowly, after an initial lag, over the course of decades, because of the time required for the technology to diffuse into common use and for firms to reorganize around and master its efficient use. As with those earlier technologies, many early cutting-edge IT investments were over-optimistic and counterproductive. Modest IT-based gains may also have been difficult to detect amid the apparent overall slowing of productivity growth, which is generally attributed to one or more non-IT factors, such as oil shocks, increased regulation or other cultural changes, a hypothetical decrease in labor quality, a hypothetical exhaustion or slowdown in non-IT innovation, and/or a coincidence of sector-specific problems.
Academic studies of aggregate U.S. data from the 1970s and 1980s failed to find evidence that IT significantly increased overall productivity. However, the 1990s saw evidence of a delayed IT-related productivity jump, arguably resolving the original paradox. The broader questions of which measurable factors best explain the dramatic productivity ups and downs of the past two hundred years, and whether the rate of productivity growth is more likely to rise or fall in the decades ahead, remain subjects of contentious study.
Several authors have explained the paradox in different ways. In his original article, Brynjolfsson (1993) identified four categories into which the proposed explanations fall: mismeasurement of outputs and inputs; lags between cost and benefit due to learning and adjustment; redistribution and dissipation of profits, whereby IT benefits individual firms at the expense of others without raising total output; and mismanagement of information and technology.
He characterized the first two explanations as identifying "shortcomings in research, not practice as the root of the productivity paradox." He then stated that "a more pessimistic view is embodied in the other two explanations. They propose that there really are no major benefits". Brynjolfsson explores these ideas in detail and poses the paradox as an economic problem: do the benefits justify past and continued investment in information technology?
Turban et al. (2008) state that understanding the paradox requires an understanding of the concept of productivity. Pinsonneault et al. (1998) state that untangling the paradox requires an "understanding of how IT usage is related to the nature of managerial work and the context in which it is deployed".
One hypothesis to explain the productivity paradox is that computers are productive, but that their productivity gains are realized only after a lag, during which complementary capital investments must be made before computers can be used to their full potential.
The diminishing-marginal-returns hypothesis, the opposite of the time-lag hypothesis, holds that computers, in the form of mainframes, had already been used in the most productive areas, such as high-volume transactions in banking, accounting, and airline reservations, for two decades before personal computers appeared. Computers also replaced a sophisticated system of data processing based on unit record equipment. On this view, the most important productivity opportunities were exhausted before computers were everywhere, and observers were simply looking at the wrong time period.
Another hypothesis states that computers are simply not very productivity-enhancing because they require time, a scarce complementary human input. On this theory, although computers perform a variety of tasks, those tasks are not done in any particularly new or efficient manner; they are merely done faster. Current data do not confirm the validity of either hypothesis. It could well be that the productivity increases due to computers are captured not in GDP measures but in quality changes and new products.
Economists who have researched the productivity issue have concluded that the possible explanations for the paradox fall into three categories:
Other economists have made a more controversial charge against the utility of computers: that they pale into insignificance as a source of productivity advantage compared with the industrial revolution, electrification, infrastructure (canals and waterways, railroads, the highway system), Fordist mass production, and the replacement of human and animal power with machines. High productivity growth occurred from the last decades of the 19th century until 1973, with a peak from 1929 to 1973, and then declined to the levels of the early 19th century. Productivity rebounded after 2000. Much of the productivity growth from 1985 to 2000 came in the computer and related industries.
A number of explanations for this have been advanced, including:
Gordon J. Bjork points out that manufacturing productivity gains continued, although at a lower rate than in decades past; however, cost reductions in manufacturing shrank the sector's share of the economy. The services and government sectors, where productivity growth is very low, gained in share, dragging down the overall productivity number. Because government services are priced at cost with no value added, government productivity growth is near zero as an artifact of the way in which it is measured. Bjork also points out that manufacturing uses more capital per unit of output than government or services.
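Bjork's composition argument can be illustrated with a small numerical sketch. The sector shares and growth rates below are hypothetical, chosen only to show the mechanism: when aggregate productivity growth is approximated as a labor-share-weighted average of sector growth rates, a shift of labor toward slow-growth sectors lowers the aggregate even if no sector's own growth rate changes.

```python
# Hypothetical annual productivity growth rates by sector (as fractions).
# Illustrative values only, not measured data.
growth = {"manufacturing": 0.03, "services": 0.005, "government": 0.0}

def aggregate_growth(shares, growth):
    """Aggregate productivity growth as the labor-share-weighted
    average of sector growth rates (a first-order approximation)."""
    return sum(shares[sector] * growth[sector] for sector in shares)

# Earlier period: manufacturing employs a larger share of labor.
early = {"manufacturing": 0.40, "services": 0.45, "government": 0.15}
# Later period: manufacturing's share has shrunk.
late = {"manufacturing": 0.20, "services": 0.60, "government": 0.20}

print(f"early aggregate growth: {aggregate_growth(early, growth):.3f}")
print(f"late aggregate growth:  {aggregate_growth(late, growth):.3f}")
```

With these illustrative numbers, aggregate growth falls from about 1.4% to 0.9% per year purely because labor shifts toward the slow-growth sectors.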
When computers for general business applications appeared in the 1950s, a sophisticated data-processing industry already existed in the form of unit record equipment. These systems processed data on punched cards by running the cards through tabulating machines, the holes in the cards allowing electrical contacts to activate relays and solenoids that kept counts. The flow of punched cards could be arranged in various sequences to allow sophisticated data processing. Some unit record equipment was directed by a removable wired control panel, which allowed quick replacement with another wired panel.
In 1949, vacuum tube calculators were added to unit record equipment. In 1955, IBM introduced the 608, the first completely transistorized calculator, which used magnetic-core storage.
The first computers were an improvement over unit record equipment, but not by a great amount. This was partly due to the low-level software used, low performance, and the failure-prone vacuum tubes and other components. Early computers also took their input from punched cards. Most of these hardware and software shortcomings were solved by the late 1960s, but punched cards were not fully displaced until the 1980s.
Computers did not revolutionize manufacturing because automation, in the form of control systems, had already existed for decades, although computers did allow more sophisticated control, which led to improved product quality and process optimization. Pre-computer control is known as analog control; computerized control is called digital control.
Credit card transactions now represent a large percentage of the low-value transactions on which credit card companies charge merchants fees. Most such transactions reflect habit rather than an actual need for credit; to the extent that they represent convenience or a failure by consumers to plan to carry cash, they add a layer of unnecessary expense. However, debit or check card transactions are cheaper than processing paper checks.
Despite high expectations for online retail sales, the costs of handling and transporting individual items and small quantities may offset the savings of not having to maintain "bricks and mortar" stores. Online retail sales have proven successful for specialty items, collectibles, and higher-priced goods. Some airline and hotel retailers and aggregators have also seen great success.
Online commerce has been extremely successful in banking, airline, hotel, and rental car reservations, to name a few areas.
The personal computer restructured the office by reducing secretarial and clerical staffs. Before computers, secretaries took dictation in shorthand or transcribed Dictaphone recordings, then typed the result, typically a memo or letter. All filing was done with paper copies.
A new addition to the office staff was the information technologist, or in larger organizations an entire IT department. With networking came information overload in the form of e-mail, with some office workers receiving several hundred messages each day, much of it unnecessary for the recipient.
Some hold that one of the main productivity boosts from information technology is still to come: large-scale reductions in traditional offices as home offices become widespread. This, however, requires major changes in work culture and remains to be proven.
It is well known among software developers that projects typically run over budget and finish behind schedule.
Software development is typically undertaken for new, unique applications. The project's analyst is responsible for interviewing stakeholders, individually and in group meetings, to gather the requirements and organize them into a logical format for review by the stakeholders and developers. This sequence is repeated in successive iterations, with partially completed screens available for review in the later stages.
Stakeholders, however, often have only a vague idea of what the functionality should be and tend to request many unnecessary features, resulting in schedule delays and cost overruns.
By the late 1990s there were signs that workplace productivity had been improved by the introduction of IT, especially in the United States. Erik Brynjolfsson and his colleagues found a significant positive relationship between IT investments and productivity, at least when those investments were made to complement organizational changes. A large share of the productivity gains outside the IT-equipment industry itself came in retail, wholesale, and finance. A major advance was computerized stock-market transaction processing, which replaced a system that had been in place since the Civil War; the paperwork burden of the old system had forced the U.S. stock market to close most Wednesday afternoons during the last half of 1968 to catch up on processing.
Acemoglu, Autor, Dorn, Hanson & Price (2014) revisited the issue and found that "there is...little evidence of faster productivity growth in IT-intensive industries after the late 1990s. Second and more importantly, to the extent that there is more rapid growth of labor productivity...this is associated with declining output...and even more rapidly declining employment." In addition, up to half of the growth of U.S. healthcare spending has been attributed to technology costs, and computers and mobile phones are frequently cited as leading reducers of workplace productivity through distraction.