Why the RIAA curve became the industry standard — technology, politics, and economics
Why did RIAA become the standard?
Question answered on this page: Among the many EQ curves that existed, why did the RIAA curve become established as the sole industry standard? Was it because it was technically the best?
In short
The RIAA curve became the standard not because it was "the best curve," but because three conditions — technology, politics (industry consensus-building), and economics (hardware lock-in) — converged simultaneously in the 1950s.
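As context for the discussion that follows, the curve itself is fully specified by three published time constants (3180 µs, 318 µs, 75 µs). A minimal sketch of the ideal playback (de-emphasis) response, normalized to 0 dB at 1 kHz, not taken from this article:

```python
import math

# RIAA playback (de-emphasis) time constants from the published standard:
# 3180 us, 318 us, 75 us. Illustrative sketch only.
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_playback_db(f_hz, ref_hz=1000.0):
    """Gain in dB of the ideal RIAA playback curve, normalized to 0 dB at ref_hz."""
    def mag(f):
        # |H(j*2*pi*f)| for H(s) = (1 + s*T2) / ((1 + s*T1) * (1 + s*T3))
        w = 2 * math.pi * f
        return math.sqrt(1 + (w * T2) ** 2) / (
            math.sqrt(1 + (w * T1) ** 2) * math.sqrt(1 + (w * T3) ** 2)
        )
    return 20 * math.log10(mag(f_hz) / mag(ref_hz))

for f in (20, 1000, 20000):
    print(f"{f:>6} Hz: {riaa_playback_db(f):+6.2f} dB")
```

Running this reproduces the familiar figures of roughly +19.3 dB of bass boost at 20 Hz and about −19.6 dB of treble cut at 20 kHz on playback, mirroring the inverse emphasis applied at cutting time.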
Factor 1: technology — the arrival of the feedback cutter head
The widespread adoption of the feedback cutter head (Westrex 2A/2B, later 3D) in the early 1950s was the decisive change that made standardization technically feasible.
Earlier cutter heads had significant frequency response irregularities of their own. Recording engineers achieved the target recording curve by combining the cutter head's characteristics with the recording equalizer. In other words, different cutter heads required different equalizer settings, and variations in on-site practice meant the resulting recording characteristics were likely to vary as well.
The feedback cutter head used negative feedback to achieve a nearly flat frequency response in the head itself. This meant that the recording curve could be controlled by the recording equalizer alone, making standardization independent of the cutter head realistic for the first time.
→ For details on the feedback cutter, see blog post Pt.18, section 18.1
Factor 2: politics — cross-industry consensus-building
The RIAA curve was not the unilateral decision of a single company, but the result of a democratic process involving cooperation among multiple industry organizations.
NAB/NARTB (the broadcasters' trade association) had been working on standardization since 1942, accumulating experience and credibility in the field. AES (the Audio Engineering Society) proposed a standard playback curve in 1951 as a compromise among the various companies' curves, providing a forum for cross-industry discussion. And RIAA (the record manufacturers' trade association) organized a technical committee that brought together the chief engineers of the five major labels: Columbia, RCA Victor, Decca, Capitol, and Mercury.
The memberships of these organizations overlapped, and consensus-building proceeded across organizational boundaries.
Another factor not to be overlooked is the experience of World War II. During the war, American audio engineers had built cooperative relationships under military auspices (such as the so-called "Sapphire Group"), and this trust became the foundation for consensus-building in postwar standardization.
→ The Sapphire Group and its relationship to AES: blog post Pt.16
→ The RIAA technical committee and the standardization process: blog post Pt.18
Factor 3: economics — hardware lock-in
Technical feasibility and industry consensus alone, however, would not have made RIAA an irreversible standard. The decisive factor in RIAA becoming the sole de facto standard was that hardware came to be designed with RIAA as its baseline assumption.
Recording side: stereo cutting equipment. When the stereo LP was introduced in 1958, the stereo recording systems used for cutting (the cutter head and recording amplifier combination) were designed with RIAA recording characteristics as the standard. Applying non-RIAA recording characteristics on this equipment, while not technically impossible, was impractical in production.
Playback side: the shift in consumer amplifiers. Monaural amplifiers of the late 1950s included multiple EQ positions (Columbia LP, NAB, AES, 78 rpm, etc.) to accommodate older records pressed before 1954, including 78 rpm discs. With the onset of the stereo era, however, amplifiers offering non-RIAA positions declined rapidly: surveys of product catalogs and circuit diagrams confirm that by the mid-1960s, 80–90% of consumer amplifiers had only the RIAA position.
A telling detail underscores this shift: some early stereo amplifiers applied non-RIAA EQ to the left channel only. In other words, the manufacturers judged that there was no need to apply non-RIAA curves to stereo records.
→ RIAA-based design of recording equipment: blog post Pt.19
→ The evolution of phono EQ positions in consumer amplifiers: blog post Pt.22
Summary: a chain of consensus and lock-in
The process by which the RIAA curve became established as the standard can be summarized as follows:
- The spread of the feedback cutter head made standardization independent of the cutter head technically feasible
- Building on wartime cooperation, the three organizations NAB, AES, and RIAA coordinated to form a cross-industry consensus
- Stereo-era cutting equipment and consumer amplifiers were designed with RIAA as the standard, making the choice irreversible through hardware
RIAA did not win as "the best curve." The more accurate understanding is that "the curve everyone could agree on" became the only option through a generational turnover in hardware.
Related pages
- → When was the RIAA curve established? — timeline and background
- → Are all U.S. stereo LPs on the RIAA curve? — RIAA's dominance in the stereo era, examined from another angle
- → What curves existed before RIAA? — the era of diverse curves
- → In a Nutshell: from the postwar period to the birth of the RIAA curve — a quick overview of the whole process
Revision History
- April 8, 2026: Initial publication