Why did the U.S. and Europe use different EQ curves — the divergence in turnover frequencies and its background

Last updated: April 8, 2026 · Reading time: approx. 5 min


Question answered on this page: Why did U.S. and British/European labels use different EQ curves? Where did the differences lie, and how did they eventually converge?



The key difference: turnover frequency

The most fundamental difference between U.S. and European EQ curves lay in the turnover frequency.

  • U.S. — From the early 1930s, the major labels moved toward a turnover of approximately 500 Hz
  • U.K./Europe — A turnover around 250 Hz was maintained comparatively long, into the 1950s

The turnover frequency is the boundary where the low-frequency recording switches from constant-amplitude to constant-velocity behavior. A higher turnover frequency results in a relatively lower cutting level in the bass, which reduces groove excursion and allows for longer playing time.
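The relationship between a turnover frequency and its equivalent time constant is a simple reciprocal, which is how the 500 Hz and 250 Hz figures map onto the microsecond values used later in this article. A minimal sketch (the function name is illustrative, not from any standard library):

```python
import math

def turnover_time_constant(f_turnover_hz: float) -> float:
    """Time constant (in seconds) equivalent to a turnover frequency."""
    return 1.0 / (2.0 * math.pi * f_turnover_hz)

# A 500 Hz turnover corresponds to ~318 us (the familiar RIAA mid value);
# 250 Hz corresponds to ~637 us.
print(round(turnover_time_constant(500) * 1e6))  # 318
print(round(turnover_time_constant(250) * 1e6))  # 637

# Below the turnover the cut is constant-amplitude, so raising the turnover
# from 250 Hz to 500 Hz lowers the deep-bass cutting level by about
# 20*log10(500/250) ~= 6 dB -- the reduced groove excursion described above.
print(round(20.0 * math.log10(500 / 250), 1))  # 6.0
```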


Why did they diverge?

Several factors contributed to this divergence.

On the U.S. side: the influence of the Bell Labs / Western Electric system

In the U.S., the electrical recording system developed by Bell Labs / Western Electric was supplied to Victor and Columbia beginning in 1925. By the early 1930s, the transition to a 500 Hz turnover was already underway, as evidenced by Victor test records from around 1931–1932.

In the field of broadcast transcription discs, recording characteristics with a 500 Hz turnover plus high-frequency pre-emphasis began to be adopted in the late 1930s. The 1942 NAB standard formally codified this trend.

On the European side: compatibility with mechanical gramophones

In Europe — particularly in the U.K. — mechanical gramophones (acoustic playback devices without electrical amplification) were still widely used across a vast market that included the colonies.

Raising the turnover frequency lowers the bass cutting level, which further weakens the bass when played on a mechanical gramophone. For this reason, European labels took a conservative approach, maintaining a turnover around 250 Hz for an extended period.

Differences in shellac composition and groove spacing conventions may also have contributed to the divergence between the two regions.


Differences in high-frequency pre-emphasis

Beyond the turnover, there were also differences in high-frequency pre-emphasis.

The U.S. NAB standard (1942) specified a strong pre-emphasis of 100 μs (+16 dB at 10 kHz), but such strong pre-emphasis was not common in the U.K.

A.E. Barrett of the BBC pointed out at a 1949 NAB committee meeting that the NAB curve's pre-emphasis was far more extreme than BBC equipment could accommodate. The BBC used a curve of approximately +10 dB at 10 kHz (corresponding to roughly 50 μs).
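The dB figures quoted above follow directly from the time constants: a single-zero pre-emphasis with time constant τ boosts a frequency f by 20·log₁₀√(1+(2πfτ)²). A short sketch confirming both numbers (the function is illustrative):

```python
import math

def preemphasis_boost_db(tau_s: float, f_hz: float) -> float:
    """Boost in dB at f_hz of a single-zero pre-emphasis with time constant tau_s."""
    w_tau = 2.0 * math.pi * f_hz * tau_s
    return 20.0 * math.log10(math.sqrt(1.0 + w_tau * w_tau))

print(round(preemphasis_boost_db(100e-6, 10_000), 1))  # 16.1 -> NAB's ~+16 dB
print(round(preemphasis_boost_db(50e-6, 10_000), 1))   # 10.4 -> BBC's ~+10 dB
```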


ffrr: Decca's innovation in the U.K.

A notable exception in the U.K. was Decca's ffrr (full frequency range recording).

The ffrr system, completed in 1944, was built around a motional-feedback cutter head developed by Arthur Haddy. It achieved a wider high-frequency response than was typical of recordings at the time, representing an important innovation in British recording technology.

The ffrr recording characteristics initially used a 500 Hz turnover only, and later a combination of 500 Hz turnover + 50 μs high-frequency pre-emphasis — a different combination from the dominant U.S. curves. British Decca records distributed in the U.S. under the London label were recorded with these ffrr characteristics, meaning U.S. listeners needed a dedicated playback curve setting.


Convergence: unification of standards

The differences between U.S. and European curves were resolved in stages.

In the U.S., when stereo LPs were introduced in 1958, cutting equipment was designed with the RIAA curve as the standard, leaving no room for any other curve in stereo LP production.

In the U.K., BS 1928:1955 adopted characteristics identical to RIAA (time constants of 3180/318/75 μs).

On the European continent, CCIR Interim Recommendation No. 208 (1956) defined a curve with turnover and bass shelf identical to RIAA, but with a high-frequency pre-emphasis of 50 μs (compared to RIAA's 75 μs). Germany codified this as DIN 45536/45537 (1959). This difference in the treble was resolved in 1962 when CCIR/DIN changed to 75 μs, becoming fully identical to RIAA.
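The practical effect of the 50 μs vs. 75 μs difference can be seen by evaluating the recording characteristic defined by the three time constants. A minimal sketch, modeling the recording curve in the usual way as two zeros (bass shelf and treble time constants) and one pole (turnover); the function name and normalization at 1 kHz are illustrative choices:

```python
import math

def rec_gain_db(f_hz, tau_low_s, tau_mid_s, tau_high_s, ref_hz=1000.0):
    """Recording-characteristic gain in dB relative to ref_hz for a curve
    with zeros at tau_low/tau_high and a pole at tau_mid
    (e.g. RIAA: 3180/318/75 us)."""
    def mag(f):
        w = 2.0 * math.pi * f
        num = math.hypot(1.0, w * tau_low_s) * math.hypot(1.0, w * tau_high_s)
        den = math.hypot(1.0, w * tau_mid_s)
        return num / den
    return 20.0 * math.log10(mag(f_hz) / mag(ref_hz))

# Same turnover and bass shelf; only the treble time constant differs.
print(round(rec_gain_db(10_000, 3180e-6, 318e-6, 75e-6), 1))  # 13.7 (RIAA)
print(round(rec_gain_db(10_000, 3180e-6, 318e-6, 50e-6), 1))  # 10.9 (CCIR/DIN 1956)
```

So at 10 kHz the 1956 CCIR/DIN curve cut roughly 3 dB less treble boost than RIAA, which is the residual difference the 1962 change eliminated.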

Meanwhile, recording equipment used in Europe from the mid-1950s onward (such as the Neumann LV60) included a switch between RIAA and CCIR/DIN settings. The U.S.-made Westrex RA-1514 recording equalizer also had a variant with CCIR/DIN support, presumably intended for export to Europe. These were single-channel units designed for mono, but they may have been used in pairs for stereo operation.

It is therefore not possible to rule out the existence of European stereo LPs with CCIR/DIN characteristics. At a minimum, stereo LPs conforming to DIN 45537 (500 Hz turnover, 50 μs high-frequency pre-emphasis, 3,180 μs bass shelf) may have existed until DIN 45547 — equivalent to RIAA — was adopted in November 1962. However, since this series focuses on the history of EQ curves in the U.S., this topic is not explored further here.


Scope of this series

This series (Pt.0–Pt.25) focuses primarily on the history of EQ curves in North America. British and European developments are discussed where they intersect with North American history, but are not covered comprehensively.

For those seeking a more detailed account of European recording technology history, the work of Peter Copeland (former BBC engineer and conservation manager at the British Library Sound Archive) is a highly regarded resource.


Related questions

  • What EQ curves existed before RIAA?
  • When and why was the RIAA standard established?

For details → Pt.9, Pt.10, Pt.25



Revision History

  • April 8, 2026: Initial publication