Alloying

Alloying elements are added to steels in order to improve specific properties such as strength, wear resistance, and corrosion resistance. Although theories of alloying have been developed, most commercial alloy steels have been developed by an experimental approach, with occasional inspired guesses. The first experimental study of alloy additions to steel was made in 1820 by the Britons James Stodart and Michael Faraday, who added gold and silver to steel in an attempt to improve its corrosion resistance. The mixtures were not commercially feasible, but they initiated the idea of adding chromium to steel (see below Stainless steels).

Hardening and strengthening

The first commercial alloy steel is usually attributed to the Briton Robert F. Mushet, who in 1868 discovered that adding tungsten to steel greatly increased its hardness even after air cooling. This material formed the basis of the subsequent development of tool steels for the machining of metals.

About 1856 Mushet also discovered that the addition of manganese to Bessemer steel enabled the casting of ingots free of blowholes. He was aware as well that manganese alleviated the brittleness induced by the presence of sulfur, but it was Robert Hadfield who in 1882 developed a steel containing 12 to 14 percent manganese and 1 percent carbon that had greatly improved wear resistance and was used for jaw crushers and railway crossover points.

The real driving force for alloy steel development was armaments. About 1889 a steel was produced with 0.3 percent carbon and 4 percent nickel; shortly thereafter it was further improved by the addition of chromium and became widely used for armour plate on battleships. In 1918 it was found that this steel could be made less brittle by the addition of molybdenum.

The general understanding of why or how alloying elements influenced the depth of hardening—the hardenability—came out of research conducted chiefly in the United States during the 1930s. An understanding of why properties changed on tempering came about in the period 1955–1965, following the use of the transmission electron microscope.

Microalloyed steels

An important development immediately after World War II was the improvement of steel compositions for plates and sections that could readily be welded. The driving force for this work was the failure of plates on the Liberty ships, which had been mass-produced during the war by welding, a faster fabricating process than riveting. The improvements were effected by increasing the manganese content to 1.5 percent and keeping the carbon content below 0.25 percent.

A group of steels given the generic title high-strength low-alloy (HSLA) steels had a similar aim: improving the general properties of mild steels with small additions of alloying elements that would not greatly increase their cost. By 1962 the term microalloyed steel had been introduced for mild-steel compositions to which 0.01 to 0.05 percent niobium had been added. Similar steels containing vanadium were also produced.

The period 1960–80 was one of considerable development of microalloyed steels. By combining these alloy additions with close control of temperature during rolling, producers raised yield strengths to almost twice those of conventional mild steel.

Stainless steels

It is not surprising that attempts should be made to improve the corrosion resistance of steel by the addition of alloying elements, but it is surprising that a commercially successful material was not produced until 1914. This was a composition of 0.4 percent carbon and 13 percent chromium, developed by Harry Brearley in Sheffield for producing cutlery.

Chromium was first identified as a chemical element about 1798 and was extracted as an iron-chromium-carbon alloy. This was the material used initially by Stodart and Faraday in 1820 in their experiments on alloying. The same material was used by John Woods and John Clark in 1872 to make an alloy containing 30 to 35 percent chromium; although it was noted as having improved corrosion resistance, the steel was never exploited. Success became possible when Hans Goldschmidt, working in Germany, discovered in 1895 how to make low-carbon ferrochromium.

The link between the carbon content of chromium steels and their corrosion resistance was established in Germany by Philip Monnartz in 1911. During the interwar period, it became clearly established that there had to be at least 8 percent chromium dissolved in the iron matrix (and not bound up with carbon in the form of carbides), so that on exposure to air a protective film of chromic oxide would form on the steel surface. In Brearley’s steel, 3.5 percent of the chromium was tied up with the carbon, leaving roughly 9.5 percent dissolved in the matrix, still sufficient to confer corrosion resistance.
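
This chromium balance can be illustrated with a simple calculation. The short Python sketch below is a hypothetical illustration, not part of the historical account: the function names are invented here, and the figures used (13 percent total chromium, 3.5 percent bound in carbides, an 8 percent threshold) are simply the values quoted in this section.

    # Hypothetical illustration: estimate the "free" chromium available for passivation.
    # The 8 percent threshold and Brearley's figures are taken from the text above.

    def free_chromium(total_cr_pct: float, cr_in_carbides_pct: float) -> float:
        """Chromium (weight percent) left dissolved in the iron matrix."""
        return total_cr_pct - cr_in_carbides_pct

    def forms_passive_film(total_cr_pct: float, cr_in_carbides_pct: float,
                           threshold_pct: float = 8.0) -> bool:
        """True if enough dissolved chromium remains to form a protective oxide film."""
        return free_chromium(total_cr_pct, cr_in_carbides_pct) >= threshold_pct

    # Brearley's cutlery steel: 13 percent chromium, 3.5 percent tied up as carbides.
    print(free_chromium(13.0, 3.5))       # 9.5 percent remains in solution
    print(forms_passive_film(13.0, 3.5))  # True: above the 8 percent threshold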

The addition of nickel to stainless steel was patented in Germany in 1912, but the materials were not exploited until 1925, when a steel containing 18 percent chromium, 8 percent nickel, and 0.2 percent carbon came into use. This material was adopted by the chemical industry from 1929 onward and became known as the 18/8 austenitic grade.

By the late 1930s there was a growing awareness that the austenitic stainless steels were useful for service at elevated temperatures, and modified compositions were used for the early jet aircraft engines produced during World War II. The basic compositions from that period are still in use for high-temperature service. Duplex stainless steel was developed during the 1950s to meet the needs of the chemical industry for high strength linked to corrosion resistance and wear resistance. These alloys have a microstructure consisting of about half ferrite and half austenite and a composition of 25 percent chromium, 5 percent nickel, 3 percent copper, and 2 percent molybdenum.