Yeast & Fermentation

This post endeavors to briefly illuminate a particularly minuscule organism that since the dawn of mankind has exerted considerable influence over the human condition.  Found in dirt, air and water, some yeasts also reside naturally inside vegetation, animals and humans.  All fungi are parasitic or saprophytic and cannot manufacture their own food.  Since yeasts are fungi, and all fungi are heterotrophs that live on preformed organic matter, some yeasts have been using mankind for far longer than he has been using them.  To state that mankind has domesticated yeast for thousands of years is probably erroneous.  Whether he knew it or not, however, mankind has been exploiting these individually invisible microorganisms for his own benefit for perhaps ten millennia or more.  The historic relationship between brewing and baking is more intertwined than most readers may appreciate.  Today yeasts are also used to produce food additives, vitamins, pharmaceuticals, biofuels, lubricants and detergents.  The more one learns, the more one’s appreciation grows for these seemingly simple little life forms.  It doesn’t take a degree in organic chemistry or molecular biology to put these little critters to productive work.

Yeasts are more evolutionarily advanced than prokaryotic organisms like bacteria, whose cells lack a nucleus (viruses, simpler still, are not cells at all).  Higher life forms like onions, grasshoppers, humans and yeasts are eukaryotes, which means their cells store genetic information within a nucleus.  Simpler than human cells and easier to work with, bread yeast (Saccharomyces cerevisiae) was the first eukaryotic organism to have its genome fully sequenced.  A genome is the hereditary information stored in an organism – the entire DNA/RNA sequence for each chromosome.

The S. cerevisiae yeast genome possesses something like 12 million base pairs and 6,000 genes, compared to the more complex human genome with 3 billion base pairs and 20,000–25,000 protein-coding genes.  Although sequencing has become easier in recent times, 18 years ago the thorough examination of the Saccharomyces cerevisiae (beer yeast) genome was no simple task.  That project examined millions of chromosomal DNA base pairs, involved the efforts of over 100 laboratories and was finally completed in 1996 after seven years of hard work.

* The 6th eukaryotic genome sequenced was also a yeast (Schizosaccharomyces pombe – in 2002) and it contained 13.8 million base pairs. 

This first completed genome sequencing is worth mentioning because it caused an upheaval in the accepted classification of yeast species.  There are probably a great number of yet-undiscovered yeast species in the wild, but presently only a small percentage (between 600 and 1,500 species, depending upon your source of information) are cataloged.  Although Saccharomyces cerevisiae is one of the more important fungi in the history of the world, its classification is still very much in flux.  You may read about the many types of bread yeast, the hundreds of “varieties” of beer yeast or the hundreds of “strains” of wine yeast – but for the most part these share the same DNA and therefore must be considered the same species.  With beer, and especially with wine, the choice of yeast (strain, variety, and species where applicable) can profoundly influence the beverage’s flavor profile.

Bad fungus

“Almost all yeasts are potential pathogens,” but none of the Saccharomyces species or their close relations have been associated with pathogenicity toward humans.  “Candida and Aspergillus species are the most common causes of invasive fungal infection in debilitated individuals,” with six species (Candida albicans, glabrata, krusei, parapsilosis and tropicalis, plus Cryptococcus neoformans) accounting for about 90% of those infections.

Other multi-cellular (non-yeast) fungi affect humanity in various ways.  Trichophyton rubrum and Epidermophyton floccosum bring us athlete’s foot, ringworm, jock itch and nail infections.  Members of the genus Penicillium (with over 300 species) bring us a life-saving antibiotic which kills certain types of bacteria in the body.  Claviceps purpurea, the “rye ergot fungus” – when not immediately lethal or debilitating – brought us mind-altering alkaloids chemically related to LSD.  One of the more important negative influences fungi exercise upon us is their capacity to destroy food crops.

Domestication?

A defining characteristic of domestication is artificial selection by humans.  Domestication means altering the behaviors, size and genetics of animals and plants.  These things were not done to yeast in antiquity.  Isolation of certain beneficial yeast strains only began some 200 years ago, in breweries.  Only recently (by 1938) was a scientist able to cross two separate strains of yeast and come up with a new one.  Although by the 1970s scientists were beginning to mutate and hybridize yeast, it may be with the more recent attempts to engineer yeast to convert xylose (a wood sugar) into cellulosic ethanol that some yeast species can confidently be described as domesticated.  Even then, “engineering” is a strong word.  Yeast mutate all the time without human help.  Scientists didn’t create a new fungus but started with examples that already decomposed dead trees or other cellulose-containing plant material.  By steering the selection process toward yeasts with numerous cellulase enzymes, scientists hope to produce economical automotive fuel from sawdust and other normally wasted biomass.  The quest for an ideal yeast-and-bacteria biomass-consuming combination is still ongoing.  This particular process is artificial selection, not gene modification.

Right now, this very moment, anyone can capture wild yeast from vegetable matter or from the very air to make bread or to ferment beer or wine.  In antiquity the women who cooked, and later the bakers, brewers and tavern keepers, likely kept a portion of a previous dough or barm yeast culture as a ‘starter’ simply to hasten the development of the next batch.  While this practice might support claims of artificial yeast selection throughout history, one might also be reminded that sanitation in those bygone days was questionable and that exposure to wild yeast and bacteria was probably persistent.  It has always been easy to just whip up a new yeast culture from scratch, as will be explained shortly and as revealed in several recipes from a 120-year-old cookbook.


Bread, Beer & Wine

The discovery or invention of wine, beer and bread was unavoidable, and early man deserves no special intellectual credit for the achievement because omnipresent yeasts and bacteria did all the work.  Consider the cavewoman who picked a bountiful harvest of wild grapes and then carted it back home in animal skins or clay-lined baskets to be consumed later.  In a few days’ time wild yeast and bacteria would begin breaking down the fructose and glucose in the juice released from crushed grapes at the bottom of any impermeable container.  The oldest available archeological evidence of a fermented beverage comes from 9,000-year-old mead (honey wine) tailings found in northern China.  Here someone had probably, unknowingly, enabled yeast to go to work by adding water to get all the sticky honey out of a container.  Likewise the inescapable discovery of bread and beer is no mystery.  Raw fresh grain is a soft and easily chewable foodstuff.  Dried grain is next to impossible to chew, so ancient man was soon mashing it between two rocks to make the powder called flour.  Dry flour is not very tasty, so the next obvious experiment would be to add water, and later perhaps to cook the gruel over a fire – eventually inventing bread.  The first breads were probably flat breads, since the proper leavening of bread requires several hours of rest while fermentation creates carbon dioxide bubbles which get trapped in gluten to make bread rise.  Had someone boiled a wet soup from the flour instead and then abandoned it because it wasn’t very good, it would have turned into a beer in a few days.  Perhaps the first beer or ale resulted simply from someone’s bread falling into a pot of water.  Regardless, our encounter with fermentation and the invention of both bread and alcoholic beverages was inevitable.

Briefly, Saccharomyces cerevisiae (literally “sugar fungus”) is typical of many yeasts but is a particularly successful species because it can live in many different environments.  Few of the other 64,000 or so members of the Ascomycota fungal phylum can reproduce both sexually and asexually while also being able to break down their food through both aerobic respiration and anaerobic fermentation – all at the same time.

budding yeast

Under favorable conditions most, but not all, yeasts reproduce asexually by budding, where a daughter cell pinches off from the parent.  On average a particular yeast cell can divide between 12 and 15 times.  In a well-controlled ferment, aerobic (with oxygen) respiration allows “sugar fungus” yeast cells to reproduce, or double, about every 90 minutes.  During respiration carbohydrates donate electrons, yielding cell growth, CO2 and water (H2O).  During anaerobic fermentation carbohydrates are partially oxidized and ethanol and CO2 are produced.  One yeast cell can ferment approximately its own weight in glucose per hour.  Favorable ferment conditions in this context imply moisture, mineral nutrition, a neutral or slightly acidic pH environment and a narrow temperature range of 50°F to 99°F.  Most yeast cells are killed at temperatures above 122°F.
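To get a feel for what a 90-minute doubling time means, here is a quick back-of-the-envelope sketch.  It assumes an idealized constant rate with unlimited food and no crowding, which a real ferment never sustains for long:

```python
# Sketch of the ~90-minute doubling rate described above.  Idealized:
# constant rate, unlimited food, no crowding.
DOUBLING_MINUTES = 90

def cells_after(hours, starting_cells=1):
    """Cell count after a given number of hours of ideal aerobic growth."""
    doublings = int(hours * 60 // DOUBLING_MINUTES)
    return starting_cells * 2 ** doublings

# From a single cell, half a day of ideal growth:
print(cells_after(12))    # 8 doublings -> 256 cells
```

Even with the generous assumptions, this shows why a small starter can leaven a large batch of dough overnight.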

* (No yeast yet known is completely anaerobic, nor is fermentation necessarily restricted to an anaerobic environment.)

Under harsh or unfavorable conditions, yeasts like S. cerevisiae can become dormant and reproduce sexually by producing spores.  Spores can survive for hundreds of years, perhaps indefinitely, and like many other minute particles can remain airborne for years before coming back into contact with the surface of the earth.  Anyone questioning this assertion should have a look at Lyall Watson’s book, “Heaven’s Breath: A Natural History of the Wind”.


A typical yeast cell measures about 3–4 µm (microns, or millionths of a meter) in diameter.  Dry packaged yeast as imaged above can survive a long time when refrigerated.  The 3 large baker’s yeast packages pictured at the bottom are labeled as containing 21 grams of yeast each.  The 3 brewer’s yeast packages on top are labeled 5 grams.  Compressed yeast, which contains fewer yeast cells per gram because less water has been removed, is estimated to hold between 20 and 30 billion living organisms per gram.  The physical volume of that gram would be about the size of a pencil eraser.
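That “20 to 30 billion per gram” figure can be sanity-checked from the cell size alone.  A rough sketch, treating the gram as tightly packed spherical cells about 4 µm across with a density slightly above water (both idealized assumptions):

```python
import math

# Order-of-magnitude check of the 20-30 billion cells per gram figure.
# Assumptions: spherical cell ~4 um across, density ~1.1 g/cm^3.
diameter_um = 4.0
radius_cm = (diameter_um / 2) * 1e-4            # 1 um = 1e-4 cm
cell_volume_cm3 = (4 / 3) * math.pi * radius_cm ** 3
cell_mass_g = cell_volume_cm3 * 1.1             # grams per cell
cells_per_gram = 1 / cell_mass_g

print(f"{cells_per_gram:.1e}")                  # on the order of 2.7e10
```

The estimate lands at roughly 27 billion cells per gram, comfortably inside the quoted range.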


In general, bacteria are to be avoided during normal food and beverage production, but as usual there are exceptions.  Many of the approximately 125 species of Lactobacillus bacteria are closely associated with food spoilage.  Without the assistance of beneficial bacteria (several of which are Lactobacillus members), however, we would have no vinegar, chocolate, cider, cheese, kimchi, pickles, sauerkraut, sourdough bread or yogurt.  Bacteria can drive fermentation by themselves.  More often, certain beneficial bacteria assist yeasts in the fermentation of breads, beers or wines, and are sometimes deliberately used to do so.


In baking or brewing it is the enzymes that yeasts or bacteria possess or produce which catalyze chemical reactions and drive fermentation.  A mixture of enzymes might be needed to successfully break down complex, longer-chained carbohydrates before either bread leavening or ethanol production is achieved.  In alcoholic fermented beverages, enzymes might be acquired from sources beyond yeast and bacteria, such as from human saliva: for a thousand years descendants of the Incas have chewed maize and spit into communal vats to produce the beverage called “Chicha”.  The rice wine sake is made with the help of enzymes from a mold (not a yeast) named Aspergillus oryzae.  The enzymes used to create the Mongolian fermented mare’s milk known as “Ayrag” or “Kumis” came from the lining of a bag sewn from an animal’s stomach.  There are far too many types of enzymes to list here, but the names of some important ones end in the suffix “-ase” (as in lactase, saccharase, maltase, alpha-amylase or diastase, zymase or invertase, and alpha-galactosidase).

Sugar or starch

To briefly outline and oversimplify a topic that deserves more attention: there are many names for, and many types of, starches and sugars, and of the enzymes needed to break them down.  There are simple sugars, complex sugars and very complex sugars, or conversely one could say there are monosaccharides, disaccharides, oligosaccharides and polysaccharides.  Glucose (or dextrose), fructose (or levulose), galactose and ribose are monosaccharides, examples of the simplest sugar molecules.  A disaccharide combines two monosaccharides – as in sucrose, lactose or maltose.  Table sugar is almost pure sucrose.  An enzyme like invertase (also called saccharase, sucrase or a dozen other names) is needed to split sucrose into two simple sugar molecules (glucose and fructose) before a different enzyme complex, zymase, can ferment those into ethanol and CO2.  Oligosaccharides generally contain anywhere between 3 and 9 monosaccharides.  Polysaccharides are even longer, linear or branched polymeric carbohydrates and may sometimes contain thousands of monosaccharides.  Starch and cellulose are examples of polysaccharides.

Sugarcane was originally indigenous to Southeast Asia and was slowly spread by man to surrounding regions.  In ancient times sugar was exported and traded like a valuable spice or medicine – not as a food commodity.  There was some spread of sugarcane cultivation in the medieval Muslim world, but otherwise cultivation did not blossom until the 16th century when colonials reaped their first sugar harvests in the New World (Brazil and the West Indies or Caribbean Basin).  Sugar from sugar beets was not realized until a German chemist noticed that beet roots contain sucrose.  The first refined beet sugar commodity appeared around 1802.



“Leaven” is the ancient equivalent term for yeast, and it caused bread to rise.  Leaven is mentioned in the Bible when Moses led the Israelites out of Egypt, where they all left in such a hurry that they did not wait for their bread to rise.  Flat, unleavened, unremarkable bread is served during Passover, observed not as a feast or celebration so much as a remembrance of deliverance, simplicity, haste and powerlessness.  “Yeast” is a younger word with roots in Indo-European and Old English words meaning surface froth, bubble, foam and boil.  In times past, and probably for many centuries, housewives and cooks usually made both bread and beer on a frequent basis from a leaven-yeast starter that they maintained in the kitchen.  In both Medieval Europe and colonial North America many households also kept a constant supply of “small beer” on hand for servants and children or for general consumption.  Small beer had low alcohol content but some taste, and since the wort was boiled during brewing it was usually much safer to drink than the local water.  Two centuries ago some children drank small beer with breakfast just like today’s children might drink orange juice.

Almost all bread before the 1840s was probably a form of sourdough bread.  Without the help of either bacteria or refined sucrose, S. cerevisiae yeast alone cannot properly break down the starches (polysaccharides) in flour, work its fermentation or cause bread to rise.  In the early 1800s, for the first time, bakers collectively began making sweet breads (as opposed to sour) by using bottled yeast skimmed off and collected from ale vats.  This renaissance in baking quickly spread outwards from Vienna, Austria.  In general, bakers started buying top-fermenting beer yeast from brewers.  Initially the yeasts were collected by skimming barm or krausen off the top of a beer vat and putting it into bottles.  In about this same time frame another revolution was occurring in the beer world.  German brewers were learning to make lagers, which employed different (bottom-fermenting) yeast and much cooler and longer fermentation periods.  At the time lagers were a taste sensation and considered a great improvement over the heavier ales.  With many brewers ‘changing horses in midstream’ to use different yeast and processes in order to jump on the lager bandwagon, bakers in Vienna and elsewhere were left without convenient sources of sweet yeast.  To fill that void ‘press yeast’ was developed.  The forerunner of modern baker’s yeast, press yeast was first skimmed from the top of a dedicated grain mash, then washed and drained carefully before being squeezed in a hydraulic press.  Modern baker’s yeast has pretty much been selected for optimum carbon dioxide production, though such yeast would still make a good ale.  Bread dough produces alcohol while fermenting, but that escapes when it is baked.

* The grains corn and rice have no gluten.  To make breads with these grains rise, flour with gluten must be added. 

* “Quick breads” like biscuits, pancakes, bannock, scones, sopapillas and cornbread are made with “self-rising flour”, or regular flour with the help of baking powder (self-rising flour merely contains its own baking powder).  Baking powder is a mixture of baking soda, acid salts and starch (which helps keep the other two ingredients inactive).  Baking powder is basically a little bomb – a little chemical reaction for making gas bubbles, waiting only to be triggered by the addition of liquid.


Sourdough bread

Sourdough is a vague term, and there are many ways to create a sourdough starter.  While the name implies a sour taste due to the contribution of bacteria and/or wild yeast, some sourdoughs taste little different than normal commercial sweet bread.  Some sourdough starter recipes actually call for baker’s yeast to be used, while others might begin with pineapple juice, potatoes or even yeast captured in an opened can of beer left on the kitchen countertop for about a week.  A characteristic practice of sourdough bread making is that a portion of the ‘sponge’ is retained after each dough batch and stored in a cool place to be used as the next starter.  ‘Sour mash’ whiskey has the same connotation – part of the original yeast and enzyme culture is retained and used in the next batch, maintaining consistency of product.  In brewing, “re-pitching” the yeast is similar to using a sourdough starter; a portion of the live yeast from the bottom or top of a wine must or grain mash is saved to be used again.

In the 1840s, as the first Bavarian lager technology was reaching America, gold miners were about to congregate in the California Gold Rush.  San Francisco is a modern bastion of sourdough bread patronage, with some restaurants or bakeries claiming to have maintained the same starters since the Gold Rush days.  One species of lactic acid bacteria found in some sourdough is actually named after the city: Lactobacillus sanfranciscensis.  These starters might also include species of yeast (like Saccharomyces exiguus or Candida milleri) that can help leaven bread without relying on simple sucrose.

Homemade yeast

While fresh compressed yeast was becoming common in the urban food markets of Europe and America by the 1870s, many individuals (especially those in remoter areas) simply made their own yeast.  The “White House Cook Book” (©1887) was an authoritative publication used by ambitious housewives across the country.  The book gives several recipes for starting a yeast culture, including the use of milk or salt, and even for drying the yeast into cakes for later use.  One of the book’s recipes is simply titled “Unrivaled Yeast” and resembles the following (the actual recipe is on p. 242):

- boil 2 oz. of hops in 4 qts. of water for 30 minutes, strain and let cool

- mix this water in a large bowl with 1 qt. flour, ½ cup salt and ½ cup brown sugar – let stand for 3 days

- mix this with 6 boiled and mashed potatoes – let stand for another day, stirring frequently

- ready to use, or store in bottles for future use (good if kept cool for about 2 months)

Obviously the yeasts native to the potatoes were killed by boiling, so yeasts from the atmosphere, and perhaps from the flour as well, were the ones captured.  Sanitation and sterilization of utensils was, and still is, important to limit the procreation of undesirable bacteria.  Hops (flowers of the Humulus lupulus plant) are frequently mentioned in these older recipes because hops, which were also used as herbal medicine, act as an antiseptic/antibiotic preservative by inhibiting bacterial growth but not beneficial yeast growth.

* The Reinheitsgebot or Bavarian Purity Law (adopted in Munich in 1487 and extended to all of Bavaria in 1516) specified the use of only water, barley and hops for the brewing of beer.  The contribution of yeast was not yet appreciated, but the antibacterial benefits and virtuous bitter flavor components of hops were.  Evidence suggests that hops were being used in Bavarian beer as early as 736, at an abbey outside Munich.  The Reinheitsgebot also had the effect of discouraging competing imported Belgian beers, which preferred to use gruit, and of preserving the wheat harvest for those needing to bake bread for food.

There are many, many other interesting facts to discuss about yeast, enzymes and bacteria in regard to fermentation, but this post has to come to an end somewhere.  No more time will be taken to examine yeast-killing sulfites in wine, the alcohol tolerance of different yeasts, turbo yeast, or how Champagne is created by secondary fermentation.  Somehow it seems that yeasts have used us just as much as we have used them.  We have changed their nature little – if at all.  Of the small percentage of yeast species we have identified, we are on the verge of understanding the true nature of just a few.

Water Turbines


The Egyptians were using mechanical energy to lift water with a wheel in the 3rd century BC.  A few centuries later, in the 1st century AD, Greek, Roman and Chinese civilizations were using waterwheels to convert the power of flowing water into useful mechanical energy.  The word “turbine” was coined from a Latin word for “whirling” or “vortex”.  The main difference between a water wheel and a water turbine is usually the swirl component of the water as it passes energy to a spinning rotor.  Although the Romans might have been using a simple form of turbine in the 3rd century AD, the first proper industrial turbines began to appear about 200 years ago.  Turbines can be smaller in diameter for the same power produced, spin faster, and can handle greater heads (water pressure) than waterwheels.  Windmills and wind turbines are generally differentiated by the reasoning that windmills turn wind power into mechanical energy whereas wind turbines convert wind power into electricity.  This post attempts to show individuals with an exploitable water source that modest advancements in ‘micro’ hydro technology have made it feasible to create useful power from low water heads or from very modest water sources.


Above, the horizontal undershot waterwheel requires the least engineering and landscaping labor to install; the width of the runner can be tailored to match the flow rate and only a small water ‘head’ is required.  The ‘breastshot’, ‘overshot’ and ‘backshot’ styles of waterwheel are progressively more efficient.

Water head can be thought of as the weight of water in a static column.  Since water is practically incompressible, the weight of water in a pipe is directly related to the pressure at its bottom (measured in psi, or pounds per square inch).  As a stream drops in elevation, its head is a measurement of that drop.  Water weighs 62.427 lbs per cubic foot, and there are 144 square inches in a square foot.  A cube of water 12” high, 12” wide and 12” deep would therefore exert (62.427 / 144) or 0.433 lbs per square inch on its base.  Any column of water 1 ft. high, regardless of width, still has a water head of 1 ft. and a pressure of 0.433 psi.  Water drop is simply multiplied by the constant 0.433 to determine the potential psi.
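That rule of thumb is easy to sketch in code.  A minimal example, using the 62.427 lb/ft³ figure for cold fresh water given above:

```python
# Head-to-pressure conversion for a static water column.
WATER_LB_PER_FT3 = 62.427      # weight of cold fresh water
SQ_IN_PER_SQ_FT = 144          # 12 in x 12 in

PSI_PER_FT_OF_HEAD = WATER_LB_PER_FT3 / SQ_IN_PER_SQ_FT   # ~0.433

def head_to_psi(head_ft):
    """Static pressure (psi) at the bottom of a column head_ft tall."""
    return head_ft * PSI_PER_FT_OF_HEAD

print(round(head_to_psi(1), 2))     # ~0.43 psi per foot of drop
print(round(head_to_psi(100), 1))   # a 100 ft drop yields ~43.4 psi
```

Run in reverse, the same constant gives the familiar figure of about 2.31 feet of drop per psi gained.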


Boyden turbine

A Frenchman named Fourneyron invented the first industrial turbine in 1827.  The idea was brought to America and improved upon in the form of the Kilburn turbine in 1842.  By 1844 a conical draft tube addition resulted in the Boyden turbine.  There were dozens of Boyden turbines in operation in the northeastern United States by the time radical abolitionist John Brown raided Harper’s Ferry in 1859.  Located at the confluence of the Shenandoah and Potomac rivers, Harper’s Ferry was a national armory and a beehive of activity where gunsmiths made small arms.  In 1859 at least 2 Kilburn and 5 Boyden turbines were driving the jack-shafts and belts needed to power the lathes, sawmills and other equipment necessary to keep 400 employees busy at the armory.

Fourneyron’s turbine and the subsequent Kilburn and Boyden types were themselves followed by increasingly efficient turbines, including the Leffel double turbine, John B. McCormick’s mixed-flow turbine, and the New American and Special New American turbines.  All of these are known as outward-flow reaction turbines (which are reminiscent of cinder, sand or fertilizer spreaders – but with water spraying out at the bottom).


A different type of turbine, called an inward-flow (or radial-flow) reaction turbine, was developed by James B. Francis in 1849.  In the snail-shaped Francis turbine, water is drawn into a spiraling funnel that decreases in diameter.  Used at the beginning of the 20th century mainly to drive jack-shafts and belts for machinery in textile mills, Francis-type turbines soon became the type favored for hydroelectric plants and are the type most frequently used for that purpose today.  This <link to an image>, apparently taken in Budapest before 1886, shows what looks to be a Francis turbine being installed on the vertical axis rather than the horizontal.


A “runner” is the part of a turbine with blades or vanes that spins.  As with any other turbine, the scale of dimensions can be adjusted up or down to suit individual needs.  Although small Francis turbines are produced, the ones used in large hydroelectric power stations are impressively huge – some producing more than a million horsepower each (1 megawatt = 1,341 hp).  The largest and most powerful Francis-type turbines in the world are in the Grand Coulee Dam (Washington, USA).  The runners of the turbines there have diameters of 9.7 meters and are attached to generators producing as much as 820 MW each.  China’s Three Gorges Dam is capable of the world’s largest electrical output, however, with 32 main generators of 700 MW each, for a total 22,500 MW optimum output including two smaller plant-power generators.  Located between Brazil and Paraguay, the world’s second largest dam in terms of generating capacity is the Itaipu dam, with 20 Francis turbines powering 700 MW generators.  In 2012 and 2013 Itaipu’s annual electrical output actually surpassed that of Three Gorges due to the amount of rainfall and available water.


Another type of reaction turbine, developed by the Austrian engineer Viktor Kaplan in 1913, looks like a boat propeller.  The blades or vanes on a Kaplan hydro turbine are adjustable, allowing the turbine to be efficient at different workloads or with varying water pressures.  Although complicated and expensive to manufacture, the Kaplan design is showing up more frequently around the world, especially in projects with low-head, high-flow watersheds.  They can be found working in the vertical or the horizontal plane.  Large Kaplan turbines have been working continuously for more than 60 years at the Bonneville dam, on the Columbia River between Washington and Oregon, several hundred miles downstream from the Grand Coulee dam.  Both dams were started at about the same time during the Depression as part of Roosevelt’s (FDR’s) “New Deal”.  Small, inexpensive Kaplan turbines (without adjustable vanes) can be made to work in streams with as little as 2 feet of head.


The so-called “Tyson” turbine looks like it could qualify as a Kaplan turbine, but this modern example of micro hydroelectric technology encases its own generator in a waterproof housing.  The unit is submerged in a stream, usually suspended from a small tethered raft.  The stream can be shallow, but obviously a high flow rate will yield the best electrical generation.

Yet another type of water turbine is loosely referred to as a “crossflow turbine”.  In the early 1900s two individuals on opposite sides of the world independently contrived about the same turbine design.  A Hungarian professor named Bánki and an Australian engineer named Michell invented turbines that combine aspects of both a reaction (constant-pressure) turbine and an impulse (free-jet) turbine.  The runner of a Bánki-Michell (or Ossberger) crossflow turbine is cylindrical and resembles the barrel fan one might find in a forced-air furnace or evaporative swamp cooler.  The design uses a broad rectangular water jet that travels through the turbine only once but passes each runner blade twice.  The moving water has two velocity stages and very little back pressure.


Most suited to locations with low head but high flow, low-speed crossflow turbines like this have a flat efficiency curve (the annual output is fairly constant and not as affected by a fluctuating water supply as some other designs are).  Large commercial crossflow turbines are manufactured that can handle 600 ft. of head and produce 2,500 hp.  Small homemade Bánki-Michell units have been constructed that are capable of producing about 400 watts using a car alternator with 5.5 CFS (cubic feet per second) of water from a stream with a head of only 33 inches.  These units can make considerable noise, so to keep vibrations minimized these turbines should be well balanced and spun at moderate revolutions per minute.
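Those homemade-unit figures are consistent with the standard hydropower estimate: power equals weight flow rate times head, scaled by efficiency.  A minimal sketch, where the 30% overall efficiency is an assumption for a homemade runner driving a car alternator (commercial turbines do much better):

```python
# Rough hydropower estimate: power = weight flow rate x head x efficiency.
WATER_LB_PER_FT3 = 62.427        # weight of water
FT_LB_PER_SEC_TO_WATTS = 1.3558  # unit conversion

def hydro_watts(flow_cfs, head_ft, efficiency=0.30):
    """Estimated electrical output given flow (ft^3/s) and head (ft)."""
    power_ft_lb_per_sec = WATER_LB_PER_FT3 * flow_cfs * head_ft
    return power_ft_lb_per_sec * FT_LB_PER_SEC_TO_WATTS * efficiency

# The homemade unit described above: 5.5 CFS through 33 inches of head
print(round(hydro_watts(5.5, 33 / 12)))   # ~384 W, close to the ~400 W claimed
```

The same function shows why impulse turbines chase head instead of flow: doubling either one doubles the output, but head comes from cheap pipe running uphill rather than from a bigger stream.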


Two rising celebrities in the world of mini or micro hydroelectric technology are both impulse turbines.  The Pelton wheel or runner usually works in the vertical plane, and the somewhat similar Turgo in the horizontal.  Water pressure is concentrated into a jet that impacts the spoon-shaped cups of the Pelton or the curved vanes of the Turgo.  These systems capitalize on high-head, low-flow water sources.  Turgo runners are sometimes quite small (3 or 4″ in diameter) and are designed to run at high speeds.  A small uphill water source and enough penstock (piping) to reach it are the main requirements for making one of these small impulse turbines useful.  Under the right circumstances a small Pelton or Turgo wheel of just a few inches in diameter is capable of producing perhaps 500 watts.  In the absence of running streams, snow pack or plentiful rainfall, an individual living in a mountainous area might still be able to collect up-slope groundwater from perforated pipes buried in boggy areas, springs or the drainage ditches alongside roads.  A long run of water hose, polyethylene or polyvinyl chloride (PVC) pipe could conduct the water down slope, gaining another pound per square inch of pressure for every 2.31 feet of drop.  Water catchment from barn and house roofs could be redirected to holding cisterns and used by these little turbines when appropriate to augment other alternative off-grid power systems.


Built 1901 – used to power the mining town of Victor, CO. Courtesy of Gomez.

The Pelton wheel was patented in 1880, but Lester Allan Pelton actually got the idea from using and examining similar Knight water wheels in the placer mining gold fields of 1870s California.  Employing water often diverted by sluices to a holding pond before being collected into a penstock and dropped further, miners washed entire hillsides away with jets of high pressure water.  The tip end of this water cannon was a nozzle called a “monitor” and there was no ‘off button’.  Most of these hydraulic mining monitors spewed water around the clock, so it was probably just a matter of time before some enterprising miner attempted to convert that wasted energy into useful mechanical energy by spinning a wagon wheel with pots and pans attached to its rim.  While ‘Knight wheels’ (the first impact water turbines) were originally constructed to power saws, lathes, planers and other shop tools, some were actually used in the first hydroelectric plants built in California, Oregon and Utah.  Lester Pelton’s innovation was to extract energy more efficiently from a water jet by splitting the cup and deflecting the splash out of the way.


Between the 1870s and the 1890s, innovations in both hydroelectric turbines and alternating current were occurring at a breakneck pace.  The first hydroelectric power schemes began to appear from 1878 onward and for several years produced only DC current.  In the three years between 1886 and 1889, the number of hydroelectric power stations in the U.S. and Canada alone quadrupled from 45 to over 200.  AC development milestones during this period include step-up and step-down transformers, single phase and polyphase (three phase) AC, and great improvements in the distance of power transmission.  <This site> provides an interesting history and timeline on the maturation of AC power.

The Ames hydroelectric power plant in Colorado claims to “be the world’s first generating station to produce and transmit alternating current”.  Perhaps that claim should be amended to specify only “AC for industrial use”.  Originally the Ames plant attached a 6 foot tall Pelton wheel to a Westinghouse generator.  The largest generator ever built up to that time, it made 3,000 volts of single phase AC at 133 Hz.  The Pelton wheel was driven by water from a penstock with a head of 320 feet.  The power was transmitted 2.6 miles to an identical alternator/motor driving a stamp mill at the Gold King Mine.  The mine owners chose this newfangled electricity over steam powered machinery because of the prohibitive cost of shipping coal by railway.  In 1905 the Ames power plant was rebuilt with a new building, two Pelton wheels with separate penstocks from two water sources, and a General Electric generator of slightly less output capacity.  After 123 years this facility’s impulse turbines are still producing electricity.

The success of the Ames power plant, along with a well-executed 1893 World’s Fair exhibit by Tesla and Westinghouse, helped determine a victor in the famous “War of the Currents” and, more immediately, who would win the contract for the prestigious Adams power station soon to be constructed at Niagara Falls.


The main characters in the ‘War of the Currents’ were (from left to right above) the DC proponents Thomas Edison and J.P. Morgan and their AC rivals Nikola Tesla and George Westinghouse.  Pride, patents, reputations and big money were at risk in this somewhat ridiculous conflict.  At its peak the quarrel was exemplified by Edison going about the country staging demonstrations wherein he electrocuted old or sick farm & circus animals with ‘dangerous’ AC current.  It is rumored that the electric chair used for executions was itself created due to a secret bribe from Edison.  In response Tesla staged some carefully controlled demonstrations where he shocked himself with AC to prove its safety.  In truth both DC and AC currents are potentially deadly at higher voltages, but AC may ‘win’ slightly because its alternating fluctuation can induce ventricular fibrillation (where the heart loses coordination and rhythm).

* For those that may not know: Edison was a prominent inventor who formed 14 companies and held 1,093 patents under ‘his’ name, although his formal education consisted of only 3 months of schooling.  The largest publicly traded company in the world (General Electric) was formed by a merger with one of Edison’s companies.  J.P. Morgan was one of the most powerful banker/financier/robber barons in the world in the 1890s.  He reorganized several railroads, created the U.S. Steel Corporation, and bailed the government and U.S. economy out of two near financial crashes – once in 1895 and again in 1907.  He was also self-conscious about his large nose and did not like to have his picture taken.  Recognized as a brilliant electrical and mechanical engineer, Tesla never actually graduated from his university.  Immigrating to the U.S. in 1884, Tesla even worked for Edison before the two had a falling out.  Westinghouse attended college for 3 months before receiving the first of his 100 patents and dropping out.  He went on to found 60 companies.

Although AC has been the favored method of current transmission for the last century, DC power never fully capitulated in the War of the Currents.  Considering storage benefits, DC may someday stage a spectacular comeback.  In cities like Chicago and San Francisco an old DC grid may still run parallel to its AC complement.  Most consumer electronics convert AC into DC anyway.  DC offers some advantages over AC, including battery storage, which provides load leveling and backup power in the event of a generator failure.  There is no convenient way to store excess AC power on the GRID, so it is shuffled around for as long as possible.


Alternating current originally offered an advantage over direct current in its ease of transmission.  High voltage / low current travels more efficiently in a wire than low voltage / high current.  The introduction of the transformer (which works with AC but not DC) allowed AC to be “stepped up” to a higher voltage, transmitted, and then stepped back down to usable power at the destination.  DC current (under the Edison scheme) had to be generated very close to its final destination or otherwise use expensive and ungainly methods to achieve transmission over longer distances.  Voltage drop (the reduction of voltage due to resistance in the conducting wire) affects both currents, and due to that resistance some power is always lost as heat during transmission.  AC, however, suffers an additional resistance loss that does not affect DC.  “Skin effect” is the tendency of AC to conduct itself predominately along the outside surface of a conductor rather than through the conductor’s core.  The whole wire is not being used – just the skin.  This skin effect resistance increases with the frequency of the current.  This phenomenon, along with new technology for manipulating DC voltages, has recently encouraged several companies to construct new High Voltage Direct Current (HVDC) power lines for long distance transmission.  The Itaipu Dam mentioned earlier, for example, transmits HVDC over 600 kV lines to São Paulo and Rio de Janeiro – some 800 km away.
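The transmission advantage described above falls straight out of Ohm’s law: line loss is I²R, and for a fixed delivered power the current shrinks in proportion as voltage rises.  A minimal sketch with made-up numbers (10 kW sent down a line with 5 Ω of resistance):

```python
# Line loss = I^2 * R, where I = P / V (all numbers hypothetical).

def line_loss_watts(power_w, volts, resistance_ohms):
    current = power_w / volts              # amps needed at this voltage
    return current ** 2 * resistance_ohms  # power dissipated as heat

for volts in (120, 2_400, 24_000):
    print(f"{volts:>6} V: {line_loss_watts(10_000, volts, 5.0):,.1f} W lost")
```

At 120 volts the wire would dissipate more than the 10 kW being delivered; stepped up to 24,000 volts, the same line loses under a watt.  This is why the transformer settled the long-distance question in AC’s favor.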

The huge dams built in the U.S. were not created primarily to provide electrical power to customers but to control and redirect water for agriculture.  Even today the bulk of the power created by those dams is used to pump water back uphill so that it can be broadly distributed by irrigation.  In 2008 the U.S. Energy Information Administration (EIA) estimated that only 6% of the nation’s power was generated hydroelectrically, and that amount has changed little in the last 5 years.  The EIA does predict future growth for photovoltaic and wind generated power.  Canada, with a much smaller population, supplies itself with a greater percentage of hydroelectric power than the U.S. and also has more kinetic energy available in terms of exploitable water resources.


- Wind turbines, water turbines, Archimedes screws and centrifugal pumps run in reverse can all be mounted to the same types of alternators or generators.  Small or miniature turbines can be affixed to a wide range of DC motors from tools, toys, treadmills, electric scooters, old printers, stepper motors and servos.  Commonplace AC induction motors from laundry machines, blowers, furnaces, ceiling fans, tools and other sources can be converted into brushless low-rpm alternators by rewiring them or installing permanent magnets in the armature.  Usually, but not always, in a modest off-grid power scheme the AC current from an alternator or magneto needs to be rectified into DC so that the energy can be stored in a deep cycle ‘battery sink’.  Automotive alternators contain their own rectification, but these are less than ideal for turbines for a couple of reasons.  Charge controllers and inverters are also pertinent subjects in the discussion of alternate energy.  These topics may be addressed in some future post.  For now a final image (of a rectifying ‘full wave bridge’) and some miscellaneous video links are offered.
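A full wave bridge is just four diodes arranged so the load always sees current flowing in the same direction: the negative half of each AC cycle is flipped positive.  A minimal sketch of the idea (the 17 V peak is a made-up number; real diodes would also drop roughly 1.4 V across the two conducting legs):

```python
import math

def bridge_rectify(v_ac):
    """Ideal full-wave bridge: the output is the magnitude of the input."""
    return abs(v_ac)

peak = 17.0  # hypothetical alternator peak voltage
wave = [peak * math.sin(2 * math.pi * i / 100) for i in range(100)]  # one AC cycle
rectified = [bridge_rectify(v) for v in wave]

print(min(rectified) >= 0)  # True - no negative half-cycle remains
```

The rectified output still pulses; in practice a filter capacitor (and then a charge controller) smooths it before it reaches the battery bank.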


CIVIL 202 – Pelton wheel project -  3 minute video – school science project

Micro Hydro Electric Power- off grid energy alternatives – 7.5 minute video – something of an advertisement

Home Made Pelton Wheel – rather long 12 minute video

Turning Green In Oxford – 9 minute video / power by Archimedes screw

Algonquin Eco-Lodge – 8 minute video – generating by reversing water flow through centrifugal pump

Hot Stuff 2

Good bronze sculptures are still being made today.  source: Google free to use or share filter

Bronze & Brass

As the ancients toyed with fire they created glasses and ceramics and discovered several metals.  In antiquity only about 7 elements (gold, copper, silver, lead, tin, iron and mercury) were recognized as being metals.  Although their ores were used to create alloys, arsenic, antimony, zinc and bismuth were not determined to be unique metals until the 13th or 14th centuries AD.  Adding the discovery of platinum in the 16th century (ignoring the Incas, who were apparently smelting it earlier), that makes a total of only 12 unique metals (out of 86 known today) that mankind recognized before the 18th century AD.  Gold and copper were the first metals to be widely used.  Because of their low melting points, however, the first metals to be smelted might have been tin and lead.  Lead beads unearthed in Turkey have been dated to about 6,500 B.C.  Galena (lead ore) is fairly common and widely dispersed, whereas tin oxide (as in cassiterite, SnO2) is relatively rare.  Neither gold, copper, tin nor lead, however, was as influential in the course of human events as bronze; a hard alloy which upon occasion has been mixed from all four simultaneously.

The progression of mankind’s technological advancement is broadly categorized into the Stone, Bronze and Iron Ages, the Renaissance, the Industrial Age and presently what some are pleased to refer to as the “Information Age”.  Erudite scholars of some future millennium, however, may look back and call ours simply the “Plastic Age”.  The “Bronze Age” is a subjective term depending upon location and culture, but it implies the mining and smelting of copper that has been hardened with an alloying metal to make bronze weapons and tools.  The Bronze Age began around 3,150 BC (or BCE) in Egypt, 3,000 BC in the Aegean area, 2,900 BC in Mesopotamia and about 1,700 BC in China.  Gold, copper and lead metalworking (and silver to a lesser degree, because it is found in the same galena ore which produces lead) dates from about 6,000 BC onward and so actually predates the so-called 4th millennium BC ‘Bronze Age’.  The first gold and silver coinage comes much later, in the 7th century BC in the Lydian, Persian and Phoenician cultures.  The first exchangeable copper coinage might have appeared in Greek and Roman societies.

* Both copper and silver ions are germicidal and can inhibit or kill bacteria in water.   The ancient Egyptians employed copper medicinally.  Some modern hospitals incorporate copper in the form of water faucets and doorknobs.

Copper Alloys

We usually think of bronze as being an alloy of copper and tin and brass as being merely an alloy of copper and zinc, but as usual the real picture is more complicated than that.  There are many alloys of copper that are called bronze or brass, and occasionally the distinction is not clear.  There is tin bronze, leaded tin bronze, manganese bronze, silicon bronze, phosphor bronze, aluminum bronze, arsenic bronze and beryllium copper.  The ‘red brass’ copper alloy historically used for casting cannons actually contains about 5 times more tin than it does zinc.  Many modern coins (like the American dime, quarter and half dollar) have a copper core sandwiched between two layers of cupronickel (an alloy of 75% copper & 25% nickel).  The Swiss franc and the American nickel (5 cent piece) are actually solid homogenous cupronickel.  There are a large number of distinctly recognized mixtures of brass as well.  ‘German silver’ (or nickel silver or nickel brass) is similar to the aforementioned cupronickel but contains about 20% zinc.  ‘Muntz metal’ (copper, zinc and a trace of iron) is an alloy thought up a couple of centuries ago to provide a cheaper protective sheathing for ship hulls than the copper it replaced.  ‘Nordic gold’ is an alloy of 89% copper, 5% aluminum, 5% zinc, and 1% tin that is used in several Euro coins.

Two Egyptian bronzes  source: Google free to use or share filter


The first bronzes were arsenic bronzes.  While this might be attributable to the fact that the copper ores then smelted usually contained some indigenous arsenic, at some point early metalworkers began deliberately adding more arsenic to make a harder alloy.  Arsenic bronze is much harder than plain, excessively ductile copper and allows for the creation of useful tools, weapons, body armor and sculptures that will stand up under their own weight.  At some later point in history, tin gradually supplanted arsenic as the preferred alloying metal for bronze.  Tin bronze is not harder or mechanically superior to arsenical bronze.  It is likely, however, that tin (which was scarce and which many civilizations had to acquire by trade) produced an alloy that required less work hardening to yield a sharp sword, or one that would fill a casting faithfully without leaving voids.  Arsenic sublimates (passes directly to vapor without melting) at a temperature lower than the melting point of copper, so some arsenic oxide could be lost during casting.  Arsenic vapors are unhealthy and can attack the eyes, lungs and skin.  The use of tin probably afforded more control over the forging and casting processes.

* Arsenic (atomic element #33) is a metalloid that is used to harden both copper and lead.  Modern lead/acid car batteries usually feature some arsenic as well as some antimony within their lead components.  Long called the “Poison of Kings and the King of Poisons”, arsenic is also a common and widespread groundwater contaminant.  Arsenic compounds were used as a vesicant (blistering agent) and/or vomiting agent in the Lewisite and Adamsite gasses used after WWI.  Arsenic is also used in the green pressure treated wood preservative known as CCA (Chromated Copper Arsenate).  Within CCA, copper acts to slow the decay caused by fungus and bacteria, arsenic kills insects, and chrome just helps bind or fix the other two to the wood.  When used as a discreet poison, the “poudre de succession” is not deadly in small amounts but can accumulate in the body until it becomes lethal.  Two early analytical tests developed in forensic toxicology were concerned with determining the presence of arsenic, namely the Marsh test and the Reinsch test.  Significant concentrations of arsenic in ground water are found in parts of New England, Michigan, Wisconsin, Minnesota, both Dakotas, Bangladesh, Vietnam, Cambodia, and China.

* In ancient times lead was too pliable and ductile a material to make useful tools, but that very characteristic allowed the Greeks and Romans to hammer and roll plumbing pipes to conduct water.  In Rome lead was used to line water cisterns and to pipe water to public drinking fountains or into the homes of the very rich.  The possibility of lead poisoning from Roman pipes, however, would seem to have been greatly reduced by calcium buildup within them; Rome sits upon or near large limestone and travertine deposits.  A likelier source of lead poisoning would have been lead dinnerware and acidic foods.  Wine, for instance, can easily leach toxic lead from goblets and cups.

* Incidentally, bronze swords were often preferable to wrought iron swords.  Even in Roman times, officers commonly carried bronze weapons while the rank and file carried iron ones.  Perhaps bronze swords were superior, or simply more prestigious, because they looked less crude and did not rust.  The Hittites (c. 2000 – 1200 BC) are generally regarded as being the first iron-smiths.  Although their iron weapons were less brittle than hardened bronze weapons, they still had to be beaten or wrought from a bloom of roasted, not melted, ore.  The inability to cast iron with a socket complicated the attachment of spear and arrow heads to their shafts.  It took about 3,000 years for furnace smelting technology to progress from copper melting temperatures to iron melting temperatures.  The first iron ore to be exploited was probably “bog ore” – a precipitate of iron oxide found in marshy areas, created by bacterial action and the decomposition of iron minerals.  Gradually the mining of rich hematite and magnetite ores took over.  The Greeks used wrought iron beams in the construction of the Parthenon (between 447 and 432 BC).  The Romans occasionally used T-shaped wrought iron girders in construction (as in the Baths of Caracalla).  Eventually it was realized that carbon from the charcoal created a stronger iron (steel).


the “Artemision Bronze” – circa 460 BC
source: Google free to use or share filter

The Greeks were masters of bronze casting.  Although the Greeks probably cast as many bronze sculptures as they chiseled from stone, the stone statues have remained where the bronze statues have not.   Every time a new war came along, bronze statues were hacked to pieces to provide the valuable metal needed to forge new weapons and body armor.   Very few Greek bronze statues have survived the ages and those that have been discovered in modern times (as in the image above – excavated in 1928) have been found underwater.   The Greek “Riace bronzes” date from about 460-450 BC and were discovered by a snorkeler just offshore of Riace, Italy in 1972.  A link is provided to this website because its fine pictures of the Riace bronzes can be enlarged.

Bronze alloys used for casting have the innate ability to expand slightly before they set and therefore all the fine nooks, crannies and scratches inside a mold are filled in with detail.  Life-sized hollow sculptures were made by a process known as the “lost wax process” (or as “investment casting” in jewelry-making or  industrial vernacular).

Horses of St Mark's or Triumphal Quadriga  from fotopedia image: courtesy of Nick Thompson

Horses of St Mark’s or “Triumphal Quadriga”
from fotopedia image: courtesy of Nick Thompson

It has not been determined whether the horses of the “Triumphal Quadriga” are of Greek or Roman origin.  They are presumed to date to the 4th century BC and were carried off by Venetian troops following the Fourth Crusade (1202 – 1204) from the hippodrome in Constantinople, where they had long resided.  Napoleon stole the horses from the Venetians in 1797 and took them to Paris, but they were returned to St Mark’s Basilica in Venice following the Battle of Waterloo in 1815.

Corinthian helmet : 500–490 BC


The iconic Greek Corinthian bronze helmet is an enigma to modern scholars because no one has determined exactly how it was constructed.  Our understanding of the construction of other helmet designs of the period is less controversial.  Neither casting nor forging by hammer stroke alone adequately explains how the classical Corinthian helmet was built.  The best explanation seems to be that it was a product of both.

Most bronze or brass alloys are denser and heavier than iron or steel.  In the 17th century almost all naval guns and terrestrial cannon were cast in bronze.  Bronze was the best material for the purpose, but while more durable than iron it was also more expensive.  With the beginning of the 18th century, technology allowed the displacement of bronze artillery by the more affordable cast iron pieces needed to supply growing armies and navies.  The weight of cannon and their placement aboard fighting ships (heaviest at the bottom) was an important consideration, but weight was perhaps a more pressing concern for armies that had to tote them over hill and dale, streams and rivers.  Early French cast iron naval guns were notoriously dangerous and exploded with frequency, while British, American, Swedish and Russian cast iron naval cannon were usually much superior.

The famous or iconic American Liberty Bell is an interesting story in bronze failure.  The metal in the Liberty Bell was cast not once but three times: first in an English foundry when the bell was commissioned by the Pennsylvania Assembly in 1751, and then twice again by American foundry workers John Pass and John Stow.  The bell has a circumference of 12’ around its rim and weighs 2,080 lbs.  The bronze composition consists of 70% copper, 25% tin, and a remainder of lead, zinc, gold, silver and arsenic.  The bell gained its moniker “Liberty Bell” from zealous abolitionists in the 1830s, not from any association with the Revolutionary War.  The one ton bell traveled or toured a lot considering its weight, and there is disagreement over when its crack began.  Vigorous ringing encouraged a hairline fracture in the brittle alloy to grow into a wide crack.  The Liberty Bell rang for the last time on Washington’s birthday in 1846, its sound after that no longer being acceptable.

The first brasses seem to have appeared somewhere around 500 BC and are sometimes referred to as calamine brass (calamine is zinc ore containing zinc carbonate or zinc silicate).  In early brasses calamine ore was introduced to molten copper and the zinc was readily absorbed, producing an attractive and useful alloy.  Zinc melts at 787 °F, a temperature not much greater than that required to melt lead and one which can be produced by a simple campfire.  Zinc boils and turns to vapor at 1,665 °F (907 °C), which is still lower than the 1,984 °F needed to turn copper into a liquid.  Long unappreciated as a metal because heat caused it to escape as a vapor, zinc did not enter production until about the 12th century AD in India, the 16th century in China and (in large scale production) after 1738 in Europe.  In modern day zinc smelting, zinc sulfide is first roasted into an oxide called ‘zinc calcine’.  From there either electrolysis or any one of several complicated processes involving sintering (the electrothermic fusing of powders) or even the distillation of zinc fumes might be employed to retrieve the metal.


What usually distinguishes a brass from a bronze is the presence of zinc and a brighter, more attractive golden color.  Brass is a softer, more malleable alloy than bronze and has some properties that make it uniquely desirable for certain applications.  Brass is used in bearings, gears, valves, locks, keys, doorknobs and clothing zippers because it has a low friction coefficient.  Brass does not spark as other metals might when struck.  Because of its desirable acoustic qualities and malleability, brass is the favored material for several musical instruments – especially horns.  Brass is the favored material in ammunition cartridge casings for a couple of reasons.  First, brass has the capacity to expand and contract quickly.  When a cartridge is fired in a firearm, the brass expands to fill the breech and prevents hot gases from escaping rearward.  The brass then contracts to allow the casing to be ejected.  This action occurs quickly enough to allow for high cyclic rates of fire in machine guns.  Also, brass’s softness and low friction work more fluidly and cause less wear in the firearm’s steel mechanism than would any other metal.  While lead might be added to bronze to improve castability, lead is added to brass to improve machinability.  California mandates that manufacturers of brass keys employ no more than 1.5% lead within keys sold in California, or otherwise label the product as potentially hazardous.

Do it Yourself


If the ancients were able to smelt copper, iron, gold and silver eons ago, it seems reasonable that a lone individual should be able to duplicate that feat today.  Someone attempting to melt copper, for instance, will soon realize that it is not a simple task and that it takes concentrated energy to accomplish.  Above is an image of the bottom half of a homemade crucible furnace; the top has been temporarily removed.  In the center of a charcoal fire sits a crucible made from a scrap of square steel tubing that has had a bottom and two links of chain (for lifting) welded to it.  On the right side a rusty steel pipe conducts forced air from a hair drier into the bottom of the fire.


Above is the mold for the same bottom section of this crucible furnace, made from a plastic flowerpot and some tin cans.  A refractory mix was poured or tamped into the bottom 2” of the mold and allowed to dry, anchoring the wire reinforcement.  The tin cans were placed and now the mold is ready to receive more refractory cement between the large can and the plastic flowerpot.  Refractory is simply a building material that retains its integrity at high temperatures.  The refractory used was a mix of sand, Portland cement, fireclay and Perlite.  The ratios of the constituents used closely resembled this recipe.


Above is a downward image of the mold and a view of the finished result.  Note that on the left a bolt passes through a small hole in the set (or dried) bottom layer of refractory.  Presumably if the crucible were to leak during a cook, the molten metal would be able to run out the bottom rather than pool in the bottom of the furnace.

A few notes about this furnace:  

- Even while the crucible and metal it held were white hot inside, the lid could be removed with bare hands – if done quickly.

- Fire at this heat, combined with the forced air, is very destructive of steel crucibles, both inside and outside.  Big flakes of iron oxide are almost guaranteed to slough off and fall into and contaminate your precious metal.  The best crucibles are made of porcelain or graphite.

- The forced air should enter the furnace at an angle to encourage a whirl or vortex within the fire. 

- Although stoneware ceramics are fired and glazed at temperatures exceeding the melting points of aluminum, brass and copper, such ceramics cannot withstand so rapid a rise in temperature.  You can expect a stoneware coffee cup crucible to shatter in a matter of minutes in such a furnace.

- The interior dimensions of this furnace are a bit too small to achieve a useful copper-melting heat from coal or charcoal alone.  There is simply not enough room for a crucible and enough charcoal at the same time.  Propane or waste oil would be better fuels for a furnace of this interior dimension.  These fuels can be introduced into the air pipe before it enters the furnace.  In the case of waste oil (any used automotive oil, diesel fuel or vegetable oil), it can be gravity fed, thinned with a lighter volatile fraction, regulated by a simple valve and/or pushed along by a little additional air pressure.

Jewelers crucible – compliments of GOKLuLe


With a little effort an interested reader can find a wealth of information and instructables about crucible furnaces on the Internet.  Here are a few links to help such a reader get started.

In this video the furnace is constructed of stacked firebricks.   Brick furnace in snow.

This guy provides a good 3 part series on the construction of a backyard foundry.  In this video however he constructs his own graphite crucible.   Most people might simply purchase a graphite or porcelain crucible.  It is not necessary for a novice to go through all this trouble, but the information presented is useful.  “Making a Graphite Crucible“.

This video features a rather large furnace, requiring two men to handle the crucible.  “A Brass Casting Demonstration“.


If there is a Hot Stuff part 3 to come in the future it will discuss ceramics and glass.

Hot Stuff – metal, ceramics & glass

Hot metal - crucible furnace for melting metal

How hot is HOT?   

   While physicists agree upon an absolute lowest temperature (absolute zero – where even subatomic particles don’t move), there is no consensus or formally defined limit for a maximum temperature.  The best approximation of a maximum temperature might be the Planck temperature (1.4168 × 10^32 Kelvin).  That’s about 100 million million million million million degrees, in other words.  Within a thermonuclear bomb a temperature of 50 million °C is needed to initiate the fusion of the deuterium and tritium fuel.  The temperature at the core of our sun is assumed to be about 15 million °C.  The fission bomb “Little Boy” dropped on Hiroshima generated a heat of about 300,000 °C at its core.  The surface of the sun and the earth’s inner core are both much cooler at about 5,778 K (5,505 °C) each.  We have no instruments like thermometers or thermocouples to physically measure even these relatively low temperatures, but must instead rely upon idealized thermodynamic theory to extrapolate these numbers.
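The Kelvin and Celsius figures quoted here differ by a constant 273.15, and Fahrenheit by the familiar 9/5 ratio; a two-line converter reproduces the sun’s-surface number above:

```python
def k_to_c(kelvin):
    return kelvin - 273.15       # Kelvin and Celsius share the same degree size

def c_to_f(celsius):
    return celsius * 9 / 5 + 32  # a Celsius degree is 9/5 of a Fahrenheit degree

print(round(k_to_c(5778)))  # 5505 - the sun's surface in Celsius
```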

Hot Iron and Steel (first)

   Through chemical decomposition, oxidization and other natural processes operating over geologic time, few metals are found in native metallic form.  Most of the earth’s extractable metals are dispersed as small flakes or inclusions within an ore of some type.  Since gold, copper, silver and metals of the ‘platinum group’ are not very reactive chemically, early man was occasionally able to find bits of these “native metals” just lying upon the ground.  Mankind likely first encountered metallic iron, however, in the form of a deposited meteorite.  Humankind’s technological advancement through the early ages is typically categorized by its tool making progress.  An archeologist considers tools of the ‘Neolithic Age’ to be more complicated than those of the general “Stone Age” – but less so than those of the ‘Bronze Age’ (some distinguish between Copper and Bronze Ages because the latter implies the more sophisticated smelting of alloys).  It took primitive civilizations about 3,000 years to progress from the Bronze Age to the “Iron Age” alone.  Also associated with or occurring concurrently with a period’s tool making technology were changes in religion, artistic styles, agriculture and societal structure.  Until about 9,000 years ago most cultures had no reliable method to initiate fire.  It would take humankind another 88 centuries to develop an easy method to start a fire (as in the 19th century phosphorus friction match), but that is another story.  As humans experimented with the heat of fire they even cooked rocks and dirt – and significantly thereby created or discovered metals, ceramics and glasses.  Metal, ceramic and glass can be used to manufacture trade items, which rank right up there with other achievements (like plant and animal domestication, division of labor and written language) in defining what civilization really is.  This half-baked discourse intends to explore some simple metallurgy, ceramics and glass making.

  Smelting is the separation of metal from its ore.  The reduction of aluminum using electrolysis instead of heat can also be called smelting.  Smelting with heat is often assisted by adding a reducing agent and a flux.  When smelting iron, coke or charcoal is added to the crushed ore within a traditional blast furnace and acts as the reducer in the redox (reduction-oxidation) reaction.  Carbon monoxide is produced as the oxygen is stripped from the iron ore.  Limestone, carbonate of soda, potash and lime might be used as a flux or slag forming agent to absorb impurities into a slag that can be separated from the liquid molten metal.  With the low grade copper ores available today, soap bubbles and pine oils are frequently used as reagents to detach the metal from its crushed ore slurry (a process called froth flotation).  The cyanide process (cyanidation) can be used to extract gold, copper, zinc or silver from their low-grade ores.  Mercury dissolves gold and can form amalgams with several other metals as well.  Easily separated from its crushed ore, the gold can further be separated from the amalgam (in small samples) by squeezing it through a rag of chamois leather or by baking it in a potato.
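The redox chemistry just described can be summarized in a few textbook reactions (a simplified sketch – real furnace chemistry involves many intermediate steps):

```latex
% Coke burns, then regenerates carbon monoxide (the working reducing agent)
C + O_2 \rightarrow CO_2 \qquad\qquad CO_2 + C \rightarrow 2\,CO
% Carbon monoxide strips the oxygen from the ore, leaving metallic iron
Fe_2O_3 + 3\,CO \rightarrow 2\,Fe + 3\,CO_2
% The limestone flux decomposes and binds silica impurities into slag
CaCO_3 \rightarrow CaO + CO_2 \qquad\qquad CaO + SiO_2 \rightarrow CaSiO_3
```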

  Before turning the discussion to simple blacksmithing, the melting points (°F and °C) of some familiar materials are listed in ascending order below.  The temperature of molten lava depends upon its chemical composition.

Tin 449° F 232° C Sn, #50
Lead 621° F 327° C Pb, #82
Zinc 787° F 419° C Zn, #30
Antimony 1,167° F 630.6° C Sb, #51
Magnesium 1,202° F 650° C Mg, #12
Aluminum 1,220° F 660° C Al, #13
lava 1,472° F 800° C
Silver 1,763° F 962° C Ag, #47
Gold 1,947° F 1,064° C Au, #79
Copper 1,984° F 1,084° C Cu, #29
lava 2,012° F 1,100° C basalt
Silicon 2,577° F 1,414° C Si, #14
Nickel 2,651° F 1,455° C Ni, #28
glass 2,700° F 1,500° C soda lime
Iron 2,800° F 1,538° C Fe, #26
Titanium 3,034° F 1,668° C Ti, #22
Platinum 3,215° F 1,768° C Pt, #78
kaolin 3,275° F 1,800° C porcelain
Vanadium 3,470° F 1,919° C V, #23
glass 4,200° F 2,300° C silicon-
Molybdenum 4,753° F 2,623° C Mo, #42
Tungsten 6,192° F 3,422° C W, #74
 * Carbon C, #6

* Allotropes (forms) of carbon have the highest thermal conductivities of all known materials and they don’t melt.  Carbon undergoes sublimation at about 9,980 °F (5,530 °C) which is to say that the element transitions from a solid to a gas without passing through a liquid phase.  Carbon is also the fourth most common element in the universe by mass, forms more recognizable compounds than any other element and is the chemical basis or building block for all known life. 
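For readers wanting to double-check or extend the table, the Fahrenheit/Celsius conversion is simple arithmetic. A minimal sketch in Python (the function names here are mine, not from any library):

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

def f_to_c(fahrenheit):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5 / 9

# Spot-check a couple of entries from the table above
print(round(c_to_f(3422)))  # tungsten: 3,422 °C -> 6,192 °F
print(round(c_to_f(1538)))  # iron: 1,538 °C -> 2,800 °F
```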

In the previous table antimony, vanadium, molybdenum and tungsten are used in small amounts to make alloys and are only included for the sake of curiosity.  Antimony is not a metal but a metalloid.  Like gravel complements the integrity of concrete, antimony combined with tin hardens lead for bullets or linotype (the lead alloy historically used for typesetting).  The biggest use for antimony today is in the production of lead-acid automotive batteries and in hardening the lead wheel weights used when mounting and balancing new automobile tires.  Vanadium, molybdenum and tungsten serve mainly as steel alloying elements or as catalysts.  Vanadium is useful in tool steels like drill bits, where it permits higher working temperatures without sacrificing temper (hardness).  Molybdenum improves steel by restricting its expansion and softening at higher temperatures and was commonly used in artillery pieces and tank armor.  Molybdenum also improves the corrosion resistance and weldability of steel.  Tungsten is a rare element, but its very high melting point found it use in light bulb and x-ray tube filaments.  Used in cutting tools and abrasives, tungsten-carbide tipped implements are almost three times harder or stiffer than plain steel.  Chromium is yet another metallic element often found alloyed within steel.

*  House fires and even forest fires can sometimes reach impressive temperatures.  Stones in masonry chimneys have been known to explode like bombs when the attached cabin or dilapidated house burns down.  The pressure probably comes from steam created by moisture trapped within the rocks.  Uniform Building Codes (UBC/IBC) stipulate that steel beams, if used to support the roofs of modern wood framed homes and buildings, need to be shielded from possible flame.  Without flame and heat protection steel girders might quickly soften, sag and collapse, leaving potential victims with no exit from the building.  At their flame front wildfires can heat the surrounding air to 1,470 °F (800 °C).  If fed by wind the internal temperature of a wildfire might surpass 2,192 °F (1,200 °C).  That’s a temperature high enough to substantially soften steel or liquefy several other types of metal.

Cupola Furnace source: Released into the public domain (by the author)



A “bloomery” was the earliest form of furnace capable of smelting iron from ore.   Having a channel for air flow at the bottom, the simple bloomery structure was typically sacrificed to retrieve the metal.   Early blacksmiths often worked with iron wrought from a bloom.   A ‘bloom’ (cruder than ‘pig iron’ from a blast furnace) is a porous, impure mass of iron and slag (video links one & two).  The hot bloom was hammered, reheated, pounded, twisted and pulled to squeeze out the slag.  Wrought iron is the almost pure iron product produced by all that extra labor (another video).   Wrought iron is very rare today – its main source being antique structures or implements.   In its place modern blacksmiths use malleable and ductile low-carbon or mild steels.   Low carbon steel contains about 0.05–0.15% carbon while mild steel is about 0.15–0.3% carbon.  Steels with higher carbon proportions quickly become harder and more brittle.  High carbon steel might contain between 0.6–2.0% carbon.
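The carbon bands quoted above lend themselves to a tiny classifier. This sketch uses the ranges from the text; the “medium carbon” band between 0.3% and 0.6% is my own fill-in, since the text skips over it:

```python
def classify_steel(carbon_pct):
    """Rough steel grade from carbon content (percent by weight)."""
    if carbon_pct < 0.05:
        return "nearly pure iron"
    elif carbon_pct <= 0.15:
        return "low carbon steel"
    elif carbon_pct <= 0.3:
        return "mild steel"
    elif carbon_pct < 0.6:
        return "medium carbon steel"  # band not covered in the text above
    elif carbon_pct <= 2.0:
        return "high carbon steel"
    else:
        return "cast iron"

print(classify_steel(0.25))  # mild steel
```

Above roughly 2% carbon the material is no longer considered steel at all, but cast iron.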

Although not as old as bloomeries, the modern blast furnaces used to smelt ore today are merely embellishments of a design used in the Middle Ages (or, in China, since the 1st century AD).   Fed from the top by conveyor belts of ore, coke or coal and limestone (flux), the big chemical reactor called a “blast furnace” works continuously, year after year, without being shut off.   It might take an individual atom of iron 8 or 9 days to work its way to the bottom of the furnace.  The name “blast furnace” reflects the fact that air (hot air in modern times) is forced into the bottom.  Crude “pig iron” is the product of a blast furnace, and this is processed later to become steel.  The “Bessemer process” for economical industrial steel making was patented in 1855 and was the prevalent steel making method for about a century afterward.  The process involved re-melting the pig iron and removing impurities by blowing air through the molten iron.   Following WWII, regenerative “open hearth furnaces” began displacing the earlier Bessemer converters.  Using exhaust gasses to preheat incoming fuel and air, open hearth furnaces operated much more slowly, thereby offering more control over the process, allowing the refining of scrap metal along with pig iron and reducing the amount of undesirable nitrogen introduced to the reaction.  By the 1990s most industrial open hearth furnaces were themselves displaced by the Basic Oxygen Furnace (BOF) and the (non-induction) Electric Arc Furnace (EAF).  The BOF is in essence a refined Bessemer converter where pure oxygen, rather than air (which is about 78% nitrogen), is injected into the molten metal.   Perhaps situated next to a blast furnace, the BOF accepts already molten pig iron, mixes in perhaps 20-30% scrap steel and injects oxygen at supersonic velocities.  Great heat is created, the scrap steel is melted and carbon and silicon are oxidized.

Iron ore is basically iron oxide, so producing iron metal necessitates removing the oxygen.   This “reduction” is accomplished by using carbon.  At elevated furnace temperatures the strong chemical iron-oxygen bonds in ore are swapped for even stronger carbon-oxygen bonds.  Coke is analogous to charcoal or char cloth, and all three are products of “pyrolysis” (the act of driving off volatiles with heat in the absence of oxygen).  Char cloth (from fabric) and charcoal (from wood) are created in an oxygen deficient environment.  Likewise coke is made in a ‘coke oven’ where coal is heated in the absence of air to produce a hard porous material of almost pure carbon which will burn twice as long and produce twice the heat of the original coal.  Coke won’t burn by itself without the forced air or oxygen blast from a blower.  While coke is universally preferred over coal for steel making, it is also important to make it from coal selected for low sulfur content.  That nasty, odorous and very effective wood preservative used on telephone poles and railroad sleepers is actually a byproduct of the coking process known as coal tar “creosote”.

Blacksmithing 101

Before the appearance of acetylene cutting torches, arc welders, electric drills and saws, the principal utilitarian metalworking tools were forges, hammers and anvils.  It is still possible occasionally to find portable forges being used by cowboys to heat branding irons or by farriers to bend horseshoes.  Forges retain practical applications in this day and age because iron and steel become almost docile and easy to work when hot.  Looking like backyard charcoal grills, the portable blacksmith forges labeled b & c in the following image probably served in just that capacity on several occasions throughout the last century.


Introduced in the late 1870s, forges resembling images a, b & c replaced the traditional bellows with a geared turbine or blower.  In examples a & b the blower is powered by lever; in example c the geared blower is cranked by hand.  In example d a shop vacuum is used to blow a strong stream of fresh air up through the bottom of the forge.  First published in Popular Mechanics magazine in 1941, the concept behind example d incorporates a kitchen sink.  One bay of the sink is lined with a cementitious refractory or firebrick while the other can be filled with water for quenching hot metal.  The airflow from the shop-vac or other blower is split between the bottom of the pit and a tube which creates an upward draft in the hood and chimney flue.  A PVC ball valve between the vacuum hose and metal drain pipe adds control over the airflow at the bottom of the forge.


A London pattern anvil
– a design perfected some 300 years ago

Appreciating the many complications of metallurgy takes high science, but a rural blacksmith can somewhat refine iron or steel by understanding just a few basics.  By heating metal until it is soft, the blacksmith can easily bend and shape it, cut it, weld it and punch better holes than he could cut with a drill.

The temper of hard steel can be ruined and lost by overheating it.  High-carbon / hard steel must be worked at a lower temperature than mild steel would normally be.  Annealing or softening of carbon steel is accomplished by getting it red hot and then setting it off to the side in the ashes to cool slowly.  Annealing might be useful to relieve stresses inside a bent piece of steel before it is hardened.  Hardening of carbon steel is accomplished by cooling it quickly – usually by dunking the item in water.  Steel hardened this way can sometimes be too brittle.  To temper a piece of steel to a desired compromise between brittle and tough, the hardened item is reheated once again – but this time to a lower temperature.

From a microscopic perspective mild steel has a fibrous or stringy structure while hard steel has a fine granular structure.  The blacksmith can distinguish between grades of steel by observing the sparks thrown off when grinding it.  Sparks from mild steel are red or yellowish and fly in straight lines.  Sparks from hard steel are lighter and brighter in color, sprangled in flight and seemingly explosive.  The blacksmith develops the ability to judge temperature by observing the color and glow of the underlying heated metal as well as the color of the oxide or scale formed on its surface.  Ranging anywhere between dull red and bright white, the glow should be judged in the shade, not in direct sunlight.

Wrought iron or mild steels are forged at yellow heat and, using sand as flux between the pieces, welded at white heat.  High carbon steels are forged at a lower red to low orange heat and are generally not welded by the blacksmith method.  Overheating tool (hard) steel is likely to destroy its grain structure.  The critical temperature for tool steel is indicated by a dark red color and ranges somewhere between 1,300 and 1,600° F depending upon carbon content.  Heavily hammering a piece of steel upon an anvil at a little above the critical temperature has the effect of reducing the grain size and refining the steel.  The hammer strokes performed by a blacksmith are not thoughtless or random but are instead precise and calculated.  Light hammer strokes are to be avoided while medium, heavy and extra heavy strokes each have their appointed applications.


In addition to a forge and a good heavy anvil that won’t bounce around, a blacksmith might have a vise, a pair of tongs, an assortment of cross-peen hammers and a few hardies.  A hardy is an accessory which fits in the hardy hole; it has a square base so it won’t rotate by accident.  Hot metal is cut on the table (or chipping block), which is made of softer metal than the face – the face being made of hard steel and kept free of mars and scratches.  Hot chisels are for cutting hot metal and are made of mild steel.  Cold chisels are made of hard steel and should be used for cutting cold metal only.  Holes in hot metal are initiated by punching them upon the face from both sides with a round punch, before being moved over the pritchel hole for completion.

As fuel for the forge the blacksmith can use hardwood, charcoal, coal or coke.  A high quality soft coal free of sulfur is considered the best choice.  Charcoal, which comes from wood, and coke, which comes from coal, are both produced by the process of pyrolysis.   Although the volatiles are driven off of the constituents from which they are made, the resulting charcoal and coke have higher carbon contents and therefore make more efficient fuels.  The problem with coke in a blacksmithing forge is that a steady stream of air from a bellows or fan is needed to maintain its combustion.

* Charcoal can be made by digging a hole in the ground, filling the hole with wood and then igniting it.  Once the wood is burning furiously the hole is smothered, perhaps with sheets of roofing tin, and dirt is spread over that.

The fire in a forge is kept small and tidy, its size proportionate to the work required.  Clinker or slag is removed periodically, fuel added as needed and perhaps water even sprinkled around the rim of the fire to keep the combustion from spreading to an area larger than necessary.  Normally a uniform heat needs to be applied to a piece of metal and so the item is laid horizontally in the fire, not pointed down into it.

Note to self: Not only were the redoubtable subjects of ceramics and glass not discussed, but important workable metals like bronze & brass deserve honorable mention also.  These subjects need to be attended to in a subsequent post…


Time & Direction (Part 2)

  While survival books describe, and survival kits usually contain, the obligatory compass, few modern outdoor adventurers actually know how to use one.  After examining the utility and limitations of compasses, this post continues to advocate that accurate navigation is dependent upon accurate timekeeping.  Two and a half centuries ago seafarers relied upon their best guess, called “dead reckoning”.  The appearance of the sextant and chronometer combined with celestial observation finally allowed for the fairly accurate fixing of a position.  Modern commercial airliners have compasses, radar and other radio backup navigation devices.  While today pilots and some Naval officers are trained to navigate by alternate means, it is upon GPS that they predominantly rely.  To provide a good 3-dimensional fix or position, a GPS receiver needs signals from at least four orbiting satellites, each broadcasting its own orbital position and, critically, the accurate time (carried on-board by atomic clocks).


Even in the hands of an amateur a compass can be useful.  For example, once a simple bearing of a distant object is taken, a person can walk through darkness or dense vegetation to reach it by following the bearing.  He can also return to his starting position by reversing his bearing (either adding or subtracting 180°).   If a couple of unique landmarks are visible in the distance then re-locating a position can be simplified by using a compass.   If for example a prospector wanted to bury a treasure in the desert but be able to return to retrieve it, he could take bearings on two widely separated landmarks at that location.   Months later, he could return to the general area and walk until he acquired his first bearing to the associated landmark, and then walk along that vector until he acquired his second bearing to its associated landmark.
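The add-or-subtract-180° rule for a return bearing is easiest to get right with modular arithmetic. A quick Python sketch (the function name is mine):

```python
def back_bearing(bearing_deg):
    """Return the reciprocal of a compass bearing (0-360 degrees)."""
    return (bearing_deg + 180) % 360

print(back_bearing(45))   # walked out at 045 -> walk home at 225
print(back_bearing(300))  # walked out at 300 -> walk home at 120
```

The modulo keeps the answer in the 0–360° range, so one never has to remember whether to add or subtract.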

Assuming that one is in the wild and wants to leave a point of origin on a long walk but return, then a compass, pencil and sheet of lined notebook paper can be very handy.  Because of hills, gullies or other obstructions the direction of travel might change several times.  Using the lines of the notebook paper as visible representations of magnetic north/south lines, changes in compass bearings and the distances in-between (measured perhaps with something like paces – every drop of the left foot) can be recorded and projected upon the paper, so that a different route – a direct short cut back to the point of origin – can be navigated if so desired.
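The notebook-paper plot described above is really just vector addition. Here is a hedged Python sketch (names and conventions are mine) that totals a list of (bearing, paces) legs and computes the direct bearing and distance back to the starting point, with bearings measured clockwise from north:

```python
import math

def shortcut_home(legs):
    """Given legs as (bearing_deg, distance) pairs, return the
    (bearing_deg, distance) of the straight line back to the start."""
    east = north = 0.0
    for bearing, dist in legs:
        east += dist * math.sin(math.radians(bearing))
        north += dist * math.cos(math.radians(bearing))
    # The homeward vector is the negative of the net displacement
    home_bearing = math.degrees(math.atan2(-east, -north)) % 360
    home_distance = math.hypot(east, north)
    return home_bearing, home_distance

# Walk 100 paces north, then 100 paces east...
bearing, paces = shortcut_home([(0, 100), (90, 100)])
print(round(bearing), round(paces))  # ...the shortcut home is 225 deg, ~141 paces
```

Paces are only a rough unit, of course, but the geometry is the same whether the legs are measured in paces, yards or miles.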


Compasses are consistent within small localized areas but when longer distances of travel are involved they become unreliable for navigation unless adjusted.  The problem is that compasses don’t point to true north.  Compasses don’t even point to magnetic north come to think of it.  Compasses actually align themselves with the sum of many – sometimes conflicting lines of magnetic force.  Compass readings can be affected by metal objects worn or carried by a human, by the metal in a ship or aircraft and depending upon location – by unseen anomalies beneath the surface of the earth.  What we call the magnetic north pole doesn’t actually stay in one place and lately its rate of movement has been accelerating.

It is assumed (since no one has actually seen it) that the earth’s plastic rock mantle and liquid iron outer core are heated by continuous and ongoing radioactive decay.   Complex convection currents in the molten outer core cause not just one but several magnetic dipoles to occur – and these have different orientations and different intensities.


Another factor called inclination or magnetic dip also pulls upon a compass needle.   Near the equator, magnetic lines of force are roughly parallel to the surface of the Earth.   Near the poles however, the needle dips dramatically toward perpendicular to the Earth’s surface, following the planet’s lines of magnetic force.   There is a somewhat confusing distinction between the averaged geomagnetic pole and the regular magnetic dip pole, where in the northern hemisphere a compass needle should point directly downward. * Kyoto University charts of geomagnetic and magnetic dip pole locations.

  The magnetic dip pole has wandered more than 1,000 kilometers from its location 180 years ago.  Presently its position is something like 1,170 kilometers (727 miles) from the true or geographic North Pole.   Back in 1831 the magnetic dip pole was moving about 6 km per year, but that rate has accelerated to a disturbing 24 km per year.   Every 200,000 years or so the earth’s magnetic poles do a ‘flip’, and such a geomagnetic reversal is long overdue.

* A web site suggesting that the current rate of pole shift is alarming.

Most maps or nautical charts are drawn with a true geographic north/south orientation as determined by the axis of the earth’s rotation.  Such maps will often feature a “compass rose” somewhere to indicate compass magnetic variation or declination.  Since there are so many fluctuating and unpredictable forces influencing compass readings, these maps or charts need to be updated on a regular basis to be useful.  Maps or charts older than a year or two might possess incorrect magnetic variation which could lead to gross navigational error.  If for a small example a hiker were unknowingly following a compass error of 14°, he would stray off course about ¼ mile for every forward mile of travel.
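That ¼-mile figure follows directly from trigonometry: the off-course distance is roughly the distance traveled times the tangent (or, for small angles, the sine) of the bearing error. A quick check in Python:

```python
import math

def cross_track_error(miles_traveled, error_deg):
    """Distance strayed off course for a given constant bearing error."""
    return miles_traveled * math.tan(math.radians(error_deg))

print(round(cross_track_error(1, 14), 2))  # ~0.25 miles off per mile traveled
```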



Higher resolutions of these two magnetic declination images are publicly provided by NOAA (National Geophysical Data Center) and are available <here>.   [ One should note that between these two images the (+/-) colors of the isogonic lines are reversed. ]  From the second image it can be appreciated that for an airplane traveling up or down the North American Atlantic coast, continual compass corrections must be made to maintain precision.  In the past when compasses were the dominant navigational aid for pilots or ship captains, astute attention was paid to such variation.  Today a pilot of a small plane would probably plot a course on a map and then convert those true north bearings to the magnetic north bearings his instruments would actually use in the cockpit.  Many of today’s affordable GPS receivers, which read true north, are capable of quickly calculating magnetic north for a pilot.
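The true-to-magnetic conversion the pilot performs is a single modular subtraction, remembering the old mnemonic “east is least”: subtract an easterly declination from a true bearing to get the magnetic bearing. A sketch, using the common sign convention of east-positive, west-negative declination:

```python
def true_to_magnetic(true_deg, declination_deg):
    """Convert a true bearing to a magnetic bearing.
    Declination is positive east, negative west."""
    return (true_deg - declination_deg) % 360

# With 10 deg EAST declination, a true course of 090 is flown as magnetic 080
print(true_to_magnetic(90, 10))   # 80
# With 15 deg WEST declination, true 090 becomes magnetic 105
print(true_to_magnetic(90, -15))  # 105
```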

Useful magnetic field calculators which will estimate the magnetic declination for any location can be found on the Internet <Canadian>.

Compasses have their shortcomings: they are affected by unseen forces within the earth, by the metal of a ship or aircraft, by the watch on your wrist or the metal trinkets in your pants’ pocket, and even by solar flares.  Only when these complications are understood and compensated for do compasses deserve respect as dependable navigational aids.

Mariners of old had to be on constant vigil night and day, prepared for any eventuality from oncoming bad weather to avoiding obstructions like uncharted islands.  In those days any charts (sea maps) they may have possessed were inaccurate.  The technology necessary to accurately fix a position upon a chart was missing.  Exploration and commerce by sea were exploding by the beginning of the 18th century, and nautical disasters due to navigational errors were very common.  For instance in 1707 one of the worst maritime disasters in British history occurred in darkness when part of the returning Mediterranean Fleet wrecked on the Scilly Isles.  Four ships, more than 1,400 sailors and the miscalculating British commander – Admiral Shovell – were lost <link>.   Ships and seamen weren’t otherwise getting lost in the typical sense, but they lacked the ability to return to a previously discovered island out in the middle of an ocean, for example.  By 1714 the problem of accurate navigation had become so acute that the British Parliament passed the “Longitude Act”, awarding a £20,000 (about $4.8 million today) cash prize for a suitable solution for determining longitude at sea.


The open ocean can quickly become one of the most inhospitable, life threatening environments imaginable, and this was especially true in previous centuries when sailing ships were little more than slipshod wooden corks bobbing about in a liquid immensity.  The life expectancy of the average sailor was low but the pay was high.  Falling out of the shrouds or being washed overboard in a storm were constant hazards.  On the long exploratory voyages beginning around the 15th century, sailors also contended with cramped living space, squalor, bad food and water, hunger, and communicable and other diseases like scurvy (caused by malnutrition and a lack of vitamin C).  Although the modern reader will hopefully never be subjected to such persistent rigors of day to day survival, he or she might try to appreciate how crucial navigation at sea was then to one’s own life or death.  These explorers, facing the unknown on the far side of the world, were adventurers and heroes in the strictest sense, and their accomplishments in the face of hardship have seldom been matched in modern times.

Before the first publication of lunar and astronomical data in 1767, the main (European/Western) navigational aids in use in addition to the compass were things like astrolabes, cross staffs, backstaffs, quadrants, traverse boards and chip logs.  To measure speed in knots, a ‘chip’ might have been simply a large clump of tethered wood thrown over the side of a ship.  As the ship moved forward, the line paid out through a seaman’s hands over a fixed time interval and the equidistant ‘knots’ in the cordage were counted.  ‘Dead reckoning’ is the process of determining one’s present position, or plotting one’s future course, by projecting course and speed from a known past position.  The effects of wind drift (leeway), current drift and steering error are usually not considered.  For centuries whaling and fishing ships, which did not require critical navigational accuracy, managed to exploit and return from the sea with just these early navigation aids combined with dead reckoning.
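The chip log arithmetic is worth a quick sketch. By tradition the knots were spaced about 47 feet 3 inches apart and counted against a 28-second sandglass – a spacing chosen so that each knot paid out per glass equals one nautical mile per hour. The exact figures below are the commonly quoted historical ones, not from the text above:

```python
FEET_PER_NMI = 6076.12   # feet in one nautical mile
KNOT_SPACING_FT = 47.25  # traditional spacing: 47 ft 3 in
GLASS_SECONDS = 28       # traditional sandglass interval

def chip_log_speed(knots_counted):
    """Ship speed in knots (nmi per hour) from knots paid out in one glass."""
    feet_per_hour = KNOT_SPACING_FT * knots_counted / GLASS_SECONDS * 3600
    return feet_per_hour / FEET_PER_NMI

print(round(chip_log_speed(5), 2))  # counting 5 knots -> ~5 knots of speed
```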

Lunar distance navigation is an early type of celestial navigation, but the two phrases have come to signify separate processes.  When using the Sun by day or Polaris by night, determining latitude was a fairly straightforward process whereas accurately determining longitude was not.   Moving approximately the distance of its own diameter every hour against the background sky, the moon’s position in relation to another prominent celestial body can be used to render a fair approximation of GMT and therefore the longitude.  The angle between the moon and another body like Jupiter is corrected for parallax and refraction and looked up in a table of lunar distances to determine the time at which such a distance should occur.  The process is tedious and complicated.  The first authoritative Nautical Almanac, full of moon transit data and other celestial sight reduction data, was published 246 years ago by the Royal Greenwich Observatory and is still in publication.  The main utility of the complicated lunar distance method lay in its ability to supply GMT in the absence of an adequate timepiece.
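Some rough numbers show why the lunar distance method was so demanding. The moon moves about half a degree (roughly 30 arc-minutes) per hour against the stars, so a 1 arc-minute error in the measured angle costs about two minutes of GMT – which at the equator is about half a degree, or 30 nautical miles, of longitude. A back-of-envelope sketch (the 30 arc-minute figure is an approximation):

```python
MOON_ARCMIN_PER_HOUR = 30.0  # approximate lunar motion against the stars

def lunar_time_error_minutes(angle_error_arcmin):
    """Minutes of GMT error caused by an error in the lunar distance angle."""
    return angle_error_arcmin / MOON_ARCMIN_PER_HOUR * 60

def longitude_error_nmi(angle_error_arcmin):
    """Resulting position error in nautical miles at the equator.
    The earth turns 15 deg per hour; 1 deg of longitude = 60 nmi at the equator."""
    hours = lunar_time_error_minutes(angle_error_arcmin) / 60
    return hours * 15 * 60

print(lunar_time_error_minutes(1))  # ~2 minutes of time
print(longitude_error_nmi(1))       # ~30 nautical miles
```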


– the latitude can be estimated with a simple protractor

The octant was a navigational aid that made its first physical appearance by about 1742 and soon replaced or supplanted older instruments like the astrolabe and cross staff.  Mathematician and physicist Sir Isaac Newton dreamed up the reflecting quadrant concept some four decades before one was actually constructed.  Stemming from the Latin word “octans”, the octant’s frame spans one-eighth (45°) of a circle, yet thanks to its double-reflecting mirrors it can measure arc angles as large as 90°.  Octants were very useful for most meridional altitude measurements of the sun and for some celestial readings, but the arc of measurement was limited or inadequate to capture both the horizon and bodies positioned more directly overhead.


The sextant gets its name from the Latin word for one sixth – “sextāns” and has an arc spanning 60°.  Also using two mirrors the sextant can measure angular distances between two objects that are as much as 120° apart.  Taking a reading with a sextant is simple but knowing what to do with that number – not so much.


To use the sextant the telescope must be focused on the horizon. The celestial body to be shot is found and the sextant aimed at it.  The body is brought down to the horizon by moving the arm along the arc and then the arm is clamped.  The micrometer knob makes small adjustments while the instrument is swayed slightly from side to side, until the heavenly body just brushes the horizon.  When this is achieved a note is instantly made of the time, seconds first, then minutes and hours, then the name of the body and its observed altitude. Every second of time counts – an error of 4 seconds equates to an error of a nautical mile in the position.   In modern practice for people still skilled in this science, a sextant plus a navigational almanac with trigonometric sight-reduction tables still permit navigation by Sun, Moon, visible planets, or any one of 57 navigational stars whenever the horizon is visible.
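The four-seconds-per-mile rule quoted above comes straight from the earth’s rotation rate: 360° in 24 hours, and one arc-minute of longitude equals one nautical mile at the equator. A quick verification in Python (the function name is mine):

```python
import math

def position_error_nmi(clock_error_seconds, latitude_deg=0):
    """Longitude error in nautical miles caused by a clock error.
    The earth rotates 360 deg / 86,400 s; 1 arc-minute = 1 nmi at the equator."""
    degrees = clock_error_seconds * 360 / 86400
    return degrees * 60 * math.cos(math.radians(latitude_deg))

print(position_error_nmi(4))  # a 4-second error costs ~1 nautical mile at the equator
```

The cosine term reflects the fact that meridians converge toward the poles, so the same time error costs less east-west distance at high latitudes.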

The sextant largely displaced the octant because it was a more capable instrument.  The two were used side by side for many years, with the less expensive octant being relegated to the more routine tasks wherever applicable.  The Lewis & Clark expedition to explore the American West toted both an octant and a sextant on their overland two and a half year, 7,689 mile trip.  When encountering mountains obstructing his view of the horizon, Lewis employed an artificial horizon made from a dish of water.  This <link> illustrates some of the processes involved when correcting these optical readings.


– gimbaled chronometer (c. 1840)
Google free to use or share filter.

Getting back to the Longitude Act of 1714 and the British Naval and Merchant Marine’s desperate quest for an easier, more accurate method of establishing position, we finally fit the last piece of the jigsaw puzzle with a crucial timepiece.  The first chronometers were so important and so expensive that they are still recognized by individual names.  After many years of effort and perseverance, a clock maker named John Harrison (1693-1776) finally produced by 1755 a timepiece (H4) of exceptional accuracy that was capable of withstanding the rigors of life at sea (continual and occasionally rough motion, humidity, a salty environment, temperature change).  Finally accurate navigation and map-making became simplified and achievable because, along with the sextant and nautical almanac, an accurate representation of GMT could now be carried around the world.  Harrison’s recognition and reward were to be long frustrated by ‘The Board of Longitude’, but that is another story.

* A chronograph is a wristwatch with a stopwatch feature. 

* The H4 made two trial trips, to Jamaica and Barbados.

* A simpler, less accurate chronometer named K2 accompanied an expedition in search of the Northwest Passage in 1771 and later the infamous Captain Bligh before the mutiny on the Bounty.   K2 changed hands many times.

* K1 which was Kendall’s faithful copy of Harrison’s H4 participated in Cook’s 2nd & 3rd voyages.  K3 was also carried on Cook’s 3rd voyage.

* Gene Roddenberry’s protagonist ‘James T. Kirk’, captain of the starship Enterprise in Star Trek books, TV series and movies, was probably named after the real life explorer ‘Captain James Cook’ – who literally did go “where no man had gone before”.   Skilled in mathematics, Cook’s chart making of Newfoundland and the St. Lawrence during the ‘Seven Years’ War’ brought his talent to the attention of the Royal Society of London.   Accompanied by other talented men, Cook’s 1st voyage (in HM Bark Endeavour) lasted 3 years (1768-71).  The expedition was the first to circumnavigate and map New Zealand and to explore and map the then unknown east coast of Australia.   Cook’s 2nd three year voyage (1772-75), in the ship HMS Resolution, among other feats circumnavigated the globe at a very inhospitable southern latitude (crossing the Antarctic Circle), found a solution for scurvy and claimed and mapped South Georgia and the South Sandwich Islands.  On his final voyage (1776-79) Cook (accompanied by K1, K3, HMS Resolution and midshipman-turned-lieutenant William Bligh) discovered Hawaii, explored the western American coastline from California to the Bering Strait and, in search of a Northwest Passage, sailed for many weeks above the Arctic Circle.   Returning to Hawaii instead of Tahiti to resupply provisions, Captain Cook was killed and literally cooked by the Hawaiians – who may not have eaten him but kept his bones.   Of all the peoples they encountered (including a Russian in Alaska, Eskimos, Nootkas, Australian aborigines, Tongans, Samoans, Fijians, Māoris and Indonesians) it was the too often visited Tahitians that Cook and Bligh viewed with something akin to contempt.

This post merely glosses over the mechanics of practical navigation and goes into no detail.  Oceanography and meteorology are important earth sciences that a well-rounded marine navigator should have some knowledge of.  Geodesy is a science concerned with the exact positioning of points on the surface of the earth.  Variations in gravity affect measurements of the earth’s surface and are therefore pertinent to understanding precision navigation.  Navigational mathematics requires a solid understanding of plane and solid geometry and of plane and spherical trigonometry.  For the reader wishing to learn more about traditional navigation there are many resources – but perhaps none more respected than the two old publications mentioned next.

Patterned after an older book of the Royal Navy but much improved, Nathaniel Bowditch’s “American Practical Navigator” has been published for over two centuries now and has been revised and updated about 55 times.  The current Bowditch retains its well-written curriculum on traditional navigation technique and adds new information on radio direction finding, LORAN, radar navigation, satellite positioning and so forth.  The current Bowditch is published by the Defense Mapping Agency Hydrographic/Topographic Center but is also available as a free 35.9 MB, 882-page PDF download.

Now printed in its 67th edition, Chapman’s Piloting & Seamanship is only 100 years old.  This 928-page book began as a training manual requested and commissioned by the Assistant U.S. Secretary of the Navy, Franklin D. Roosevelt.  Full of navigational educational information, this publication is geared more to the needs of small watercraft than to large oceangoing craft.  The book is not free, but with 3 million copies having been printed it can easily be found in libraries or in new or used versions online, perhaps on eBay.  <link>




The GPS (Global Positioning System) works by earthbound receivers passively interpreting microwave signals broadcast from satellites orbiting in space.  GPS satellites continually broadcast their time, orbital location, health status and a system almanac (the locations of the other satellites) back to earth.  Not the first satellite positioning system, GPS is still the only fully operational such system and is of critical strategic and economic importance.  As this technology continues to supplant older navigational technology, reliance upon GPS keeps growing.  GPS’s ever-growing hegemony faces attack on several different fronts.

Each of the 32 or so currently working GPS satellites in medium Earth orbit houses a cesium-based atomic clock.  Although other atomic clocks can exploit hydrogen or rubidium, it is the very precise resonance frequency of cesium-133 that these space-borne timepieces employ.  *Cesium (or caesium (Cs) – atomic #55) is a very reactive alkali metal that is liquid just above room temperature.  Cesium-133 itself is stable (it is the fission product cesium-137 that is notoriously radioactive), but cesium reacts very violently with water and is employed in certain military flares that can only be seen using infrared or night vision equipment.  Also, the SI (metric) second is currently defined by periods of radiation within the cesium-133 atom.  That very constant frequency (9,192,631,770 Hz) corresponds to an oscillation in the SHF X-band microwave region.  The GPS program began somewhere around 1973 and the first experimental satellite was launched in 1978.  In 1978 Korean Air Lines Flight 902 wandered (due to bad navigation) into Soviet airspace and was fired upon and forced down.  Another Korean civilian airliner (KAL 007) was shot down by a Russian interceptor in 1983, when something like 8 GPS satellites (still in the feasibility stage of development) might have been in orbit.  After the 269 people aboard the Boeing 747 of KAL 007 were killed, the U.S. president (Reagan) directed that some GPS technology should be made available to civilian navigation for the public good.  Following another presidential directive (Clinton) in 1996, the GPS “selective availability” feature was eventually turned off and global users could receive a non-degraded signal.  These navigational satellites have a short lifespan.  As of 2013, sixty-four GPS satellites have been launched; the older non-functioning ones are retired into disposal orbits.

With the help of their atomic clocks, GPS satellites each generate two L-band microwave carrier signals which beam information back to earth – L1 at 1.57542 GHz and L2 at 1.2276 GHz.  Any GPS receiver on earth must determine its own position from the differing travel times of signals from several satellites.  At least four GPS satellite signals are required to give a proper 3-dimensional fix (time, altitude, latitude and longitude).  Most receivers, however, have software to extrapolate an approximate location while receiving signals from only three satellites (using the last known location in the calculation, for example).  Some civilian applications that take advantage of the practically free and highly accurate GPS time signal are: the timing of traffic signals, cell phone base stations synchronizing their signals, and stock exchanges and financial institutions time-stamping money transfers.
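For the curious, here is a little back-of-envelope sketch of the timing arithmetic involved.  All numbers here are illustrative, not real GPS data, and the simple travel-time-to-range idea below is only the first step of a real receiver’s math:

```python
# Illustrative sketch: how signal travel time maps to range, and why the
# receiver's clock error matters.  Numbers are rough, for intuition only.

C = 299_792_458.0  # speed of light, m/s

def pseudorange(t_transmit, t_receive):
    """Apparent range to one satellite: travel time times c."""
    return (t_receive - t_transmit) * C

# A satellite roughly 20,200 km up: its signal takes about 67 ms to arrive.
travel_time = 20_200_000.0 / C
print(round(travel_time, 4))   # about 0.0674 s

# A mere 1 microsecond of receiver clock error shifts every pseudorange
# by ~300 m -- which is why a fourth satellite is needed, to solve for
# the receiver's clock bias along with x, y and z.
error_m = 1e-6 * C
print(round(error_m))          # about 300 m
```

This is why a cheap receiver with a quartz crystal can still deliver meter-class fixes: the cesium-grade time reference rides in on the fourth satellite signal.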

In times of war the Department of Defense still has the ability to deny precision positioning information to enemies.  It would not do to provide the very guidance necessary to allow ballistic missiles to come crashing down upon your head.  “Selective availability”, an intentional corruption that denied full system accuracy to unauthorized users, has been turned off.  The “P” (Precision) code can be encrypted, however, so that only DoD-authorized receivers can use it.  There are also export restrictions on GPS receivers capable of working at high altitudes and high speeds (as might be used by ballistic missiles).

GPS signals are necessarily weak because there is not a whole lot of power available aboard a satellite to broadcast them.  In localized areas the weak GPS signals are vulnerable to deliberate jamming, unintentional interference and falsified (“spoofed”) signals carrying the GPS signature.  On the home front GPS reception can be threatened by upstart communication companies.  After acquiring radio spectrum close to the GPS spectrum, one prospective broadband company proposed an action that would have drowned out GPS reception for everybody.  * This article (Dec. 2012) indicates that GPS is vulnerable to deliberate manipulation and attack.

* Russia, China and the European Union either have or are constructing their own satellite positioning systems to compete with GPS.  The Russian GLONASS (GLObal NAvigation Satellite System) has comparable capability to GPS and was completed with 24 satellites in orbit by 1995.  As with GPS, however, GLONASS satellites require periodic replacement.  Because fabricating and launching these satellites is a costly drain on the Russian economy, the GLONASS navigational system has not always been fully functional in recent times.  In July 2013 a rocket carrying 3 new GLONASS satellites crashed.  Some commercial receivers and cell phones are capable of using both GLONASS and GPS signals.  The Russians are currently trying to establish a few GLONASS monitoring stations in the U.S.

* Called “Galileo”, the EU’s version of GPS has been delayed because of the complication and sheer expense of launching and establishing the necessary satellite infrastructure.  Presently (Nov. 2013) there are 4 Galileo satellites in orbit; a total of 22 will be needed to reach full operational capacity.

* The Chinese alternative to GPS is called “Compass” or “Beidou” (after the Big Dipper constellation).  With 16 satellites already in service, the Chinese positioning system hopes to be fully global with 30 satellites by 2020.


Added 11/20/2013 — To encapsulate some additional navigational topics:

In the 1920s and ’30s civilian passenger and mail service grew rapidly, and congestion placed air traffic control problems in the lap of budding airports.  During this period German engineers and aviators were at the forefront of developing radio navigation.  Using direction-finding loop antennas, a system was developed that allowed aircraft to approach airports at night or in bad weather by “riding a radio beam”.  The Lorenz beam became the first successful blind-landing aid / standard beam approach and was followed by others like the longer-ranged Elektra-Sonnen, the British Consol, and the American VOR (VHF Omnidirectional Radio Range) which is still in use around the world today.  LORAN (LOng RAnge Navigation) was a terrestrial radio navigation system used by both ships and aircraft from WWII until three years ago (2010), when it was shut down for budgetary reasons.  Although many small craft still relied upon LORAN, it was considered redundant in view of the new GNSS (Global Navigation Satellite System) systems.
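LORAN worked on a hyperbolic principle: a receiver measured the *difference* in arrival times of pulses from two synchronized shore stations, and a constant time difference put the ship somewhere on one hyperbola (a “line of position”); a second station pair gave a second hyperbola and thus a fix.  Here is a toy flat-plane sketch of that idea, with made-up station coordinates in kilometers:

```python
# Toy illustration of the hyperbolic (time-difference) principle behind
# LORAN.  Flat-plane geometry, hypothetical coordinates in km.
import math

C_KM_PER_US = 0.299792458  # speed of light, km per microsecond

def time_difference(receiver, master, secondary):
    """Arrival-time difference (microseconds) between two station signals."""
    d1 = math.dist(receiver, master)
    d2 = math.dist(receiver, secondary)
    return (d1 - d2) / C_KM_PER_US

master, secondary = (0.0, 0.0), (400.0, 0.0)
td = time_difference((100.0, 250.0), master, secondary)
# Every point sharing this same time difference lies on one hyperbola --
# the charted LORAN line of position.  A point equidistant from both
# stations reads a time difference of zero.
```

Two such readings, from two different station pairs, intersect at the receiver’s position, which is exactly what the old LORAN charts let navigators plot.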

The modern inertial navigation system (INS) aboard aircraft, spacecraft, ballistic missiles, ships and subs calculates a dead reckoning position continuously.  A good INS computer gets input from multiple sources: gyroscopes, velocity meters, accelerometers and augmented (or corrected) satellite positioning information from land bases.  With tens of thousands of people arriving and departing simultaneously from congested airports, the importance of navigational precision and situational awareness in aircraft control towers is paramount.  Three augmented air navigation aids presently in existence are the FAA’s Wide Area Augmentation System (WAAS), the European Geostationary Navigation Overlay Service (EGNOS) and Japan’s Multi-functional Satellite Augmentation System (MSAS).


The following figures all point to the very same prominent landmark.

33° 40’ 38.28”, -106° 28’ 31.44”  (traditional)

33.6773 N, 106.4754 W  (decimal)

33° 40.638 N, 106° 28.524 W  (GPS)

There are at least three common ways to designate a position of latitude and longitude: the traditional degrees, minutes & seconds format, the decimal format, and the not-quite-standardized GPS format.  Conversion by hand between these formats can be tedious and prone to errors.  Here is a handy online converter.
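For those who would rather script it than click it, the arithmetic is simple enough; here is a small sketch (helper names are my own) that reproduces the three listings above:

```python
# Converting between the three common lat/long formats.
def dms_to_decimal(deg, minutes, seconds):
    """Degrees, minutes, seconds -> signed decimal degrees."""
    sign = -1 if deg < 0 else 1
    return sign * (abs(deg) + minutes / 60 + seconds / 3600)

def decimal_to_gps(decimal):
    """Decimal degrees -> (whole degrees, decimal minutes), the GPS style."""
    deg = int(abs(decimal))
    return deg, (abs(decimal) - deg) * 60

lat = dms_to_decimal(33, 40, 38.28)    # 33.6773
lon = dms_to_decimal(-106, 28, 31.44)  # -106.4754
d, m = decimal_to_gps(lat)
print(d, round(m, 3))                  # 33 40.638
```

Note the sign convention: in decimal form, west longitudes are negative, which is what the minus sign on -106° in the traditional listing is doing.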

Toy bullets

Realizing that this subject will not be of interest to everyone, I apologize.  There are many people in the world, however, who legally and responsibly own firearms and who can still respect these as implements of recreation – not as devices used just for killing.  I personally am fortunate enough to live in a remote, unpopulated area of the American southwest.  With ease I can move myself many miles from civilization and shoot to my heart’s content without disturbing other people.  Not having hunted in years, I don’t persecute nature’s critters – which is not to say that I would not if I became hungry.  Collecting guns and target practice are hobbies that I intend to continue to enjoy until I lose interest or unless my society ordains to completely confiscate these privileges from responsible citizens.  “Reloading” is a skill set that an overwhelmingly large percentage of the shooting public can neither perform nor is willing to learn.  With reloading equipment an enthusiast can save much money while replenishing his supply of ammunition, and with expertise he can produce better and more accurate ammunition than he can buy in the marketplace.  This post focuses on a narrow set of rare, somewhat frivolous and purely recreational ammunition that simply cannot be purchased, but must be handmade.


an assortment of different caliber pistol brass

Wax bullets are most suitable for indoor pistol target practice.  These projectiles are made from melted paraffin, candle or beeswax, and they are propelled only by a primer – no powder is used.  These projectiles are not especially accurate, they have a short range and are hardly lethal, but they should still be used with caution.  The primer alone provides enough propulsion to send the wax projectile through the side of a cardboard box or aluminum soda can at close range.

To begin, perhaps a dozen or so spent cartridge cases are “de-primed”.  Next the wax or paraffin is melted and poured into a pie plate or metal lid to a thickness of about 12 mm (½”).  It takes a while for the paraffin to solidify.  Before the wax completely hardens – at a point where it is malleable but not sticky – the empty, de-primed shell cases are pressed down into the wax, cutting off plugs which seat flush with the tops of the cases.  After the wax has hardened, each case can be primed with a new pistol primer and the cartridge is ready to go.


* Wax slugs can be used in shotguns also.  Cast in a lead ‘slug mold’ perhaps, these toy bullets are likewise driven only by the propellant from a primer.  Shotgun primers are much more energetic than pistol and rifle primers, however.  A shotgun primer combined with a wax shotgun slug (lots of mass) creates enough kinetic energy to easily penetrate sheetrock walls.  These should be used outside.


At least one company still makes plastic practice ammunition.  Available in rimmed (revolver type) calibers .38 / 357 and .44, these projectiles are reusable.  Designed to use the larger and more powerful Magnum pistol primers, these can reach a velocity of perhaps 300 ft/sec.


If one has a lead bullet mold he can make durable, reusable indoor practice bullets from hot glue.


There is a fair selection of reloading tools available, but these tools are not numerous because demand is not great.  On the low end, a simple kit like the one imaged above costs about $30.  Such kits are available for almost any rifle or pistol caliber, and although a bit slow they are perfectly adequate.  The original once-fired brass might need to be reshaped once in the resizing die for the cases to seat comfortably in the gun’s chamber.  Thereafter, for wax, plastic or hot glue bullet reloading, only the primers need to be replaced.

Next is a foray into the lost art of making “ultralight” cartridges.  The only kind of commercial ammunition available for high-powered hunting rifles is maximum power / high velocity ammunition.  Such ammunition can be hard on the rifle and in larger calibers can even become unpleasant to shoot.  Because of the prohibitive cost of such ammunition, but sometimes also because of the recoil, many hunters do not practice enough to make themselves good marksmen.  Reloaders (using researched data tables from books) have direct control over velocities, chamber pressures, recoil and ballistics.  In the past, reloading publications contained more information dedicated to what are properly called “light” loads.  Light loads are much more pleasant and economical to shoot in a high-powered rifle.  Light loads typically involved cast lead bullets driven at lower velocities to prevent deforming the bullet or smearing lead along the sides of the barrel.  Typically the bullets were cast from hardened lead (alloyed with tin and antimony), sized and lubricated with Alox (a waxy lubricant) and fitted with a brass or copper gas check on the base.  Well, “ultralight” ammunition is lower in velocity still, and you will not find its reloading information in any modern handbook.  For just a trifling cost, ultralight ammunition can turn a loud and imposing hunting rifle into a more recreational device that is pleasant to use and much less noisy.


Above is an image of a .30 caliber cartridge loaded with a #1 buckshot (7.62 mm / .30 cal) lead ball.  Just a tiny bit (1 to 3 grains) of fast pistol powder is used as propellant in the freshly primed case.  The ball is seated into the case mouth by hand pressure alone, any excess metal being easily sheared off by the brass case mouth.  A little cotton or other fluff can be inserted over the top of the powder to keep it down close to the primer in such a large casing.  Since repetitively measuring such a small amount of powder is impractical, a homemade powder scoop can be made from a spent .22 rimfire case.  A .22 Short powder scoop, with a piece of wire or paper clip soldered on as a handle, will throw about 2.2 grains of “Bullseye” powder.  A similar scoop made with a .22 Long Rifle case would throw about 3.0 grains of Bullseye.  Slower-burning pistol/shotgun combination powders like “Unique” and “Hi-Skor 700X” will also produce fine ultralight loads.

Ultralight loads can be produced for any pistol or rifle caliber.  Their purpose is to act as pleasurable short-range, low-danger and low-noise “plinking” ammunition.  The whole point of ultralight loads is lost if too much propellant is applied, because accuracy will fail.  Modern rifles have barrels with high rates of rifling twist, which is needed to stabilize jacketed bullets at very high velocities.  The old flintlock rifles used two centuries ago had a very slow rate of rifling twist (something like 1 turn in 66 inches), which was all that was needed to impart a stabilizing spin to a patched round ball.  Because a round ball makes so little surface contact with the rifling, the velocities of ultralight loads in modern rifles need to be kept very low.  Ultralight cartridge loads are much less lethal than regular ammunition, but they are not “toy” loads and should be treated with respect.


A scoped deer rifle can perform fairly accurately with ultralight ammunition at distances between 50 feet and 80 yards.  The line of sight from a gun sight or scope is a very different path from the trajectory that a bullet takes.  The bullet actually crosses the line of sight twice; once at a close distance and then again downrange, depending upon the elevation of the sights.  For a typical high-powered and scoped hunting rifle that is ‘dialed in’ for accuracy at 300 yards, a normal projectile might first cross the line of sight at around 100 feet.
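The double crossing falls right out of the geometry: the scope sits above the bore, so the barrel must be tilted slightly up, and the bullet arcs up through the sight line and back down through it at the zero range.  Here is a drag-free (vacuum) sketch of that arc; real ballistics includes air drag, and my muzzle velocity and scope height are assumed round numbers, so treat the result as illustrative only:

```python
# Vacuum-trajectory sketch of why a bullet crosses the sight line twice.
import math

G = 32.174        # gravity, ft/s^2
V = 2700.0        # muzzle velocity, ft/s (assumed)
H = 1.5 / 12      # scope height above bore, ft (assumed 1.5 inches)
ZERO = 900.0      # far zero at 300 yards = 900 ft

# Bullet height relative to the line of sight: y(x) = m*x - H - a*x^2,
# with a = G / (2 V^2); the barrel tilt m is chosen so that y(ZERO) = 0.
a = G / (2 * V**2)
m = (H + a * ZERO**2) / ZERO

# The two sight-line crossings are the roots of a*x^2 - m*x + H = 0.
disc = math.sqrt(m**2 - 4 * a * H)
near, far = (m - disc) / (2 * a), (m + disc) / (2 * a)
print(round(near), round(far))   # near crossing ~63 ft, far zero at 900 ft
```

Even this crude model lands the near crossing in the tens-of-feet neighborhood mentioned above, which is why ultralight loads can still print usefully at indoor-range distances through a scope zeroed much farther out.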


In the shot chart above one might notice that the buckshot size “BB” is actually larger than air rifle BB shot (normally .177 cal).  Chilled bird shot is usually dropped from height in a shot tower, whereas larger buckshot pellets are cast in molds.  Round-ball sizes, of course, increase in standardized steps.  Caliber is the inside dimension of a rifle barrel.  For instance, a .22 caliber rimfire cartridge has a bullet diameter of 0.22 inches (about 5.6 mm).  Gauge is the number of lead balls of a particular bore size needed to weigh a pound.  For instance, the nominal inside diameter of a 12 gauge shotgun is 0.73 inches (18.5 mm); twelve lead balls of 12 gauge diameter weigh a pound.  Bore, like caliber, is a measure of barrel diameter.  A 12 gauge shotgun could easily be called a 0.73 bore shotgun, and a .410 shotgun has a bore diameter of 0.41 inch (it is not 410 gauge).  Cannons firing round shot were usually designated by the weight in pounds of a cast iron ball that fit the barrel.
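The gauge rule is easy to check with a few lines of arithmetic.  An N-gauge bore fits a lead ball weighing 1/N pound, so from the density of lead we can recover the bore diameter (I am assuming pure lead at about 11.34 g/cm³, i.e. 0.4097 lb per cubic inch):

```python
# Recovering bore diameter from the gauge definition.
import math

LEAD_LB_PER_IN3 = 0.4097  # assumed density of pure lead

def bore_diameter(gauge):
    """Diameter (inches) of a lead ball weighing 1/gauge pound."""
    volume = (1.0 / gauge) / LEAD_LB_PER_IN3      # ball volume, in^3
    return (6 * volume / math.pi) ** (1 / 3)      # sphere: V = pi*d^3/6

print(round(bore_diameter(12), 2))   # 0.73 -- matches the 12 gauge bore
print(round(bore_diameter(20), 2))   # 0.62 -- close to the 20 gauge bore
```

Running the same formula for other gauges reproduces the familiar shotgun bore table, which is a nice sanity check on this centuries-old unit.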

Getting back to the previous buckshot image, #1 buckshot is .30 caliber in size and will produce ultralight rounds in .30 cal firearms, including: 30-30 Win, 30 M1 Carbine, 300 Savage, 308 Win, 308 Norma, 7.5 Swiss, 7.62 Russian, 30-40 Krag, 30-06, 300 H&H, 300 Win Mag and 300 Weatherby Mag, not to mention several antique pistols chambered in .30 cal.  The British .303, Japanese 7.7 and Russian 7.62x39 cartridges have a slightly larger .311 bore diameter, so if #1 buck does not satisfy then #0 buckshot will provide a tighter fit.  Size F, FF and #2 round-ball molds are rare or nonexistent, so the only common rifle or pistol calibers not matched to a popular buckshot size are the “22s” and the .270 Win and .270 Weatherby cartridges.


* In passing, “double ought buck” is or was famous in American vernacular.  With this particular shot size the balls stack densely and efficiently within a 12 gauge shotgun hull.  Nine heavy #00 buckshot pellets are commonly placed in a 2¾” 12 ga. shell, and 12 or even 15 will fit in a 3” shell.  This stacking is reminiscent of the way cannonballs were historically stacked in pyramids within easy reach of the cannons.  The bottom cannonballs might sometimes have been held in place by a brass tray with dimples, which might have been called a “brass monkey”.  The expression “freezing the balls off a brass monkey” might have originated as a vulgarity – rather than from the popularized notion that in cold weather the brass tray contracted and released the cannonballs.  There seems to be little or no reference to “brass monkeys” in the historical record.

Galvanization, Electroplating & Anodizing

   Electrolysis is the act of driving a chemical reaction with DC current.  Inside a simple electrolytic cell, current passes through a conductive liquid to cause a chemical reaction to take place.  Electrolysis of fresh water is used to produce the pure gases hydrogen and oxygen.  The important ‘chlor-alkali process’, or electrolysis of brine, can be performed to produce chlorine and sodium hydroxide, or even sodium hypochlorite or sodium chlorate.  If a solution contains metals, then electrolysis can be used to isolate and purify those metals or to coat an object with a metal.  This process is called electroplating or electrodeposition.  A chrome automobile bumper and a galvanized item that was neither “hot dipped” nor “peen plated” are classic examples of commercial electroplating.  In both cases the coating metals (chrome or zinc) act to protect the underlying ferrous metal from the result of nature’s own electrochemical reaction: rust.  Anodizing is also a process that uses electrolysis to increase corrosion resistance.  The body of an anodized aluminum flashlight, for example, was dipped in an electrolytic cell in which the polarity is reversed from normal electroplating.  While normal electrolysis is often used to remove rust from antique artifacts, rust patinas can actually be deposited upon items like roof sheathing or automotive parts using the same reverse-polarity process.  Galvanization, electroplating and anodizing are performed daily by industry but can also be accomplished at home on a smaller scale.


   Shortly before the American and French revolutions, two contemporary Italian professors were having a disagreement about what electricity was and where it came from.  In about 1771 Luigi Galvani was dissecting a frog in search of its testicles.  Luigi picked up a scalpel and probed about, only to be startled when the dead frog’s leg twitched or convulsed.  He would later attribute this reaction to some kind of “animal electrical fluid”.  Alessandro Volta later argued that this physical phenomenon was caused by an electrochemical reaction from dissimilar metals.  Volta actually invented the first battery – the “voltaic pile” – to demonstrate his idea and to disprove Galvani’s theory.  Volta later, and perhaps sarcastically, coined the term “galvanism” for direct current produced by chemical reaction.


The schematic symbol for a battery is a stylized representation of a voltaic pile.  Volta’s battery was a stack of alternating copper and zinc disks separated by paper soaked in saltwater.  One of the most popular introductory scientific experiments for school children is to construct a voltaic pile using a stack of copper coins and aluminum foil, separated by paper and with an electrolyte of lemon juice or brine.

* An electrolyte is any substance that contains free ions, which make the substance electrically conductive.  Normally electrolytes are liquid, but occasionally they can be gaseous or solid too.

* Another contemporary of Galvani and Volta was the well-traveled Benjamin Franklin, who spent several years in Europe as an American diplomat.  Franklin coined the term “battery” in 1748, before such a working device had been invented.  Franklin somehow associated a line of connected Leyden jars arrayed in a tray with a line, or battery, of cannon protruding from the side of a warship.  In his famous kite and lightning experiment Franklin actually used ‘Leyden jars’, which were essentially capacitors.  Leyden jars had only been invented, independently, a couple of years beforehand (1745-1746) by a German and a Dutchman.  Leiden is a city in the Dutch province of South Holland.  The term “battery” properly applies not to one, but to a group of connected galvanic cells.


* In a simple galvanic cell chemical energy from the ionization of different metals is converted to electrical energy.  Conversely, in an electrolytic cell, voltage is applied to the electrodes and electrical energy is converted to chemical energy.  In a single cell of a rechargeable lead-acid (automotive) battery or a rechargeable NiMH / NiCD flashlight battery, the cell acts as a galvanic cell when discharging but as an electrolytic cell when being charged.  The difference between a “primary” battery and a “rechargeable” battery is that the chemical reaction is easily reversible in the latter. 

* Confusion exists between the terms anode and cathode.  In an electrolytic cell the anode is positive (+) and the cathode is negative (-).  Negative ions (anions) are attracted to the anode; positive ions (cations) are attracted to the cathode.  In a voltaic cell, however, the polarities of cathode and anode are reversed.  This discrepancy is attributable to the convention that the anode is where electrons are released and oxidation occurs, whereas the cathode is where reduction occurs.

* Oxidation and reduction are opposite reactions that occur at the anode and cathode during electrolysis.  Rust is caused by iron giving and oxygen taking electrons – the iron oxidizes (loses electrons) and the oxygen is reduced (accepts electrons).  In the rust-removing electrolysis process, water is oxidized at the anode, oxygen is produced, and electrons flow from the anode up to the power supply.  The negative terminal of the DC power source supplies electrons to the cathode, where reduction occurs (the water and the rusty item accept electrons, and lots of little hydrogen bubbles are produced).

Acids like vinegar (acetic acid), hydrochloric acid or phosphoric acid can be used to remove iron oxide from rusty items.  Acids in this application, however, can be destructive to rusted antiques and artifacts, and this is where simple electrolysis might be preferable.  Many carbonated soft drinks will remove rust to a degree also.  Some of the CO2 in solution turns to H2CO3 (carbonic acid) which reverses oxidation (reduction).  Some colas also contain phosphoric acid (H3PO4), which is an active ingredient used in the steel industry to ‘pickle’ steel and in the rust-removing gel called ‘naval jelly’.

There are, in essence, two kinds of rust.  Red rust exfoliates or expands.  Red rust (ferric oxide / Fe2O3) is lost or unsalvageable, but at least it can be loosened and freed up by the action of hydrogen bubbles in the electrolytic process.  Underneath the red rust is “black rust” (magnetite / Fe3O4), which is still strongly bonded to the underlying metal.  Black rust can be reclaimed and reduced back toward metallic iron.  Any deep pits beneath a badly rusted surface will not be filled up, but the surface magnetite will be stabilized.  After drying off, objects cleaned this way will quickly rust again and should therefore be protected immediately with oil, paint, varnish or something similar.

* Using a sacrificial anode that contains stainless steel, copper, zinc or nickel might cause some plating of those metals onto the cathode.  Stainless steel contains chromium and should be avoided at the anode because poisonous gasses and dangerous chromates could be produced in the electrolyte.  This electrolysis process also produces small amounts of pure oxygen and hydrogen, so the project should be performed in a well-ventilated area and any sparks or flame should be avoided.

   Since fresh water is such a poor conductor of electricity it needs to be amended with other substances before it will conduct, and therefore participate in electrolytic ion exchange.  Acid solutions or salt water would work, but these are corrosive and less desirable than using alkaline solutions for a rust removing electrolyte.  Any base like sodium bicarbonate (baking soda), sodium carbonate (soda ash), potassium carbonate (potash), sodium hypochlorite (laundry bleach), sodium hydroxide (lye or caustic soda) or perhaps even ammonia can be added to water to make a good electrolyte.  The concentration of the base in the liquid is apparently not critical, anywhere between 2% and 10% should do.

The DC power can come from almost any source, depending upon the size of the project.  Flashlight batteries, car batteries, power supplies salvaged from old PCs, automotive battery chargers and even DC arc welders have been used.  Where delicacy is called for when restoring a historical artifact about the size of a horseshoe, a 12 V or 6 V battery charger even on its lowest setting might produce too much amperage.  To place denser, less porous deposits on small cathodes (items), a slower-working current of perhaps 200 mA @ 12 V is more appropriate.  For larger items more amperage can be applied; more amperage means faster electrolysis.  DC arc welders work at about 40 volts and put out somewhere between 75 and 185 amps to make a 1/8-inch rod melt.  Although they produce high amperage, arc welders don’t put out enough voltage to pose a life-threatening risk of electrical shock.
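How much faster does more amperage work?  Faraday’s law of electrolysis gives a rough feel for it: the mass of material converted is proportional to the charge passed.  The sketch below assumes 100% current efficiency and a simple two-electron reduction of iron, neither of which holds in a messy rust-removal bucket, so treat the numbers as order-of-magnitude only:

```python
# Rough Faraday's-law estimate of electrolysis speed vs. current.
# Idealized: 100% current efficiency, simple 2-electron reduction.
F = 96485.0  # Faraday constant, coulombs per mole of electrons

def grams_converted(amps, hours, molar_mass=55.845, electrons=2):
    """Idealized mass (g) of metal reduced: m = I*t*M / (z*F)."""
    coulombs = amps * hours * 3600
    return coulombs * molar_mass / (electrons * F)

print(round(grams_converted(0.2, 1), 2))  # gentle 200 mA: ~0.21 g per hour
print(round(grams_converted(10, 1), 1))   # 10 A charger:  ~10.4 g per hour
```

The linear scaling is the takeaway: fifty times the current means roughly fifty times the chemistry per hour, which is exactly why the gentle setting is kinder to a fragile artifact.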

There is no point in placing a clean, unblemished piece of iron or steel into such an electrolytic cell, because nothing will happen when using this simplistic type of electrolyte.  Unlike electroplating gold onto a coin or jewelry base metal, this example of electrolysis using a mild alkaline electrolyte will not properly deposit new iron onto the cathode.

* As a footnote, electrolysis is important in industry because valuable chemicals and metals can be produced, isolated or concentrated, including: aluminum, copper, lithium, sodium, potassium, magnesium, chlorine, sodium & potassium chlorate and sodium hydroxide.  Fuel cell cars of the future would depend upon hydrogen, largely produced by electrolysis.  Modern nuclear submarines are capable of remaining submerged for extended periods because electrolysis of water can be used to extract pure oxygen, which is then mixed with “scrubbed” and recycled air.

* There is another definition of “electrolysis” which is concerned with hair removal.  This association might have been made because doctors, beauticians and cosmetologists originally tried to kill hair follicles with electrified tweezers.  Today it is more common to see tiny laser beams being used to permanently remove hair.

Anodized

Reverse electrolysis can be used to put rust on ferrous metal.  As with anodizing, the very same electrolytic principle is at work but the polarity of the contacts is reversed.  Rusty patinas can be deposited upon shiny new automotive or mechanical parts to make them match the look of old engines or original machinery.  Rusty patinas can be put on new corrugated galvanized roofing metal to make the roof of a house or barn look more aesthetic.  Anodizing also employs reverse electrolysis, and gets its name because the object to be modified works as the anode in the electrochemical reaction.  Aluminum especially, but also titanium, zinc, magnesium, niobium and tantalum, are examples of metals that can be anodized and benefit from the oxidation process.  Aluminum normally “self-passivates”, quickly building up its own microscopic oxide surface layer which protects it from further corrosion.  Chrome, magnesium, titanium and zinc do the same thing.  By deliberately oxidizing these metals rather than trying to plate them, a corrosion-resistant protective layer is created on the surface that prevents further oxidation.  The anodizing process actually makes the surface tougher, less conductive and more porous than normal, and therefore capable of absorbing dyes or holding paints and lubricants longer.

Magnesium is often anodized in dichromate solutions.  Titanium can be anodized in phosphoric acid (remember cola soft drinks) or an alkali solution like trisodium phosphate (TSP / dishwasher detergent).  Zinc is rarely anodized, but here again a solution of phosphoric acid might produce good results.  Aluminum, which is frequently anodized, is perhaps best treated using a mild sulfuric acid electrolyte.  Colored dyes like those from the ink in a permanent marking pen can be used to tint a freshly anodized object.  A protective coating of lacquer or plastic polymer will further protect the dye. <Long winded but thorough video on simple anodizing.>
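For the sulfuric acid anodizing of aluminum mentioned above, hobbyists often cite the “720 rule” of thumb to estimate how long to run the bath for a given coating thickness and current density.  A minimal sketch — the 720 constant and the example numbers are the commonly quoted rule-of-thumb values, not figures from this article:

```python
# "720 rule" of thumb for sulfuric-acid anodizing of aluminum:
# minutes = (target thickness in mils * 720) / current density (amps per sq ft)

def anodize_minutes(thickness_mils: float, amps_per_sqft: float) -> float:
    """Estimated bath time in minutes, per the 720 rule of thumb."""
    return thickness_mils * 720.0 / amps_per_sqft

# Example: a 0.5 mil coating at 6 A/sq ft works out to about an hour.
print(anodize_minutes(0.5, 6.0))  # -> 60.0
```

Real results vary with bath temperature and acid concentration, so the rule is a starting point rather than a guarantee.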

Galvanization spangle.  Google image.


Galvanization is the process of putting a protective coat of zinc on steel (normally).  When steel is galvanized, oxygen reacts with the zinc to form zinc oxide, which then reacts with carbon dioxide to form a very corrosion-protective zinc carbonate surface.  There are three common ways to apply a galvanized coating: electroplating, hot dip and mechanical plating.  A piece of steel that has been electroplated sometimes shines like stainless steel, though unlike many stainless alloys it remains magnetic.  A piece of steel that has been hot dipped usually has a dull matte-grey surface, exhibiting a crystalline spangle.  “Galvanization” should refer to an electrochemical process (considering Luigi Galvani’s name), but “hot-dip galvanization” is not an example of electrodeposition.  Instead, steel is dipped into a molten bath of zinc (approximately 860 °F).  This creates a metallurgical bond with the underlying metal and microscopic layers where the zinc and iron are alloyed.  A third type of galvanization first appeared in the 1950s.  Mechanical plating (also called peen plating, mechanical deposition or impact plating) essentially puts small items into a large tumbler and beats them around in the presence of glass or ceramic beads, water and zinc powder.  Many nuts, bolts, washers, springs, clips, nails and screws are treated in this fashion.  Each of the galvanization processes has advantages and disadvantages.  Electro-galvanization might be applied to automobile bodies and pneumatic nails and brads, but the coating is neither as robust nor as protective as a hot dip treatment.  The hot dip process might weaken the strength of the base metal or fill up the screw threads of a nut and bolt.  Impact plating only works well for smallish, non-flat items.

Zinc.  Courtesy Heinrich Pniok via Wikipedia.  © Heinrich Pniok.

Small scale electro-galvanization is easily achievable at home.  Parts to be plated should be washed with soap and water and then perhaps etched in a dilute muriatic (hydrochloric / swimming pool) acid bath.  A useful zinc electroplating solution can be made from 2 parts zinc oxide powder and 10 parts lye (sodium hydroxide) for every 128 parts of water.  A splash of glycol (as in automotive antifreeze) supposedly makes a brighter finish.  Another zinc electroplating solution consists of about 3 parts zinc chloride and 12 parts ammonium chloride to 100 parts water (by weight).
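To turn the parts-by-weight recipes above into kitchen-scale quantities, each ingredient can be scaled against a chosen amount of water.  A short sketch — the 1,000 gram batch size is just an illustrative choice:

```python
# Scale a parts-by-weight electrolyte recipe to gram quantities
# for a chosen mass of water.

def scale_recipe(parts: dict, water_parts: float, water_grams: float) -> dict:
    """Convert a parts-by-weight recipe into grams per `water_grams` of water."""
    factor = water_grams / water_parts
    return {name: round(p * factor, 1) for name, p in parts.items()}

# Recipe 1: 2 parts zinc oxide, 10 parts lye per 128 parts water
print(scale_recipe({"zinc oxide": 2, "lye (NaOH)": 10}, 128, 1000))

# Recipe 2: 3 parts zinc chloride, 12 parts ammonium chloride per 100 parts water
print(scale_recipe({"zinc chloride": 3, "ammonium chloride": 12}, 100, 1000))
```

For a liter of water the first recipe works out to roughly 16 g of zinc oxide and 78 g of lye; the second to 30 g of zinc chloride and 120 g of ammonium chloride.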

A pure zinc metal anode might be a little hard to come by.  Usually alloyed with other metals, zinc is not rare, but neither is it often used in a pure state.  Scraps of hot-dip galvanized metal could serve temporarily as anodes.  Again, when electroplating, the positive (+) wire goes to the zinc anode and the negative (-) wire to the item to be zinc plated.

*  Since 1982 American pennies have contained 97.5% zinc, but the slugs are electroplated with copper (2.5%) before they are stamped.  Incidentally, by virtue of its intrinsic metal, the most valuable U.S. coin in common circulation is the nickel.

Plating applies a deeper, more durable layer than simple deposition does.  Electroplating a thick layer of a particular metal onto the cathode generally requires a sacrificial anode of the desired metal and an optimized electrolyte containing salts of that metal suspended in solution.  Oftentimes these optimized electrolytes are very caustic or poisonous.  Quality, industrial grade electroplating is outside the realm of home experimentation, not only because of the cost but because of the toxicity of the chemicals involved.  There are small, commercially purchasable “kits” available for electroplating copper, gold, silver, nickel, brass, chrome, black chrome and zinc-cadmium.  It must be presumed, however, that these consumer grade kits are less effective for electroplating than the cyanide processes that industry is licensed and regulated to use.
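How much metal a given current can actually deposit is governed by Faraday’s law of electrolysis: mass = (current × time × molar mass) / (valence × Faraday constant).  A quick sketch, assuming 100% current efficiency (real baths deposit somewhat less):

```python
# Faraday's law of electrolysis: m = (I * t * M) / (n * F)

F = 96485  # Faraday constant, coulombs per mole of electrons

def plated_mass_grams(current_a: float, seconds: float,
                      molar_mass: float, valence: int) -> float:
    """Theoretical mass (g) deposited at the cathode, assuming 100% efficiency."""
    return current_a * seconds * molar_mass / (valence * F)

# Example: copper (M = 63.55 g/mol, n = 2) at 1 A for one hour
print(round(plated_mass_grams(1.0, 3600, 63.55, 2), 2))  # -> 1.19
```

About 1.2 grams of copper per amp-hour is a useful ceiling when estimating how long a home plating run needs to be.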

Copper plating can be achieved in an acid sulfate, alkaline or cyanide bath.  Cupric sulfate (copper(II) sulfate or CuSO4) is easily made, or is purchasable from an arts & crafts store or in the form of a solution poured into septic lines to kill tree roots.  Using cupric sulfate as a copper plating electrolyte works to a degree, but the end result will be a thinner, duller and less bright finish than when using an industrial cyanide solution.  Adding soap or glycol to the safer cupric sulfate electrolyte solution might help improve the luster of the plated object.  A simple bath in copper sulfate (no current) will cause some copper to deposit upon somewhat reactive metals like aluminum and zinc.

* Cupric sulfate is also used to kill and control algae in swimming pools or aquariums, to dye organic fibers and to etch zinc plates in one form of printmaking.  Relatively benign as plating chemicals go, it is still somewhat toxic: it can irritate the eyes and skin and should not be swallowed.  Copper(II) sulfate can be made industrially by soaking copper in hot concentrated sulfuric acid or, as the image below shows, by a hobbyist using a battery charger, two copper electrodes and a sulfuric acid electrolyte.

attribution: Dmwdev


* Noble metals are less reactive than most other elements and are resistant to corrosion and oxidation.  Noble metals include: gold, iridium, osmium, palladium, platinum, ruthenium, rhodium and silver (by some accounts mercury, rhenium or copper might be included within this list).  The main eight noble metals are also “precious,” but precious metals by definition are rare, valuable and occur naturally.

Cyanide is a compound built upon a carbon and a nitrogen atom triple-bonded to each other.  Extremely important to industry on several levels, not all cyanides are toxic.  Algae, fungi and bacteria produce some types of cyanide, but it is the cyanide compounds that release negative cyanide ions (CN-) that are dangerous.  Both sodium and potassium cyanide act like solvents upon the noble metals, and they have been used historically to dissolve, leach and extract gold, silver and copper from ore.  It is this high affinity for metal that makes sodium and potassium cyanide so valuable to the mining process and for electrolyte solutions in the electroplating process.  Both are created by treating their hydroxides with hydrogen cyanide (ex: hydrogen cyanide + lye (sodium hydroxide) = sodium cyanide; hydrogen cyanide + caustic potash = potassium cyanide).

Over 2.02 billion pounds of hydrogen cyanide were produced and used in the U.S. alone in 2003.  The chemical is important for pharmaceuticals, plastics, acrylics, leather tanning and other things.  Hydrogen cyanide (HCN or prussic acid) is primarily produced by the Andrussow process, in which methane and ammonia are reacted in air over a platinum catalyst at high (1,100 °C) heat, but there are several other ways to produce it.  Potentially lethal, HCN and some other cyanides are among the fastest acting poisons known to man and can kill by blocking electron transport, thereby stopping cellular respiration.  HCN was the most lethal chemical gas used on the battlefields of WWI and has been used to exterminate rodents, and execute people, for well over a century now.  Although very much ‘on the radar’ of organizations like the EPA, cyanides used in mining operations do not have high persistence, and those exposed to sunlight degrade fairly rapidly.


Chromium is a metallic element (Cr, atomic # 24) that is very hard and corrosion resistant.  Chromium has a very high melting point (3465 °F / 1907 °C), higher than that of both kaolin and platinum.  Stainless steel contains some chromium.  Although the element was not officially identified until 1798, it appears that about 2,000 years ago Chinese metallurgists were dipping sword points and arrowheads into chromium compounds.  Prior to the 1920s chromium was pretty much only used for paint pigments or for leather tanning solutions.  Clock parts and shiny car parts were primarily electroplated with nickel before chromed parts arrived on Ford and Chevrolet automobiles around 1926.

Aside from regular steel, chrome plating can be applied to stainless steel, brass, copper, aluminum and even conductive plastic.  Scratch removal and thorough cleaning are critical in chrome plating preparation.  For steel, a layer of copper and then nickel are usually deposited before the chrome layer even goes on.  The quality of these two layers is important because chrome is brittle and its layer is usually quite thin.  The most lustrous, most reflective and attractive chrome plating also involves the most toxic process, called hexavalent chromium plating.  In a hexavalent chromium bath, carcinogenic sodium chromate (Na2CrO4) or sodium dichromate (Na2Cr2O7) is dissolved in hot sulfuric acid to make the dangerous chromium trioxide (or chromic anhydride) electrolyte.  Listed as a “priority pollutant” by 1977 in the U.S., every drop of this chromic acid is accounted for and tracked.  Automobile manufacturers in the EU stopped using “hex-chrome” in 2006.

The name “hexavalent” refers to the chromium being in its +6 oxidation state; likewise the name “trivalent” reflects a +3 valence of chromium.  “Tri-chrome,” or trivalent chromium plating, is a newer, alternative chromium plating technique.  Tri-chrome uses a far less toxic electrolytic bath and sophisticated anodes, but the process is even more complicated and the finish sometimes inferior to hex-chrome.

Although used as a precursor to chrome plating, nickel plating may not be as widespread in industry as it once was.  Simple nickel plating can be accomplished by an amateur using a salt, acid or alkaline electrolyte bath.  Of the various combinations published for nickel electrolytes, the mixture of nickel sulfate, nickel chloride and boric acid (together in one bath) seems to be the most mentioned. <Video of simple nickel plating.>

Electrolysis, galvanization, electroplating and anodizing all fall within the realm of arts and crafts or home fabrication.  The quality of the resulting work, however, will sometimes be determined by the cost and hazards of the materials applied.  Copper, gold, platinum and silver plating, for example, might require costly ingredients and potentially hazardous hydrogen cyanide derived electrolytes.  The complications of quality chrome plating remove this process from the general capability of the private individual.


Added: Oct/12/2013

PSU / copper sulfate

Industry standard colors for wires coming out of the back of an ATX type PSU (Power Supply Unit) like this one are: red for +5 V DC, yellow for +12 V DC and black for ground.  A few other colored wires (orange, blue, grey or green) were unimportant here and therefore isolated and tucked away, back inside the box.  Computer PSUs like this are “switchmode” (or switching mode) power supplies, which means they might need a “dummy load” to work correctly.  Rather than use the 10 ohm / 10 watt wire-wound load resistor that some sources suggest, this jury-rigged example uses a little automotive light bulb.  Housed in a red, trailer running-light type fixture, the bulb provides both a power-on indication and a dummy load for the switching mode power supply.  One red (+5 V) and one black (GND) wire are connected to the bulb.  The black circular receptacles were originally speaker wire terminals scavenged from the back of some old speakers.  On the left, some extra yellow and black wires await attachment to an automotive type cigarette lighter receptacle.  Any accessory that normally plugs into a car’s cigarette lighter could easily be powered by 12 volts from this PSU.
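The 10 ohm / 10 watt dummy-load suggestion can be sanity-checked with Ohm’s law: the resistor on the 5 V rail must dissipate P = V²/R, which is why a generously rated wire-wound part is specified.  A minimal check:

```python
# Ohm's law power check for a PSU dummy-load resistor: P = V^2 / R

def load_power_watts(volts: float, ohms: float) -> float:
    """Power (watts) the dummy-load resistor must continuously dissipate."""
    return volts ** 2 / ohms

# A 10 ohm resistor across the 5 V rail dissipates 2.5 W,
# comfortably inside a 10 W rating.
print(load_power_watts(5.0, 10.0))  # -> 2.5
```

The same resistor moved to the 12 V rail would have to shed over 14 W, which is why the rail chosen for the load matters as much as the resistor value.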

In this picture a jar of blue, recently homemade copper sulfate solution sits in the background.  Two copper electrodes dipped in a solution of about 15% sulfuric acid / 85% water turned the clear solution that blue color in a couple of hours at 5 V DC.  The solution, which contains copper ions, will be used to test some simple copper electrodeposition on different metals.

Currently working in the coffee cup is a steel nail and a rusty razor blade that has been temporarily pulled up out of the solution.  The electrolyte in the cup consists of ½ cup water and 1 tsp laundry bleach.  White bubbles from the razor cathode and brown rusty flotsam from the sacrificial nail anode obscured the originally clear electrolyte rather quickly.