Antennas (simple radio #2)

* Note to self:  The time for a new post is long overdue but it is not as though I haven’t had other distractions to keep me occupied.  Last week for example I had to chase the same bear out of camp three separate times during the night.  The next morning it was determined that the bear had confiscated a roll of sausage, a stick of butter, a box of cookies and a bag of marshmallows.


Generally, any antenna that is used to receive RF (radio frequency) energy is capable of adequately transmitting at that same frequency.   Sprouting from the Italian word for the long central pole supporting a tent, “antenna” entered radio vernacular sometime after 1895 when Marconi (camping in the Alps) supported his radio’s aerial from the tent pole.   Aerial and antenna are usually synonymous and both are simply transducers, implements which convert one type of energy into another.   The word “aerial” however is sometimes used to refer only to a rigid vertical transducer.

* Antennae is a seldom used plural form of the noun – antenna, and might most frequently be encountered when discussing bugs.  Depending upon the type of insect, antennae might be used to feel, hear, smell, or even to detect light.  Apparently male mosquitoes employ their antennae to hear female mosquitoes from as far as ¼ mile (400m) away.

Radio antennas are thought of as being directional or omni-directional.   A directional antenna prefers to radiate in, or receive from, one direction more than any other.   A hypothetical isotropic antenna would radiate in all directions equally; a vertical rod or radio tower comes close, radiating equally in all compass directions.  No real aerial is perfectly isotropic (omni-directional) however.   In the case of a vertical tower there is a blind cone or null lobe straight up and another straight down where radiation is not sent or where reception is absent.   In the same fashion, there is no antenna that is perfectly directional.  A pictorial depiction of a directional antenna’s radiation pattern usually shows particular zones as elongated lobes: main lobes, back lobes, side lobes and null lobes.

  Gain is a measure of how well an antenna concentrates power in a preferred direction.   Gain is the ratio of an antenna’s radiation intensity in its strongest direction relative to that of a hypothetically ideal isotropic antenna.  A low-gain antenna sends or receives signals partially from several directions while a high-gain antenna is much more focused.   Both types have their advantages.   A high-gain antenna may need to be carefully aimed or pointed towards its target to work.  That achieved, a high-gain antenna has a longer range than a low-gain type.   It’s a matter of conservation of energy; less energy is wasted by radiating in useless directions.   Modern household satellite dishes for TV reception are examples of high-gain antennas.   Antennas on cell phones and Wi-Fi equipped computers however are low-gain types, which enables them to receive signals from many directions.
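For readers who like numbers, antenna gain is usually quoted in dBi, decibels relative to that ideal isotropic radiator.  A minimal sketch of the conversion in Python (the 1,000x dish below is a made-up example, not a real product):

```python
import math

def gain_dbi(power_ratio: float) -> float:
    """Convert a power ratio (relative to an ideal isotropic
    radiator) into decibels-isotropic (dBi)."""
    return 10 * math.log10(power_ratio)

# A hypothetical dish that concentrates 1,000x the power of an
# isotropic antenna in its main lobe:
print(round(gain_dbi(1000), 1))   # 30.0 dBi
print(round(gain_dbi(2), 1))      # 3.0 dBi -- doubling power adds ~3 dB
```

The logarithmic scale is why small-sounding dB differences matter: every 3 dB of gain doubles the effective radiated power in the main lobe.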


The parabolic shaped antennas used for satellite TV and radars are usually associated with microwave frequencies.   The first parabolic antennas were constructed however over 120 years ago, when Heinrich Hertz used them to prove the existence of electromagnetic waves.   The dish or parabolic shaped element can be made of mesh, wire screen, sheet metal or mirror.   The dish is only a passive device; a reflector that collects signals and bounces them towards the active (cable connected) feed.   Monstrously huge parabolic antennas are used for radio telescopes.   Radio telescopes can be used to determine the composition of molecular clouds in space because when excited, individual molecules rotate at discrete speeds and emit radio energy as they do so.   Carbon monoxide likes to emit at 230 GHz for example.   These telescopes can be used to study all sorts of things:  black holes, radio-emitting stars, radio galaxies, quasars, pulsars, gamma-ray bursts, supernovas and so on.   They can be used to track satellites, do atmospheric studies or to receive radio communications from distant traveling spacecraft like Voyager 2.

*  The VLA (Very Large Array) radio astronomy observatory is located in a remote area of New Mexico, just east of Pie Town.  The array is made of 27 independent parabolic dishes that stand about 10 stories high (82 ft or 25 m) and are visible from space as little white dots.   Each independent dish weighs 209 metric tons (about 460,000 lbs) and is mounted on a robust doubled rail system (two parallel sets of standard gauge track) so that it can be moved.  The rails are configured in a “Y” shape.  To focus on an object or area in space the 27 dishes expand from a minimum spread of 600 m at center to a maximum baseline of 22.3 miles (36 km).  These antennas can listen to a large chunk of the radio spectrum (from 74 MHz to 50 GHz, wavelengths 400 cm to 0.7 cm).  Computers are used to correlate the data from each dish into a single map; the VLA observatory itself is called an “interferometer”.  Occasionally the VLA is brought online to link with other radio telescopes around the country to form an even larger (5,351 mile) baseline called the VLBA (Very Long Baseline Array).  These other antennas are located in Brewster, WA, Kitt Peak, AZ, Los Alamos, NM, Owens Valley, CA, Fort Davis, TX, North Liberty, IA, Hancock, NH, Mauna Kea, HI, and St. Croix, U.S. Virgin Islands.  On occasions when the radio telescopes in Arecibo, Puerto Rico, Green Bank, WV, and Effelsberg, Germany join in, the whole affair is called the High Sensitivity Array.


Phased array radar antennas like the flat panel above actually house many small, evenly spaced aerials.  The phase of the signal fed to each individual aerial is electronically controlled, so that the collective beam from all the little aerials can be steered and focused in a specific direction almost instantly.   Quicker and more versatile than mechanically rotating antennas because they require no movement, phased arrays are also more reliable and require little maintenance.   Limited phased array radars have been around for 60 years but recent improvements in, and the affordability of, electronics have made them more commonplace.   Most new military radars being built today are phased array systems.
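The steering trick is just arithmetic.  For a uniform linear array, the phase difference applied between adjacent elements to tilt the beam an angle θ off boresight works out to 2πd·sin(θ)/λ.  A sketch, where the 10 GHz frequency and half-wavelength spacing are illustrative assumptions, not figures from any particular radar:

```python
import math

C = 299_792_458  # speed of light, m/s

def element_phase_shift(freq_hz: float, spacing_m: float, steer_deg: float) -> float:
    """Phase delta (in degrees) between adjacent elements of a
    uniform linear array needed to steer the beam steer_deg
    off boresight."""
    wavelength = C / freq_hz
    return math.degrees(
        2 * math.pi * spacing_m * math.sin(math.radians(steer_deg)) / wavelength)

# Example: 10 GHz (X band) array with half-wavelength element spacing,
# steered 30 degrees off boresight:
wl = C / 10e9                                            # ~3 cm wavelength
print(round(element_phase_shift(10e9, wl / 2, 30), 1))   # 90.0 degrees
```

Because changing those phase values is purely electronic, the beam can jump to a new direction in microseconds, which is exactly the advantage over a rotating dish.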

* RADAR is an acronym coined during WWII by the U.S. Navy, from “Radio Detection And Ranging”.  Before that however, the British were calling the same thing RDF (Range and Direction Finding).  The most common bands used for radar are microwave bands (at the upper end of the radio spectrum between 1 GHz and 100 GHz – the L, S, C, X, Ku, K and Ka bands).  Radars used for very long-range surveillance however might use longer VHF frequencies starting at 50 MHz or UHF frequencies between 300 and 1,000 MHz (1 GHz).


Omitting the simple aerial, some commonly encountered antenna shapes are shown above.  The most basic antenna type perhaps is the “quarter wave vertical” (where the length of the aerial is ¼ of the targeted wavelength).   The simplest and most commonly encountered antenna however is probably the “dipole” antenna.   A dipole antenna is essentially just two elevated wires, pointing in opposite directions.   A dipole is fairly omni-directional unless its axis is parallel to the target emission.  A monopole antenna is formed when one side or one half of a dipole is replaced with a ground plane that is perpendicular (at a right angle) to the remaining half.   A whip antenna correctly installed on a car for example uses reflected radiation from the automobile’s body (the ground plane) to mimic a dipole.  In this instance the monopole will have a greater directive gain and a lower input resistance.
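Sizing a quarter-wave element is simple arithmetic: a quarter of the free-space wavelength (c/f), trimmed a few percent short because a wire element is electrically a bit longer than its physical length.  A hedged sketch, where the 0.95 shortening factor is a common rule of thumb and the 146 MHz example (2 m amateur band) is just an illustration:

```python
C = 299_792_458  # speed of light, m/s

def quarter_wave_m(freq_hz: float, velocity_factor: float = 0.95) -> float:
    """Physical length (m) of a quarter-wave vertical element.
    Real elements are cut a few percent short of the free-space
    quarter wavelength; 0.95 is a rule-of-thumb factor."""
    return velocity_factor * C / freq_hz / 4

# A quarter-wave whip for 146 MHz:
print(round(quarter_wave_m(146e6), 3))   # 0.488 m, about 19 inches
```

Run the same numbers at AM broadcast frequencies and the towering size of those antennas becomes obvious: even a quarter wave at 540 kHz is well over 400 feet.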

Grounding provides a reference point from which changes in waveform can be detected.  A radio tower constructed to transmit at AM frequencies for example must be grounded, or be compensated for the lack of ground, and its height or element length is determined by the wavelength.  Certain soils make a good ground connection to earth but others do not.  In the absence of a good ground an antenna can simulate one by adding drooping radials (additional elements hanging at about 45°).  A typical Marconi antenna is a perpendicular ¼ wave aerial with a proper ground (perhaps the soil is moist, marshy, full of iron ore or otherwise conductive).  In this case the ground acts to provide more signal, supplying the missing quarter to mimic a full half-wavelength antenna.   Often two or more quarter wave antenna towers will be seen in the same vicinity; usually such a group of similar towers forms a directional array that transmits greater power in a certain direction.  Since U.S. AM broadcast wavelengths range from roughly 1,820 ft down to about 580 ft it would be prohibitively expensive to erect a full-length or even half-length vertical tower to hold up the element.  For economic reasons some large transmitting antennas therefore are laid out and polarized in the horizontal plane.

The folded dipole is a variation of the simple dipole.  Folded dipoles are about the same overall length as a standard dipole but provide greater bandwidth, have higher impedance (roughly four times that of a simple dipole) and can often provide a stronger signal.

  Loop antennas are generally used to conserve space.  The old TV set-top “rabbit ears” often incorporated a loop in addition to the two telescoping, adjustable dipole elements.  Loops respond to the magnetic field of a radio wave, not the electric field.  A loop induces very small currents on each side of the loop and the difference between the two must usually be amplified before any useful signal is fed to the receiver.   Loop antennas are very inefficient.  One useful property of the loop however is that it is very directional: it picks up signals when positioned along one axis, but not another.  Most direction finding radios incorporate a loop antenna.   A loop by itself can determine the axis of a signal’s radiation but not forward from backward.   Direction finding radios were, and still are, used in aircraft and in boats or ships at sea for navigation.  Modern civilian aircraft usually have an ADF (Automatic Direction Finder) box attached to a loop and sensing antenna combination.  In earlier days the loop was manual (turned by hand) rather than automatic.  The non-directional sensing aerial on a small aircraft might be a simple wire running from the tail forward to the cabin.   The ADF’s electronics compare the two antennas (directional and omni-directional) to determine the signal’s phase (+/-) and therefore forward from backward.

Loopstick antennas (using ferrite rods) found in many small AM radios are actually examples of loop antennas.  Today “DX-ers” and radio hams might construct a shielded loop antenna, wrapping hundreds of feet of wire onto a spool.  Such an antenna has the advantage of containing a half-wave or even a full-wave element in a small space, but it is directional and introduces a new set of technical complications.

The Yagi-Uda antenna was invented by two Japanese scientists in the late 1920s.  Early airborne radar sets in WWII night fighters used Yagi antennas and were employed by almost everyone except the Japanese.  Yagi antennas have several parallel elements: one driven (connected) dipole plus unconnected parasitic elements called directors and reflectors.  The parasitic elements help to improve gain and directivity.  The illustration shows a horizontally polarized, dual band antenna, once popular for analogue TV reception.  The whole thing is a combination of three separate Yagi antennas.  The longer elements are for VHF reception.  The shorter, closely spaced elements on the left half of the antenna were for UHF reception.  The shortest elements on the straight tail are directors and reflectors that act to improve the UHF gain and directivity.  The next longest elements (mounted on the vertical “V”) are UHF half-wave dipoles.  The longest elements on the right would be half-wave dipoles, arranged in a phased array to pick up multiple channels.  Wavelengths of the FM and VHF TV bands are somewhere between 11 and 9 feet.  The longest single element in this example would be about 5.5 ft.

* Beware of salesmen selling snake oil.  There is no such thing as a digital TV antenna.  An antenna does not care how the wave is modulated; it does not distinguish between analogue and digital signals.  

* Although the 2009 digital transition cleared much of the old UHF TV band in the US, someone else will now transmit in those UHF bands (probably AT&T or Verizon).  The front half of these old antennas is still useful for FM and HDTV reception if a local broadcaster is still transmitting on his legacy bandwidth.  The FCC is eager to grab this bandwidth and sell it to cell phone companies.

Horn shaped antennas are commonly used at UHF and microwave frequencies.   Parabolic antennas (where the dish itself is just a reflector) often use a horn as the ‘feeder’.   Advantages of horn antennas include simplicity, broad bandwidth, fair directivity and low standing wave ratios.   A few large horn antennas were built in the 1960s to communicate with early satellites or for use as radio telescopes.

Big & rare

Up until 2010 when a certain skyscraper in Dubai was completed, the tallest manmade structure ever built was a half-wave radio mast.   Standing at 646.38 m (2,120.6 ft) above the ground and perched upon 2 meters of electrical insulator, this tower broadcast longwave radio (@ 227 kHz and later 225 kHz) to all of Europe, North Africa and even to parts of North America.   It was used by Warsaw Radio-Television (Centrum Radiowo-Telewizyjne) from 1974 until it collapsed in 1991.

The notorious ‘Woodpecker’ radio signal interfered with worldwide commercial and amateur communications and international broadcasting stations for about 13 years.  Transmitting with about 10 megawatts of power from an antenna roughly 50 stories high and a third of a mile long (150 m tall x 500 m wide), the original Duga-3 antenna was nicknamed “Woodpecker” for the repetitive tapping interference it produced.   It was using protected frequencies set aside for civilian use.   Operating from 1976 to 1989, the Woodpecker now resides within a 30 kilometer diameter region of exclusion surrounding the Chernobyl power plant.  The Chernobyl disaster occurred in April 1986 but apparently the Woodpecker continued to operate for another three years.

There has been varied speculation about the purpose of the Duga-3 broadcast, including intentional broadcast interference, mind control experiments and weather manipulation.   These speculations are not without precedent.   The most plausible explanation of the Woodpecker signal however is that it was simply a Soviet over-the-horizon (OTH) radar intended to detect ICBMs at long range by bouncing its signal off the ionosphere.  Apparently the Woodpecker was arrayed with other OTH systems like Duga-2 (also in Ukraine) and a second Duga-3 built in eastern Siberia which points toward the Pacific.

Here are a couple of videos filmed at this antenna which should provide an appreciation for its scope and scale.

Climbing up the Russian Woodpecker DUGA 3 Chernobyl-2 OTH radar

Base jumpers sneaking into the ‘Zone of Alienation’ to jump from the antenna.


* During the ‘Cold War’ the term “international broadcasting” described broadcasts aimed at, or intended for, foreign audiences only.   For 60 years now, RFE/RL (Radio Free Europe (RFE) and Radio Liberty (RL)) have been spreading anti-communist propaganda and psychological warfare behind the ‘iron curtain’ using shortwave, medium wave and FM frequencies.  It would stand to reason that the Soviets might have wished to retaliate against, or block, such popular broadcasts.   Although mind control by radio signal seems very far-fetched, the Soviets are accused of having focused microwave radiation toward the U.S. embassy in Moscow for many years.   If not for mind control, perhaps the Soviets were attempting to slowly cook the Americans.   Weather manipulation using radio is theoretically feasible and supporting information will be included shortly.

Extremely low frequency (ELF) is an electromagnetic radiation range with frequencies from 3 to 30 Hz and wavelengths between 100,000 and 10,000 kilometers (62,137 to 6,213 miles).   Since ELF frequencies can penetrate significant distances into the earth and seawater, they have been used by the U.S., Soviet/Russian and Indian navies to communicate with submarines at sea.   The British and French apparently also constructed and experimented with ELF antennas.   Because of the extreme wavelengths, sending antennas need to be very large and the few examples that do exist are buried in the ground.  ELF transmissions were or are limited to a very slow data rate (just a few characters per minute) and are usually one-way, because it is impractical for a submarine to trail an aerial long enough to send a reply.   The U.S. Navy transmitted ELF signals between 1985 and 2004 from one antenna located in the fields of Wisconsin and another located in Michigan.   Due to environmental impact concerns involving everything from farmers worried about their livestock’s behavior to disoriented whales beaching themselves en masse, the U.S. Navy abandoned its ELF effort.  They use something better now anyway.

* Miners and spelunkers can use technology called through-the-earth communications which utilizes the (higher than ELF) ultra-low frequency (ULF) range between 300–3,000 Hz.  

Plasma is conductive, ionized air or gas.  Using arrays of antennas attached to powerful radio transmitters, ionospheric heaters are used to study and modify plasma turbulence and to affect the ionosphere.   Several of these ionosphere research facilities already exist (in Norway, Russia, Alaska, Japan and Puerto Rico) and are operated by organizations like SPEAR (Space Plasma Exploration by Active Radar), EISCAT (European Incoherent Scatter Scientific Association) and HAARP (High-frequency Active Auroral Research Program).   By heating or exciting an area of the ionosphere, air can be made to rise or to act as a reflector from which other radio transmissions can be bounced.  Theoretically then, ionospheric research could, should or already does allow for enhanced radio communications, surveillance, long distance communications with submarines, weather modification and perhaps eventually even the transport of natural gas from the arctic without the use of pipelines.  The feasibility of altering the course of the jet stream or of steering the course of a hurricane seems very real.  Readers wishing to learn more about this subject can find some information on the Internet.   They could start by following these two links:

Ionospheric Heaters Around the Globe – HAARP isn’t Lonely 

Weather Warfare 


Nomenclature in the world of knots is inconsistent in any language.  Within English some would stipulate that the tangles of cordage we commonly call knots should actually refer to only those things that are neither bends nor hitches.   Ideally a bend should join two ropes or lines together, whereas a hitch should attach a line to a post, ring, rail or something.  In general however, the term knot is used to encompass all three.


Some fundamental knot component terms include “working or tag end”, “standing line”, bight and loop.  In a bight the end and the standing line are parallel but in a loop the working end crosses over the standing part.  Other knot terminology might include: braids, bindings, coils, dog, elbow, friction hitch, lashing, lanyard, locking tuck, messenger, nip, noose, round turn, plait, seizing, sling, splice, stopper, trick or whipping.  A knot that has a draw loop is said to be a slipped knot, which is not the same thing as a proper slip knot.  When tying shoelaces for example two draw loops or bights finish the knot and provide easy untying.


The simplest knot of all is the “Overhand knot”.  Once tied in a line of rope or cordage, every knot reduces the static tensile strength or average breaking strength of that line, when tension is applied.  The proportion of knotted cordage’s breaking strength relative to its unknotted strength describes a given knot’s “efficiency“.  Efficiency is about the only common, measurable, descriptive term shared between knots, bends and hitches.  Most knots have an efficiency between 40% and 80%.  The overhand knot (ABoK#514) has an efficiency rating of 50%, which is poor because when stressed it reduces the strength of a line by half.
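The efficiency figure makes the strength penalty easy to compute.  A quick sketch, where the 550 lbf cord rating is a hypothetical figure chosen only because it echoes common “550” parachute cord:

```python
def knotted_strength(rated_strength_lbf: float, efficiency: float) -> float:
    """Remaining breaking strength of a line once a knot of the
    given efficiency (0.0 to 1.0) has been tied in it."""
    return rated_strength_lbf * efficiency

# Hypothetical 550 lbf cord with an overhand knot (~50% efficient):
print(knotted_strength(550, 0.50))   # 275.0 lbf remaining
```

The same arithmetic explains why climbers and riggers prefer higher-efficiency knots like the figure-8 family: the line keeps more of its rated strength at the knot.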

Several knots we are familiar with are ancient.  Long ago prehistoric fishermen were using knots to make gill, casting and trawling nets. In addition to practical knots, the ancient Tibetans, Chinese and Celts contemplated some very intricate and elaborate decorative knots.

There is by no means an authoritative categorization or listing of all knots.  The closest thing to an authoritative list of working knots, and one still growing in acceptance, might be Clifford W. Ashley’s illustrated encyclopedia of knots.   First published in 1944, The Ashley Book of Knots lists and numbers more than 3,800 basic knots, but this does not even come close to enumerating all the variants and ornamentals in existence.  There is a lively online forum on almost every subject related to knots – hosted by the International Guild of Knot Tyers.  Also there is a quick and handy online knot index which features images for some of the more common working knots.


* A tangential detour: Knot Theory

Lest the reader assume that knots are an overly simplistic or entirely trivial subject they should realize that the future advancement of computing may rely upon an underlying study of knots.  The speed of the fastest computers is approaching a limit due to the finite speed of the electron itself.  Any increased computing speed in the future may depend upon quantum field theory and statistical mechanics; mathematics that sprouted from a topology known as “knot theory” or the mathematical study of knots.  Knot theory is often applied in geometry, physics and chemistry. Topology is concerned with those properties that don’t change when an object is continuously stretched, twisted or deformed.  Topology involves set theory, geometry, dimension, space and transformation.  Topology studies spatial objects (objects that occupy space), the space-time of general relativity, knots, fractals and manifolds.  A mathematical knot is one where the ends are joined together to prevent it from becoming undone.  Inspired by real world knots, the founders of knot theory were concerned with knot description and complexity.  They created tables of knots and links (knots of several components entangled together).  Over 6,000,000,000 knots have been tabulated to date and obviously concise tabulation would be a task for a machine and not a human.





A surprising number of people are unfamiliar with knots or cannot tie a decent one, even though such a skill can occasionally prove quite handy.  A repertoire of only a dozen or so well-chosen knots will stand the survivalist or Boy Scout in good stead with his contemporaries.  An effective working knot should have practical applications, it should be simple to tie and easy to remember, and in most instances it should be easy to untie.  My subjective list of the six most important and effective working knots includes the slipped slipknot, bowline, figure-8 (or Figure of Eight loop), clove hitch, prusik knot and the trucker’s hitch.   The clove hitch and prusik knots are fundamental in that several useful variations have been built upon them.


The simple slipknot tightens as the hauling end is pulled and can become very tight and difficult to untie.  By “slipping” the knot with a bight or draw loop however, even the tightened knot will fall apart after a stout yank of the tag end.  This simple knot is appropriate in many applications including tying a hammock to a tree or fastening a horse halter to a post or rail so that it can be unfastened quickly in an emergency.


Many knots including the venerable bowline can be “slipped” in such a fashion.  For those people who encounter a mental block when trying to remember how to tie a bowline, there is an easily remembered right-hand–twist method to use.


There are many instances when a loop in the middle of a line is called for.  As an example, for safety a mountain climber might tie himself to a middleman’s knot in the center of a climbing rope.  While a simple overhand loop might suffice in this application – it could become difficult to untie after being stressed.  The addition of another twist to the overhand loop results in the so-called Figure of Eight loop which is probably more efficient and much easier to untie.  Some might consider the Figure of Eight loop (or Flemish loop) preferable to comparable mountaineering knots like the Alpine Butterfly, merely because it is simpler and easier to remember.


The granddaddy of all “ascending knots” or “friction hitches” is the venerable Prusik knot, first created around WWI and named for its inventor, Karl Prusik.  The Prusik can be doubled (with 6 coils rather than 4) to produce more traction.  The younger Klemheist, also shown in the illustration below, is also popular with modern day climbers.


Few good (simple) ascending knots for mountaineering can be tied with nylon webbing.  The Heddon and double Heddon knots shown next are exceptions that seem appropriate.


The Trucker’s hitch is an important and utilitarian cinching knot that is actually a compound construction of two other knots.  The Trucker’s hitch can tightly strap down loads on trucks, trailers, boats and pack saddles because it applies a mechanical advantage (a theoretical 3:1, though friction reduces this in practice).  The standing line employs a ring, carabiner or middleman’s loop while the cinch is tightened with the tag end.  After the cinch is drawn tight the pressure is held by pinching the bight with one hand, before finishing with a simple slipped overhand knot.


The final knot (of the six most crucial selected here) is the excellent, general purpose ‘clove hitch’.  It is mentioned last because many admirable variations have been conceived from it, and illustrations of a few of those will follow.


Excellent for sacks and trash bags, the ‘constrictor knot’ differs only slightly from the clove hitch, but holds more firmly.  It can be hard to untie unless intentionally slipped with a draw loop.



When wrapped around a tent stake the “taut line hitch” below is useful for tensioning a tent guy line.  To the right of that is a useful clove hitch variant that has no recognized common name or ABoK number.  Tentatively referred to as the wireline hitch here, the grip of this variant is superior to the taut line version.



A few more knots deserving honorable mention

Strong and efficient, the ‘Palomar knot’ is useful for attaching large hooks, lures or sinkers to a fishing line.


The “Surgeon’s loop” is another simple and effective knot for attaching small lures or flies to a tiny mono-filament fishing line.  Knots like the surgeon and Palomar are cut away rather than untied after they serve their purpose.


The “Ossel hitch” is an ancient knot; no one knows how old. It is or was a simple, secure and effective knot used to suspend gill nets from a larger line.  Strangely the ossel hitch is not recognized in Ashley’s encyclopedia.  This may be because “ossel” is a Scottish word and was not that familiar when Ashley illustrated his book.  There is a similar but different knot in the encyclopedia known as the “Netline Knot” (ABoK #273) that hails from Cornwall on the southern coast of England.


This simple Anchor Bend variant below is easily remembered and is much more secure than the parent knot.


Finally, this old page construction below introduces a couple of utilitarian gripping hitches.




This is a blog post and not an encyclopedia, therefore most knots cannot be shown.  Returning to the off-topic tangent of knot mathematics, we come to a group of abstract ideas known as graph theory, which foreshadowed or laid the foundation for topology.  The father of graph theory was a Swiss mathematician and physicist named Leonhard Euler, who discussed a notable historical problem in mathematics called “The Seven Bridges of Konigsberg”.  The unsolvable problem was to walk through the city, crossing each bridge once and only once.  What is called Euler’s solution became the first theorem of graph theory.


* Back in 1735 the seven bridges of Konigsberg were real and that city was part of the Prussian Empire and bordered Poland on the Baltic. Konigsberg, Prussia became Kaliningrad, Russia (54°42’12” N, 20°30’56”E) after WWII. After the breakup of the Soviet Union, Kaliningrad and the surrounding province became physically separated from the rest of Russia. After that war and the ravages of time only two of the original bridges from Euler’s time survive. Five bridges now connect the city and islands formed by the Pregel River.

A similar conundrum that Euler might have considered had he the chance is the hypothetical house with five rooms and sixteen doors. The object is for a person to walk through each door once, but one time only.
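Euler’s theorem reduces puzzles like these to counting: a connected multigraph has a walk traversing every edge exactly once only if zero or two of its vertices touch an odd number of edges.  A small sketch applying that rule to the Konigsberg bridges (the vertex names are arbitrary labels for the four land masses):

```python
from collections import Counter

def has_euler_path(edges):
    """An Euler path (each edge walked exactly once) exists in a
    connected multigraph iff 0 or 2 vertices have odd degree."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    odd_vertices = sum(1 for d in degree.values() if d % 2)
    return odd_vertices in (0, 2)

# The seven bridges of Konigsberg: land masses A (the island), B, C, D.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
print(has_euler_path(bridges))   # False -- all four vertices have odd degree
```

The five-room puzzle fails the same test once each room (and the outside) is treated as a vertex and each door as an edge: too many odd-degree vertices, so no single walk can use every door exactly once.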


Finally we come to the perplexing Mobius strip and Trefoil knot. The naughty Mobius strip is something of a paradox. The single edge of a Mobius strip is topologically equivalent to the circle and mathematically it is non-orientable.


A physical Mobius strip can be constructed from a belt or strip of paper.  One simply grabs the two ends and gives one end a half twist before taping the two together in a loop.  The resulting surface then has only one side and one edge.  Imagine a miniature gravity defying car driving around the surface of the strip.  If the car began on the top side of the surface then its path after one revolution of the loop would place it on the bottom side of the surface.  Consider a bug dragging a paintbrush while walking along the right edge of the strip and making two revolutions of the loop.  We perceive two edges to the strip but realize there is only one.

M.C. Escher incorporated the Mobius strip in some of his graphical art.  In the real world recording tapes and typewriter ribbons have been spliced in the continuous-loop Mobius strip fashion to double playing time or ink capacity.  Large conveyor belts have also been wrapped the same way, to increase belt life by doubling the wear surface.  The Mobius strip has several curious properties.  A continuous line drawn down the middle of the strip must travel twice around the loop before rejoining its starting point.  Cutting this paper loop down the centerline will produce one long loop with two twists (not two strips) and finally two edges.  Cutting this longer strip again as before will produce two strips, each with two full twists and intertwined together.


In topology the “unknot” is a circle and the “trefoil knot” is the simplest knot. Named after the plant that produces the three-leaf clover, the trefoil knot can be tied by joining together the two loose ends of a common overhand knot, resulting in a knotted loop.  Although it doesn’t look very convincing when done with paper, a trefoil knot can also be constructed by giving a band of paper three half twists before taping the ends and then dividing it lengthwise.


Solar energy at home

Most of the energy we earthbound humans consume comes directly from the sun, the exceptions being atomic fission and some types of chemical reactions.  The fuel oil, coal and natural gas energy that civilizations use exists because of the Sun’s previous contribution to the formation of those hydrocarbons.  Wind currents are caused by the sun warming the air; as thermals rise they are displaced by denser, colder air.  Likewise the sun’s energy is ultimately responsible for distributing snowmelt and rainwater to higher elevations, creating the kinetic energy needed to power watermills and hydroelectric generators.  On a small personal scale, more individuals are learning to exploit the sun’s energy to heat their homes, generate their own power or to cook their food.  The two main methods of acquiring power from the sun are photovoltaic (PV) cells and thermal energy collectors.

Almost 53% of the energy in sunlight is absorbed or reflected by the atmosphere before it even hits the surface of the earth.  The glazing or protective substrate in a solar collector can further diminish the amount of energy obtained, and even the best solar panels can be considered inefficient.  The amount of energy collectible by a given solar panel is subject to many variables.  Whether talking about heat or electricity, we generally measure the energy in units of watt-hours (energy = power x time).  Under the best and brightest conditions sunlight delivers roughly 1,000 watts per square meter at the surface, but under realistic or averaged conditions the expectation is considerably less.  During the 8 daylight hours of a normal summer day at 40 degrees latitude, a solar collector would be doing well to see an average of 600 watts per square meter.  In wintertime at the same location the same collector might average only 300 watts per square meter over the same period.  Averaged over any random location on earth and the whole mean solar day (24 hours), collectable solar energy runs only about 164 watts per square meter.
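The watt-hour bookkeeping above is simple multiplication.  Here is a minimal sketch, using the figures from this paragraph and assuming a one square meter collector:

```python
def collected_energy_wh(avg_power_w_per_m2, hours, area_m2=1.0):
    """Energy (watt-hours) = average power (watts) x time (hours) x area."""
    return avg_power_w_per_m2 * hours * area_m2

# Summer day at 40 degrees latitude: ~600 W/m^2 averaged over 8 daylight hours
summer_wh = collected_energy_wh(600, 8)   # 4800 Wh, i.e. 4.8 kWh per square meter
# Winter day, same location and hours, averaging ~300 W/m^2
winter_wh = collected_energy_wh(300, 8)   # 2400 Wh, i.e. 2.4 kWh per square meter
```

The same arithmetic applied to the 164 W/m² whole-planet, whole-day average gives about 3.9 kWh per square meter per day.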


Overview of PV

In a photovoltaic solar cell an electrical current is generated when photons excite the electrons in a semiconductor.  There are many types of solar cells, and some new developments in the technology will hopefully lead to more affordable photovoltaic solar panels in the future.  The warmer a photovoltaic solar panel gets, the less power it can produce.  Temperature doesn’t affect the amount of solar energy a panel receives, but it does affect how much electrical power you will get out of it.
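That temperature penalty can be sketched as a simple linear derating.  The coefficient used here (-0.4% of rated power per degree C above the standard 25 °C test condition) is an assumed, typical value for crystalline silicon, not a figure from any particular datasheet:

```python
def pv_output_w(rated_w, cell_temp_c, temp_coeff_per_c=-0.004):
    """Approximate panel output after linear temperature derating.

    rated_w is the nameplate power at the standard 25 C test condition.
    temp_coeff_per_c (about -0.4%/C, an assumed typical value for
    crystalline silicon) scales the loss per degree above 25 C.
    """
    return rated_w * (1 + temp_coeff_per_c * (cell_temp_c - 25))

# A nominal "100 W" panel whose cells bake at 65 C on a summer roof:
print(round(pv_output_w(100, 65), 1))  # 84.0 -- roughly a 16% loss to heat
```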

The most common photovoltaic solar cells are made by chemically ‘doping’ a very thin wafer of otherwise pure monocrystalline (single-crystal) silicon.  In a delicate and complicated process of fabrication, wafers of silicon are generally cut or sliced as thinly as possible (before they crack) to a thickness of about 200 micrometers, or the width of a typical moustache hair.  Since each individual solar cell produces only about 0.5V, several cells must be wired together to produce a useful photovoltaic array.  Mostly produced in China, commercial photovoltaic solar panels are very expensive, averaging $2 to $3 for every watt of capacity.  An average U.S. residence consumes something like 30.6 kWh per day, 920 kWh per month or 11,040 kWh per year.  In a country like the U.S. where grid power is comparatively cheap (averaging 10 cents per kWh in 2011) it would take a very long time for photovoltaic panels producing equivalent energy to pay for themselves.  In the meantime an individual with a “do it yourself” mentality can more directly utilize solar energy by fabricating his own contraptions to collect heat.
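A back-of-the-envelope payback estimate using this paragraph’s own figures, plus one assumption (5 equivalent full-sun hours per day), and counting panel cost only – inverters, wiring and installation would stretch the result further:

```python
daily_kwh = 30.6        # average U.S. residence, per the figures above
sun_hours = 5.0         # assumed equivalent full-sun hours per day
cost_per_watt = 2.50    # mid-range of the $2-$3 per watt figure above
grid_rate = 0.10        # dollars per kWh (the 2011 U.S. average)

array_watts = daily_kwh * 1000 / sun_hours    # ~6,120 W of panels needed
panel_cost = array_watts * cost_per_watt      # ~$15,300 in panels alone
annual_savings = daily_kwh * 365 * grid_rate  # ~$1,117 of grid power per year
payback_years = panel_cost / annual_savings
print(round(payback_years, 1))                # ~13.7 years to break even
```

Even ignoring installation costs, the panels alone take well over a decade to pay for themselves at those prices.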


Solar Ovens

Although it would not be considered a quick process, it is easy to cook food with direct sunlight.  Slow cooking oftentimes creates superior dishes with the best blend of flavors.  Some heat-trap type solar ovens can easily produce temperatures over 250 deg F, sometimes up to 350 deg F.  No matter what type of oven is used (electric, gas, solar, smoke pit or Dutch), a good cook knows that slow cooking with a modest heat over a long period will make an otherwise tough piece of meat more tender.


Essentially there are only two types of solar oven: those that entrap heat and those that reflect it.  To form a simple ‘heat trap’, a cardboard or wooden box can be insulated, spray painted black inside and then lidded with glass or clear plastic.  It helps when the cooking vessel itself is dark also – to better absorb solar heat.  In addition to being dark, it helps when pots are thin and shallow and have tight fitting lids.  Even glass mason jars make useful solar cooking utensils.  These can be spray painted black and the lids can be unscrewed a bit to allow vapor pressure to escape.  It might seem that parabolic or concave reflecting cookers would be complicated to construct, but some examples have been made by simply surfacing the inside of umbrellas or parasols with aluminum foil.  Mirrored Mylar or similar BoPET films are also useful materials in this type of application.  Doubtless many examples or ‘instructables’ detailing the construction of reflective type solar ovens exist elsewhere on the Internet.  Some specially constructed reflective ovens claim to be able to reach temperatures of nearly 600 degrees F.

The main reason for cooking some foods, especially meats, is to kill bacteria.  Bacteria won’t multiply below 41 deg F or above 140 deg F.  The internal temperature of meats needs to reach the range between 140 deg F and 165 deg F to be considered safe.  Seafood needs to be cooked to 145 deg F or hotter.  To rid poultry of salmonella, poultry must reach 165 deg F on the inside; egg dishes should reach the same temperature.  Trichinosis is halted by cooking pork to about 160 deg F.  Ground beef should reach 155 deg F for safety.
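Those safe internal temperatures can be collected into a small lookup table (values straight from this paragraph), with a Fahrenheit-to-Celsius conversion thrown in for metric-minded cooks:

```python
SAFE_INTERNAL_F = {   # minimum safe internal temperatures, degrees F
    "meat (general)": 140,
    "seafood": 145,
    "ground beef": 155,
    "pork": 160,
    "poultry": 165,
    "egg dishes": 165,
}

def f_to_c(deg_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32) * 5 / 9

for food, deg_f in SAFE_INTERNAL_F.items():
    print(f"{food}: {deg_f} F ({f_to_c(deg_f):.0f} C)")
```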


Solar stills

Back in the 1960s a pair of PhDs working in the soil hydrology laboratory of the USDA invented a solar evaporation still that could suck useful drinking water out of the ground.  Even in the arid desert around Tucson, AZ where they were located, they realized that the soil entrapped useful moisture.  Such a solar still is made by digging a pit in the ground, placing a collection pot in the bottom and covering the hole with a sheet of plastic, weighted down in the center so that condensed moisture runs to the low point and drips into the pot.  Additional moisture can even be gathered by placing green vegetation under the plastic.

It seems the first evaporative solar stills were invented back in the 1870s to create clean drinking water for a mining community, as explained in an earlier post on this same blog named “The Nitrate Wars”.  This same process of distillation, where moisture is evaporated and the condensate collected, is employed in affordable, inflatable plastic-vinyl stills that equip small boats and survival craft at sea.  Where stranded fishermen and sailors once faced death by dehydration, they now have the opportunity to create the drinking water they need from seawater.  Muddy or brackish, germ-infested groundwater can be reclaimed in the same way.


There are several possible techniques to employ and efficiency factors to consider when fabricating an evaporative solar still.  Obviously good direct sunlight is essential to efficient functioning.  The ‘basin type’ solar still is the most common type encountered and somewhat resembles a heat-trap solar oven.  In a ‘tilted wick’ solar still, moisture soaks into a coarse fabric like burlap and climbs the cloth before it eventually evaporates.  In higher latitudes ‘multiple tray’ tilted stills can be used, where the feed water cascades down a stairway of trays or shelves, allowing closer proximity to the glass and enabling steeper tilt angles for the panel to capture optimum sunlight.



Other liquids besides drinking water can be refined in an evaporative solar still.  Ethanol can be, and has been, concentrated from mashes, worts, musts or washes using a solar still.  Since a distiller usually desires more direct control over temperatures however, he might consider solar stills practical only for so-called “stripping runs”.  Some of the earliest perfumes were created from fragrances collected by distillation.  Soaking the wood, bark, roots, flowers, leaves or seeds of some plants in water before distilling the mixture is a common way of obtaining aromatic compounds or essential oils.  Not all plant fragrances should be distilled, but eucalyptus, lavender, orange blossoms, peppermint and roses commonly are.  The lightest fractions or volatiles of petroleum (like gasoline) separate at temperatures available in solar stills, but the heavier ones will not.  Theoretically it should be possible to place slop or crude oil into a solar still to separate out the gasoline and other light fractions.


Solar water & air heating

Most readers will have experienced how water trapped in a garden hose will get hot on a summer day.  Portable camp showers are simple black water bags, suspended at a little elevation and in direct sunlight to warm the water.


Where climatic conditions permit, people may employ gravity-fed or pump-pressurized waterlines and tanks on rooftops, or simply along the ground, to achieve the same solar water heating effect.  Others may construct or install dedicated solar water-heating panels to heat swimming pool water or to pre-heat water before it enters their home’s gas or electric water heating tank.


The construction of a solar water heater and a solar air heater can be very similar in concept.  Basically air or water is conducted through pipes or conduits to a panel where the heat exchange takes place.  Copper pipe might be the most desirable material to use in a solar water panel because of its pressure holding ability, resistance to corrosion and longevity.  Thin walled pipes of cheaper metals can be used to adequately transfer heat to air that passes through them.  A growing fad in the construction of homemade air-heating solar panels is to build the collector with empty aluminum beer or soda cans.  The tops and bottoms of the cans are punched or drilled out and the cans are glued together to form continuous airtight pipes.  The box that holds everything is well insulated (sides and bottom), and every interior surface exposed to sunlight is spray painted a dark, sunlight absorbing color – preferably using a high quality, high temp, UV protected paint.  A transparent glazing (of glass, plastic, fiberglass, Mylar, acrylic, polycarbonate, etc.) is tightly sealed over the top of the trap.  A double or even triple layer of glazing is preferable to a single one to reduce the escape of thermal heat.  While beer and soda cans are popular because of their availability and affordability, equally efficient collectors could be made from tin cans (made of a metal called tinplate), rain gutter downspouts, old aluminum irrigation pipes, single walled stove pipes or even from bug screen like you’d find on a window.  At least one site, chosen from many that discuss solar heating with air, suggests that bug screen collectors are on par with soda can collectors and are possibly easier to construct.

In the choice of fan or blower used to push or pull air through the system, it is preferable to circulate a large volume of modestly heated air rather than a small quantity of thoroughly heated air.  Ideally a solar panel can increase the heat of the air passing through it as much as 50 or 60 degrees F.   In this type of collector an optimum airflow rate of 3 CFM per square foot of absorber has been suggested.  In general the larger the solar air panel, the better – small ones are probably not worth considering.  They should be built with quality paints, glazing and other components where possible to resist corrosion and decomposition from sunlight and other climatic elements.
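Combining the suggested 3 CFM per square foot with the common sea-level HVAC approximation Q (BTU/hr) ≈ 1.08 x airflow (CFM) x temperature rise (deg F) gives a rough output estimate.  The 4 ft x 8 ft collector below is an illustrative assumption:

```python
def collector_btu_per_hr(absorber_sqft, temp_rise_f, cfm_per_sqft=3.0):
    """Rough heat output of a solar air collector.

    Uses the common sea-level HVAC approximation
    Q (BTU/hr) = 1.08 x airflow (CFM) x temperature rise (deg F).
    """
    airflow_cfm = cfm_per_sqft * absorber_sqft
    return 1.08 * airflow_cfm * temp_rise_f

# A hypothetical 4 ft x 8 ft (32 sq ft) panel raising the air 55 deg F:
print(round(collector_btu_per_hr(32, 55)))  # 5702 BTU/hr, about 1.7 kW
```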

Pointing solar panels


For optimum efficiency any solar panel should face the sun at a perpendicular angle.  The position of the sun however changes constantly throughout the day.  Some institutions or uber-rich people might purchase solar trackers, which employ servo or stepper motors to keep photovoltaic panels aligned with the sun.  Such ‘trackers’ increase overall efficiency by increasing morning and afternoon light collection.  The rest of us have to make do with permanently fixed or periodically adjustable panel mounts.  Normally, fixed panels are aimed toward due (not magnetic) south.  Some owners of grid-tied solar photovoltaic panels however are deciding to aim their panels towards the west, to better match late-afternoon peak demand.


The effectiveness or efficiency of a given solar panel is definitely affected by its proper orientation to the sun, but as the sun moves around a lot, solar panels that do not automatically track its movement must seek a positional compromise.  The sun’s apparent altitude in the sky changes throughout the year.  Because of the tilt of the earth’s axis, the sun’s noon altitude swings 47 degrees (plus or minus 23.5 degrees) between the summer and winter solstices, six months apart.  Solar panels near the equator can be positioned parallel with the horizon and remain largely efficient by just pointing straight up.  The further a location is from the equator, the more vertical a panel’s ideal tilt becomes.  Above the 45th parallel, vertically fixed solar panels mounted to the side of a building can perform admirably in the wintertime.  There is no one perfect tilt angle that keeps a solar panel perpendicular to the sun’s rays throughout the year.  This fact motivates some people with adjustable panel mounts to periodically climb up on their rooftops, wrench in hand, to refine panel tilt.  Others might wish to install a solar panel permanently in the best year-round average position and not worry about adjustments.

Older literature on solar panel installation might quote a rule of thumb where 15 degrees are added to the latitude for wintertime panel tilt, or 15 degrees are subtracted from the latitude for summertime panel tilt.  A more modern set of calculations, mimicked or repeated often around the web, suggests wintertime tilts a bit steeper than that rule to capitalize on midday rather than whole-day solar gathering, and summertime tilts a bit flatter, favoring whole-day rather than midday collection.

-To calculate the best angle or tilt for winter:

(Lat * 0.89) + 24º = ______   (The latitude is multiplied by .89 and added to 24 degrees)

-The best angle for spring and fall:
(Lat * 0.92) – 2.3º = ______

-The best angle for summer:
(Lat * 0.92) – 24.3º = _____

-The best average tilt for year round service:
(Lat * 0.76) + 3.1º = _____

For the purpose of illustration a latitude of 35 degrees North will be chosen.  Locations somewhat close to this latitude include the Strait of Gibraltar; Tunis, Tunisia; Beirut, Lebanon; Tehran, Iran; Kabul, Afghanistan; Seoul, Korea; Tokyo, Japan – and in America, cities along Interstate 40 or the old Route 66 (Raleigh NC, Memphis TN, Fort Smith AR, Oklahoma City OK, Albuquerque NM, Flagstaff AZ and Bakersfield CA).
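Plugging that 35-degree latitude into the four formulas above gives the following tilts:

```python
def tilt_angles(lat_deg):
    """Seasonal panel tilts from the rules of thumb quoted above."""
    return {
        "winter":      lat_deg * 0.89 + 24.0,
        "spring/fall": lat_deg * 0.92 - 2.3,
        "summer":      lat_deg * 0.92 - 24.3,
        "year-round":  lat_deg * 0.76 + 3.1,
    }

for season, tilt in tilt_angles(35).items():
    print(f"{season}: {tilt:.1f} degrees")
# winter ~55.2, spring/fall ~29.9, summer ~7.9, year-round ~29.7 degrees
```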








Metrification for the masses

*  When they weren’t lopping off every other person’s head during the revolution which began in 1789, reformers in France seized the opportunity to make all kinds of other sweeping changes.  In 1791 for instance, the French Academy of Sciences was instructed to create a new system of measurements and units.  For two centuries now the rest of the world has been browbeaten and cajoled into adopting this sublime system of weights and measures, a process called metrification.  While most nations have capitulated to the apparent intellectual supremacy or empirical advantages of the metric system, there are still some holdouts in the world.  After two centuries these non-metricated miscreants still drive the more rabid reformatory zealots of metrification nuts.  Perhaps there are logical reasons in a few instances, not attached to loyalty or laziness, that compel these non-metric holdouts to hang onto some traditional weights and measures.

*  Feeling particularly erudite, the reformatory French academics chose to base this metric system on natural values that were unchanging and reproducible, and to use numerical units based on powers of ten.  Unchanging natural values were hard to corral back in 1791, so the official definitions of all the basic metric units have undergone several changes since then.  The metre is the most fundamental metric unit and from it the other units were originally derived.  American dictionaries, spell checkers and textbooks won’t even spell the word right.  Technically a “meter” is just a measuring device.  If you’re going to adopt French units you might as well swallow their spelling.  Like the non-metric nautical mile, the metre was originally conceived as being a portion of the earth’s circumference.


*  While the older nautical mile was defined as a minute (1/60th of a degree) of arc along a meridian of the Earth, the new metre was conceptualized as 1/10,000,000th of the quarter-meridian running from the pole to the equator (equivalently, 1/20,000,000th of the pole-to-pole arc).  Even before the oblateness of the earth was appreciated, French surveyors in the 1790s determined a very fair approximation of what a metre should be; later measurements showed their metre came up about 0.2 mm short of the true meridional value.  Today most air and sea navigators still prefer to use non-metric nautical miles rather than kilometers, because when using charts (nonlinear, 2-dimensional Mercator projections or maps) it makes life a lot easier.
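Both definitions can be checked against a nominal 40,000,000-metre meridian circumference:

```python
MERIDIAN_M = 40_000_000   # nominal full meridian circumference of the earth, metres

# Nautical mile: one minute of arc along a meridian (360 degrees x 60 minutes)
nautical_mile_m = MERIDIAN_M / (360 * 60)     # ~1,852 metres

# Metre: 1/10,000,000 of the pole-to-equator quarter meridian,
# equivalently 1/20,000,000 of the pole-to-pole arc
metre = (MERIDIAN_M / 4) / 10_000_000         # 1.0, by construction

print(round(nautical_mile_m, 1))  # 1851.9 -- the modern definition is exactly 1852 m
```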

*  It quickly became self-evident that the intended international reproducibility of an accurate metre using the meridional definition was so impractical that a physical artifact had to be produced.  In 1799 a platinum bar called the “mètre des Archives” was made and used as a copy reference.  In 1875 the “Convention du Mètre” or Metre Convention was instituted to oversee the development of the metric system.  Conceived at the same time, the CGPM (“Conférence générale des poids et mesures” or General Conference on Weights and Measures) was established to democratically coordinate international participation by holding meetings every 4-6 years.  Broad acceptance of metrification did not really begin to take hold until after WWII and the formation of what would become the European Union.  SI or “Système International d’Unités” is today’s official name for the metric system, as ordained by the CGPM in 1960.

Confusion and inconstancy

*  There are inconsistencies in the metric system.  Redefinitions of the base units have been frequent.  The SI crowd has begrudgingly adopted non-decimal units like the second of time because it can produce no better alternative.  The SI intellectuals have regularly discouraged the use of seemingly compatible units and nomenclature simply because they themselves did not originally create or sanction them.  These same intellectuals have also adopted redundant and unnecessary units and nomenclature when simpler alternatives already existed.  Some unpopular and clumsy sounding SI units are floating around.

*  The currently approved MKS (metre, kilogramme, second) system of units supplanted the older CGS (centimeter, gram, second) system.  It was once simple to think of a gram as the weight of one cubic centimeter of water at the melting point of ice.  Although an original metric unit, the litre (or liter) is no longer even an official SI unit!  The kilogram originally equaled the mass of a litre (1,000 cubic centimeters) of that same cold, pure water.  Obviously these definitions were not good enough, because they no longer apply.  The kilogram is the only metric base unit that hasn’t been redefined in terms of unchanging natural phenomena.  The authoritative kilogram is an object!  You can’t just produce an accurate kilogram in your laboratory located in Timbuktu.  In a dark vault somewhere in Paris sits a precious SI-manufactured artifact.  Today’s official kilogram is a cylinder of 90% platinum and 10% iridium alloy.  Where once the metre was defined as one ten-millionth of the distance between the North Pole and the Equator, it was eventually redefined as a multiple of a specific radiation wavelength.  Today’s official redefinition of the metre is as a fractional part of the distance traveled by light in a vacuum.

*  The concept of time and the replacement of legacy time units with suitable modernized counterparts have vexed zealous metric reformers for two centuries.  Years, months, weeks, days, hours, minutes and seconds are not decimally related.  We are gifted with impressive sounding terms like nanoseconds, kiloseconds or milliseconds, but the ‘second of time’ was adopted by the metric system; it was not an original metric unit.  It was belatedly defined as 1/86,400th of a mean solar day.  The metric second was later redefined in terms of astronomical observations.  Meanwhile seconds came to be kept by the oscillations of tuning forks, and then of quartz crystals.  Today the SI second is officially defined as “the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom”.  Who knows how they’ll redefine the second next year?

*  Whereas a regular non-metric U.S. ‘short ton’ weighs 2,000 lbs, the British Imperial ‘long ton’ or ‘gross ton’ typically used in shipping cargo weighs 2,240 lbs.  A “metric ton” or “tonne” weighs 1,000 kilograms or 2,204.6 lbs.  When appending the prefix “kilo” to ton, things start to get confusing.  In terms of explosive force a kiloton might mean the equivalent of 1,000 metric tons of TNT.  As a unit of weight or mass however, a kiloton might mean either 2,000,000 lbs or the same as a kilotonne (2,204,622.6 lbs).  A gigagram would equal a kilotonne, but that term is infrequently used.
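The various tons sort themselves out with a little arithmetic (2.2046226 pounds per kilogram):

```python
LB_PER_KG = 2.2046226218   # pounds per kilogram

short_ton_lb = 2_000             # U.S. short ton
long_ton_lb = 2_240              # British Imperial long or gross ton
tonne_lb = 1_000 * LB_PER_KG     # metric ton (tonne): ~2,204.6 lb

kilotonne_lb = 1_000 * tonne_lb  # ~2,204,622.6 lb, the same mass as a gigagram
print(round(tonne_lb, 1), round(kilotonne_lb, 1))
```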

* The seven current hallowed SI base units are the metre, kilogram, second, ampere, candela, mole and kelvin.  The Kelvin scale is an absolute thermometric scale, but its units are not referred to as degrees.  We normally call increments of temperature “degrees” because of a decision made way back in 1724 by a German physicist named D.G. Fahrenheit.  The Fahrenheit scale divides the range between the freezing and boiling points of water into 180 equal parts – like the degrees in geometry for half a circle.  D.G. Fahrenheit also invented the glass/mercury thermometer.  About two decades later, but still well before the French reforms, a Swedish astronomer named A. Celsius borrowed Fahrenheit’s idea but divided the same range into only 100 equal parts.  Originally Celsius’s scale ran backwards, counter-intuitive to today’s usage, but that situation was reversed after his death in 1744.  From 1744 to 1948 the units of what we now call the Celsius scale were better known as degrees “centigrade”.  Eventually an Irish/British physicist named W.T. Kelvin came along with further suggestions for improvement.  The Kelvin scale begins at absolute zero – there is nothing colder.  To make the Kelvin (K) scale fit in with the decimalized Celsius scale, the triple point of water (where the gas, liquid and solid phases of water coexist in thermodynamic equilibrium) had to be defined as exactly 273.16 K.  In other words the base-ten loving SI / metric system uses, for one of its base units, values derived from a very inconsistent fraction (1/273.16, whose ungainly decimal value is 0.003661).
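The relationships among the three scales reduce to a little arithmetic: 100 Celsius divisions span the same range as 180 Fahrenheit divisions, and kelvins are just Celsius degrees shifted by 273.15:

```python
def fahrenheit_to_celsius(deg_f):
    # 100 Celsius steps cover the same span as 180 Fahrenheit steps
    return (deg_f - 32) * 100 / 180

def celsius_to_kelvin(deg_c):
    # absolute zero sits at -273.15 C, so the triple point (0.01 C) is 273.16 K
    return deg_c + 273.15

print(fahrenheit_to_celsius(212))           # 100.0 -- water boils
print(round(celsius_to_kelvin(0.01), 2))    # 273.16 -- the triple point of water
```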

Discouraged, clumsy, slang or unneeded

*  Named for a Swedish physicist, the tiny increment of length called the angstrom is exactly equivalent to 0.1 nanometre, or 0.0000000001 metres.  Mention of the non-Imperial and non-metric but internationally recognized angstrom however is officially discouraged by the SI’s International Committee for Weights and Measures.  The small calorie itself was a pre-SI metric unit of energy, defined in the 1820s as the energy needed to raise 1 gram of water by 1º C.  The dietary calorie (kilogram, large or food calorie) is 1,000 times larger.  The small calorie is obsolete, replaced by the official SI “joule”.  Megabars, kilobars, bars, decibars, centibars and millibars of atmospheric pressure are not SI units.  It takes 100,000 SI-legitimate pascals to equal one bar.  One bar is roughly equivalent to one standard atmospheric pressure at sea level (14.69 psi or 101,325 pascals).  Meteorologists and weather reporters usually prefer to describe changes in air pressure in terms of millibars rather than the exactly equivalent hectopascals; it just sounds better.  In oceanography, during a descent from the surface, depth in metres and increased water pressure in decibars correspond nicely.

*  In a weak attempt to sound sophisticated, a scientific journalist might employ the term “kiloannum” to impress his audience rather than use the simpler terms “millennium” or “a thousand years”.  Students might use the jargon “fermi” rather than the more proper but awkward SI term femtometre to describe infinitesimal nuclear distances.  Attometres, zeptometres and yoctometres are smaller yet.  In astronomy, where great distances are expressed, one might seldom encounter the SI terms megametre, gigametre, terametre, petametre, exametre, zettametre or yottametre.  The most common vernacular one finds instead is the non-metric light year, parsec and astronomical unit.  The “astronomical unit” (which is roughly the mean distance between earth and sun) was formally defined in 1976 by the IAU (International Astronomical Union – also hosted by France), partly to patch up shortcomings in regular SI units when incorporating general relativity theory.  SI brings us gawky sounding terms like “gray” and “sievert”.  These terms were added to the dictionary not because they were necessary, but because they could be branded by SI authorities whereas “rad” and “rem” could not.  A gray is simply 100 times bigger than a rad and both units express absorbed radiation dose.  A sievert is simply 100 times bigger than a rem and both units attempt to adjust radioactive dosages by accounting for the type of tissue and type of radiation.

A Short Imperial unit background 

*  Maligned and criticized for still using old-fashioned Imperial weights and measurements when the rest of the world does not, the American public has shown resistance to metrification.  Primarily a British colony in the beginning, America inherited British imperial units, which were in turn heavily influenced by historic French and even ancient Roman measurements and weights.  The avoirdupois system of weights that Americans favor was actually developed by the French.  The Troy weight system, still used in many locations around the world for quantifying precious commodities like gold, platinum, silver, gemstones and gunpowder, is also French (believed to be named for the French market town of Troyes).  Closely related to Troy weight, the apothecaries’ system of weights favored by physicians, apothecaries and early scientists has roots reaching all over central Europe and the Mediterranean; it was still being used by American physicians and pharmacists into the 1970s.  After America separated from the British Empire, the Americans kept the legacy units pretty much intact while the British did not.  Parliament, by meddlesome act or decree and mostly for the purpose of increased taxation, continued to make small changes to certain units of mass and volume.  These changes caused much confusion between American and British (pre-metric) imperial units, confusion which still exists today.


*  Without digressing too far from the subject of metrification, it should be explained that had it not been for the discrepancy between wine and beer casks, and the British adoption (1824) and eventual retraction of the “stone” unit, the impetus behind a one-world metric system would never have been so great.  The legislated stone unit demanded a redefinition of several standard weights.  Today’s Imperial gallons, bushels and barrels are so screwed up because yesteryear’s hogsheads (large casks filled with wine, beer, liquor, whale oil, tobacco, sugar or molasses) were of different sizes.  A hogshead of wine has traditionally held more volume than a hogshead of beer.  In its defense, Parliament did try to standardize hogshead volume back in 1423, but this had little effect.  Coopers at different locations made casks as they saw fit, and eventually there arose an accepted and even official difference in hogshead volumes depending on contents.  A multiplicity of different gallon, bushel and barrel definitions followed suit.  The UK Imperial gallon springs from the ale gallon, but the U.S. liquid gallon is based upon the 1707 Queen Anne wine gallon.  Even today this curious distinction between wine and beer continues, as the American BATF and Treasury Department require different labeling on the two beverages.  Wine and stronger spirits are labeled only in liters or milliliters while beer containers are labeled only in gallons, quarts, pints or ounces.

* The bushel used to be a measure of volume for grain, agricultural produce or other dry commodities.  Bushels are now most often used as units of mass or weight rather than of volume.  It should be realized that each commodity’s bushel in the mercantile exchange market is unique.  A bushel of corn weighs 56 lbs. but a bushel of soybeans or wheat weighs 60 lbs.  A bushel of plain barley weighs 48 lbs. but a bushel of malted barley weighs only 34 lbs.  A bushel of oats in the U.S. weighs 32 lbs. but across the border in Canada it weighs 34 lbs.  Okra weighs 26 lbs. per bushel and Kentucky bluegrass seed only 14 lbs.  Many other commodities exist whose specific values fluctuate according to the jurisdiction (country to country; state to state).  Pork bellies (the valuable bacon only) are traded by weight (one unit equals 20 tons of frozen, trimmed bellies).  The rest of the hog’s carcass in a commodities market is expressed as Lean Hog futures.  Refined oil might be shipped in 55 gallon steel drums (a design dating to the early 20th century) but crude oil is measured and traded on the standard 42 U.S. gallon, historic wooden barrels of yesteryear.  Barrels of other commodities often contain a volume of 31.5 U.S. gallons.
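Because each commodity’s bushel weight is fixed by statute or exchange rule rather than by any formula, conversions need a lookup table.  The values below are the ones quoted in this paragraph:

```python
BUSHEL_LB = {   # pounds per bushel, per the figures above (U.S. unless noted)
    "corn": 56,
    "soybeans": 60,
    "wheat": 60,
    "barley (plain)": 48,
    "barley (malted)": 34,
    "oats (U.S.)": 32,
    "oats (Canada)": 34,
    "okra": 26,
    "Kentucky bluegrass seed": 14,
}

def bushels_to_lb(commodity, bushels):
    """Convert a bushel count to pounds for a listed commodity."""
    return BUSHEL_LB[commodity] * bushels

print(bushels_to_lb("corn", 10))  # 560
```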


*  Where the Imperial system does not fail and probably needed no replacement is in its units of length, distance and area.  Imperial units of length were intuitively developed over the ages.  Metric units of length might be more easily abstracted numerically in calculations for pencil pushing types, but these are not nearly so instinctive for everyday usage.  Engineers and architects seldom have to build what they design; that labor falls to builders, millwrights, manufacturers, fabricators and others who work with real materials on a daily basis.

*  Consider the Imperial ruler or tape measure and its metric counterpart.  Working with fractions, a fairly accurate Imperial ruler could be reconstructed by almost anyone given an empty room, a pencil, a pair of scissors and a strip or two of unmarked paper exactly one yard in length.  Feet, inches, half-inches, quarter-inches, eighth-inches and perhaps sixteenth-inches could be adequately marked upon a blank yard-long strip of paper by repeatedly folding and halving.  In contrast, it would quickly be realized that an adequate depiction of centimeters and millimeters could not be so intuitively laid out upon a blank, metre-long strip of paper.  There is an eloquence in fractions.  Builders and fabricators familiar with feet and inches can often perform the type of mental arithmetic that would send their decimal-loving metric counterparts scurrying for the nearest calculator or pencil.
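That fractional mental arithmetic maps neatly onto exact rational numbers.  A sketch using Python’s `fractions` module, adding two hypothetical board lengths:

```python
from fractions import Fraction

# Two board lengths in inches: 3 5/8" and 2 3/16"
a = 3 + Fraction(5, 8)
b = 2 + Fraction(3, 16)

total = a + b                    # exact: no decimal rounding anywhere
whole, frac = divmod(total, 1)   # split into whole inches and the remainder
print(f'{whole} {frac}"')        # 5 13/16"
```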


*  Americans find many customary units desirable and appropriate.  Non-SI unit terms like liquid ounces, shots, gills, noggins, fifths, teaspoons, cups, pints, quarts, gallons, barrels, board feet, pecks, bushels, BTUs, millibars, carats, cycles per second, pounds, ounces, troy ounces, drams, tons, caliber, mils, standard gauge, rods, chains, inches, feet, yards, furlongs, miles, nautical miles, fathoms, knots, picas, angstroms, light years, parsecs, acres, townships and sections remain in the American vernacular.  The sluggish progress of thorough American metrification has been excused as the result of ignorance, laziness or complacency by the public.  That may be.  Remember though that American schools have versed students in the metric system for the last 50 years or more.  We can use SI whenever we want to.  We’ve also experienced strong-arm attempts to have SI foisted upon us, as in the Metric Conversion Act and the Fair Packaging and Labeling Act.


*  Never the perpetrators of a bloody social revolution like Russia’s, or France’s where mobs decapitated anyone who thought differently or had money, Americans might simply resist metrification because they resist anything totalitarian by nature.  That’s what metrification is: a totalitarian ideal.  It requests the wanton destruction, scourging, eradication and abandonment of any competing form of weights and measurement.  So who’s the real bigot: the unassuming Japanese or American builder who finally learns how to use a conventional tape measure well and sees no reason to change it, or some frustrated high school chemistry teacher who wants a dumbed-down tape measure and for all other alternatives in the world to be immediately destroyed?

*  Although Japan is an otherwise thoroughly metricated country, its carpenters, builders and realtors still favor their shakkanho length measurements, which were acquired from ancient China.  The shaku is the base unit and was originally the length from the thumb to the extended middle finger (about 18 cm or 7 in).  That length grew to approximately 30.3 cm, or 11.93 inches (the kanejaku or “carpenter’s square” shaku).  Floor space in a Japanese house is usually described in terms of a number of single traditional straw tatami mats or a square of two tatami mats (the tsubo).  The koku, defined as 10 cubic shaku, is still used in the Japanese lumber trade.

*  In order to avoid the fines and prosecution that other non-SI-compliant merchants in Europe have been hit with, the British and Irish have seen fit to pass legislation which protects their traditional non-SI whiskey and beer measures (like gills, pints and Imperial gallons).  When it came to alcohol, it seems the rigors of metrification hit a little too close to home.   The UK decimalized its currency back in 1971 and it is the only EU member to have retained its own monetary system – which is also the oldest monetary system still in use.   Few things are as frustrating for a foreigner to comprehend as the meaning of old English tower pounds, sterling pounds, gold sovereigns, guineas, quid, fivers, coppers, crowns, shillings, sixpence, halfpennies, farthings and tuppence.  If there were space enough left in this post these could be explained.  Some of the old legacy Imperial units mentioned previously have very interesting backgrounds as well but explanations will have to wait.  The topic of this post has been the triumphant march of metrification and the liberating, joyful peace of mind and harmony it will bring to the world once its total acceptance is finally complete.

Captured from an e-mail years ago: somewhere an anonymous wit promotes these additional units – lest they become forgotten in the march of time also…

* 1 millionth of a mouthwash = 1 microscope

* Ratio of an igloo’s circumference to its diameter = Eskimo Pi

* 2,000 pounds of Chinese soup = Won ton

* Time between slipping on a peel and smacking the pavement = 1 bananosecond

* Weight an evangelist carries with God = 1 billigram

* Time it takes to sail 220 yards at 1 nautical mile per hour = Knotfurlong

* 16.5 feet in the Twilight Zone = 1 Rod Serling

* Half of a large intestine = 1 semicolon

* 1,000,000 aches = 1 megahurtz

* Basic unit of laryngitis = 1 hoarsepower

* Shortest distance between two jokes = 1 straight line

* 453.6 graham crackers = 1 pound cake

* 1 million-million microphones = 1 megaphone

* 2 million bicycles = 2 megacycles

* 365.25 days = 1 unicycle

* 2000 mockingbirds = 2 kilomockingbirds

* 52 cards = 1 decacards

* 1 kilogram of falling figs = 1 FigNewton

* 1,000 milliliters of wet socks = 1 literhosen

* 1 millionth of a fish = 1 microfiche

* 1 trillion pins = 1 terrapin

* 10 rations = 1 decoration

* 100 rations = 1 C-ration

* 2 monograms = 1 diagram

* 4 nickels = 1 paradigm

* 2.4 statute miles of intravenous surgical tubing at Yale University Hospital = 1 IV League and…

* 100 Senators = Not 1 good decision

Yeast & Fermentation

This post endeavors to briefly illuminate a particularly minuscule organism that since the dawn of mankind has exerted considerable influence over the human condition.  Found in dirt, air and water, some yeasts also reside naturally inside vegetation, animals and humans.  All fungi are parasitic or saprophytic and cannot manufacture their own food.  Since yeasts are fungi, and all fungi are heterotrophs that live on preformed organic matter, some yeasts have been using mankind for far longer than he has been using them.   To state that mankind has domesticated yeast for thousands of years is probably erroneous.  Whether he knew it or not, however, mankind has been exploiting these individually invisible microorganisms for his own benefit for perhaps ten millennia or more.  The historic relationship between brewing and baking is more intertwined than most readers may appreciate.  Today yeasts are also used to produce food additives, vitamins, pharmaceuticals, biofuels, lubricants and detergents.  The more one learns, the more his appreciation grows for these seemingly simple little life forms.  It doesn’t take a degree in organic chemistry or molecular biology to put these little critters to productive work.

Yeasts are more evolutionarily advanced than prokaryotic organisms like bacteria.  Prokaryotes don’t have a nucleus (viruses are simpler still, and are not truly cellular organisms at all).   Higher life forms like onions, grasshoppers, humans and yeasts are eukaryotes, which means their cells store genetic information within a nucleus.  Simpler and more basic than human cells and easier to work with, bread yeast (Saccharomyces cerevisiae) was the first eukaryotic organism to have its genome fully sequenced.  A genome is the hereditary information stored in an organism – the entire DNA sequence of each chromosome.

The S. cerevisiae yeast genome possesses something like 12 million base pairs and 6,000 genes, compared to the more complex human genome with 3 billion base pairs and 20,000–25,000 protein-coding genes.  Although sequencing has become easier in recent times, 18 years ago the thorough examination of the Saccharomyces cerevisiae (beer yeast) genome was no simple task.  That project inspected millions of chromosomal DNA arrangements, involved the efforts of over 100 laboratories and was finally completed in 1996 after seven years of hard work.

* The 6th eukaryotic genome sequenced was also a yeast (Schizosaccharomyces pombe – in 2002) and it contained 13.8 million base pairs. 

The mention of this first completed genome sequencing is significant because it caused an upheaval in the accepted classification of yeast species.  There are probably a great number of yet-undiscovered yeast species in the wild, but presently only a small number (between 600 and 1,500 species depending upon your source of information) are cataloged.  One of the more important fungi in the history of the world, the classification of the Saccharomyces cerevisiae species is still very much in flux.  You may read about the many types of bread yeast, or the hundreds of “varieties” of beer yeast or the hundreds of “strains” of wine yeast – but for the most part these share the same DNA and therefore must be considered the same species.   With beer, and especially with wines, the choice of yeast (strain or variety, and species where applicable) can profoundly influence the beverage’s flavor profile.

Bad fungus

“Almost all yeasts are potential pathogens” but none of the Saccharomyces species or close relations have been associated with pathogenicity toward humans.   “Candida and Aspergillus species are the most common causes of invasive fungal infection in debilitated individuals”, with 6 species (Candida: albicans, glabrata, krusei, neoformans, parapsilosis & tropicalis) accounting for about 90% of those infections.

Other multi-cellular (non-yeast) fungi affect humanity in various ways: Trichophyton rubrum and / or Epidermophyton floccosum bring us athlete’s foot, ringworm, jock itch and nail infection.  A member of the genus Penicillium (with over 300 species) brings us a life-saving antibiotic which kills certain types of bacteria in the body.   Claviceps purpurea or “rye ergot fungus” – if not immediately lethal or debilitating, brought us a mind-altering alkaloid similar to LSD.  One of the more important negative influences fungi exercise upon us is their capacity to destroy food crops.

Domestication ?

A defining characteristic of domestication is artificial selection by humans.  Domestication means altering the behaviors, size and genetics of animals and plants.  These things were not done to yeast in antiquity.   Isolation of certain beneficial yeast strains only began some 200 years ago, in breweries.  Only relatively recently (by 1938) was one scientist able to cross two separate strains of yeast and come up with a new one.  Although by the 1970s scientists were beginning to mutate and hybridize yeast, it may be with the more recent attempts to engineer yeast to convert xylose (a wood sugar) into cellulosic ethanol that some additional yeast species can confidently be described as domesticated.  Even then “engineering” is a strong word.  Yeast mutate all the time without human help.  Scientists didn’t create a new fungus but started with examples that already decomposed dead trees or other cellulose-containing plant material.  By attenuating the selection process for yeasts with numerous cellulase enzymes, scientists hope to produce economical automotive fuel from sawdust and other normally wasted biomass.  The quest for an ideal yeast-and-bacteria biomass-consuming combination is still ongoing.  This particular process defines artificial selection, not gene modification.

Right now, this very moment anyone can capture wild yeast from vegetable matter or from the very air to make bread or to ferment beer or wine.  In antiquity the women folk who cooked and then later bakers, brewers and tavern keepers likely kept a portion of a previous dough or barm yeast culture as a ‘starter’ simply to hasten the development of the next batch.  While this process might support claims of artificial yeast selection throughout history, one might also be reminded that sanitation during those bygone days was questionable and that exposure to wild yeast and bacteria was probably persistent.  It has always been easy to just whip up a new yeast culture from scratch, as will be explained shortly and as revealed in several recipes from a 120 year old cookbook.


Bread, Beer & Wine

The discovery or invention of wine, beer and bread was unavoidable, and early man deserves no special intellectual credit for the achievement because omnipresent yeasts and bacteria did all the work.  Consider the cavewoman who picked a bountiful harvest of wild grapes and then carted them back home in animal skins or clay-lined baskets to be consumed later.  In a few days’ time wild yeast and bacteria would begin breaking down the fructose and glucose in the juice released from crushed grapes at the bottom of any impermeable container.  The oldest available archeological evidence of a fermented beverage comes from 9,000 year old mead (honey wine) tailings found in northern China.  Here someone had probably, unknowingly enabled the enzymes from yeast to work by adding water to get all the sticky honey out of a container.  Likewise the inescapable discovery of bread and beer is no mystery.  Raw fresh grain is a soft and easily chewable foodstuff.  Dried grain is next to impossible to chew, so ancient man was soon mashing it between two rocks to make the powder called flour.  Dry flour is not very tasty, so the next obvious experiment would be to add water, and later perhaps to cook the gruel over a fire – eventually inventing bread.  The first breads were probably flat breads.  The proper leavening of bread requires several hours of rest for fermentation to create carbon dioxide bubbles, which get trapped in gluten to make bread rise.  Had someone boiled a wet soup from the flour instead and then abandoned it because it wasn’t very good, it would have turned into a beer in a few days.  Perhaps the first beer or ale resulted simply from someone’s bread falling into a pot of water.  Regardless, our encounter with fermentation and the invention of both bread and alcoholic beverages was inevitable.

Briefly, Saccharomyces cerevisiae (or sugar fungus) is typical of many yeast species but is a particularly successful species because it can live in many different environments.  Few of the other 64,000 or so members in the Ascomycota fungal phylum can reproduce both sexually and asexually while also being able to break down their food through both aerobic respiration and anaerobic fermentation – all at the same time.

budding yeast

Under favorable conditions most, but not all, yeasts reproduce asexually by budding, where one cell splits into two.  On average a particular yeast cell can divide between 12 and 15 times.  In a well-controlled ferment, aerobic (with oxygen) respiration allows “sugar fungus” yeast cells to reproduce or double about every 90 minutes.  During respiration carbohydrates donate electrons, allowing cell growth and the production of CO2 and water (H2O).   During anaerobic fermentation carbohydrates undergo oxidation while ethanol and CO2 are produced.  One yeast cell can ferment approximately its own weight in glucose per hour.  Favorable ferment conditions in this context imply moisture, mineral nutrition, a neutral or slightly acidic pH environment and a narrow temperature range of 50° F to 99° F.  Most yeast cells are killed at temperatures above 122°F.

* (No yeast yet known is completely anaerobic nor is fermentation necessarily restricted to an anaerobic environment).
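The 90-minute doubling rate mentioned above lends itself to a quick back-of-envelope calculation.  Here is a minimal sketch (ideal conditions assumed; it ignores the 12–15 division limit per cell and the fact that real cultures slow as nutrients run out):

```python
# Idealized yeast growth, assuming the ~90-minute doubling time
# for aerobic respiration described above.

def cells_after(start_cells: float, hours: float, doubling_minutes: float = 90) -> float:
    """Ideal exponential growth: N = N0 * 2^(t / t_double)."""
    return start_cells * 2 ** (hours * 60 / doubling_minutes)

# One cell pitched into a well-aerated wort, 12 hours later:
print(f"{cells_after(1, 12):,.0f}")  # 8 doublings -> 256 cells
```

Eight doublings in 12 hours looks slow, but the exponential curve catches up quickly: the same formula gives over a million cells by hour 30.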

Under harsh or unfavorable conditions yeasts like S. cerevisiae can become dormant and reproduce sexually by producing spores.  Spores can survive for hundreds of years, perhaps indefinitely, and like many other infinitesimal items can remain airborne for years before coming back into contact with the surface of the earth.  Anyone questioning this assertion should have a look at Lyall Watson’s book, titled “Heaven’s Breath: A Natural History of the Wind”.


A typical yeast cell measures about 3–4 µm (microns, or millionths of a meter) in diameter.   Dry packaged yeast as imaged above can survive a long time when refrigerated.  The 3 large baker’s yeast packages pictured at the bottom are labeled as containing 21 grams of yeast each.  The 3 brewer’s yeast packages on top are labeled 5 grams.   Compressed yeast, which contains fewer yeast cells per gram because less water has been removed, is estimated to hold between 20 and 30 billion living organisms per gram.  The physical volume of that gram would be about the size of a pencil eraser.
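Those per-gram figures invite a little arithmetic.  A rough sketch using the 20–30 billion cells-per-gram estimate for compressed yeast quoted above (dry yeast, having had more water removed, would contain even more cells per gram, so treat these as lower bounds):

```python
# Rough cell counts per package, using the compressed-yeast
# estimate above (20-30 billion living cells per gram).
CELLS_PER_GRAM_LOW = 20e9
CELLS_PER_GRAM_HIGH = 30e9

package_grams = 21  # labeled weight of the pictured baker's yeast packages
low = package_grams * CELLS_PER_GRAM_LOW
high = package_grams * CELLS_PER_GRAM_HIGH
print(f"one 21 g package: {low:.1e} to {high:.1e} cells")  # 4.2e+11 to 6.3e+11
```

Even at the low estimate, a single package holds several hundred billion organisms – more yeast cells than there are stars in the Milky Way.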


In general, bacteria are to be avoided during normal food and beverage production, but as usual there are exceptions.   Many of the approximately 125 species of lactobacillus bacteria are closely associated with food spoilage.  Without the assistance of beneficial bacteria (several of which are lactobacillus members), however, we would have no vinegar, chocolate, cider, cheese, kim-chi, pickles, sauerkraut, sourdough bread or yogurt.  Bacteria can drive fermentation by themselves.  More preferably, certain beneficial bacteria can assist yeasts in the fermentation reaction for breads, beers or wines and are sometimes deliberately used to do so.


In baking or brewing it is the enzymes that yeasts or bacteria possess or produce which catalyze chemical reactions and drive fermentation.  A mixture of enzymes might be needed to successfully break down complex longer chained carbohydrates, before either bread leavening or ethanol production is achieved.  In alcoholic fermented beverages, enzymes might be acquired from sources beyond yeast and bacteria, such as from human saliva where for a thousand years descendants of the Incas have chewed maize and spit into common vats to produce the wine called “Chicha”.  The rice wine “Sake” is made with the help of enzymes from a (non yeast) fungus mold named Aspergillus oryzae.  The enzymes used to create the Mongolian horse milk wine known as “Ayrag” or “Kumis” came from the lining of a bag sewn from a cow’s stomach.   There are far too many types of enzymes to list here but the names of some important ones often end in the suffix “ase” (as in: lactase, saccharase, maltase, alpha amylase or diastase, zymase or invertase and alpha-galactosidase).    

Sugar or starch

To briefly outline and oversimplify a topic that deserves more attention: there are many names for, and many types of, starches and sugars and the enzymes needed to break them down.  There are simple sugars, complex sugars and very complex sugars or, conversely, one could say there are monosaccharides, disaccharides, oligosaccharides and polysaccharides.  Glucose (or dextrose), fructose (or levulose), galactose and ribose are monosaccharides and examples of the simplest sugar molecules.  Two monosaccharides are found combined in a disaccharide – as in sucrose, lactose or maltose.  Table sugar is almost pure sucrose.  An enzyme like invertase (known by a dozen other names) is needed to split sucrose into its two mono or simple sugar molecules (glucose and fructose) before fermentation into ethanol and CO2 can commence.  Oligosaccharides generally contain anywhere between 3 and 9 monosaccharides.  Polysaccharides are even longer, linear or branched polymeric carbohydrates and may sometimes contain thousands of monosaccharides.  Starch and cellulose are examples of polysaccharides.

Sugarcane was originally indigenous to Southeast Asia and was slowly spread by man to surrounding regions.  In ancient times sugar was exported and traded like a valuable spice or medicine – not as a food commodity.  There was some spread of sugarcane cultivation in the medieval Muslim world, but otherwise cultivation did not blossom until the 16th century when colonials reaped their first sugar harvest in the New World (Brazil and the West Indies or Caribbean Basin).  Sugar from sugar beets was not realized until a German chemist noticed that the beet roots contained sucrose.  The first refined beet sugar commodity appeared around 1802.



“Leaven” is the ancient equivalent term for yeast, and it caused bread to rise.  Leaven was mentioned in the Bible when Moses led the Israelites out of Egypt, where they all left in a hurry without waiting for their bread to rise.  Flat, unleavened, unremarkable bread is served during Passover, which is not a Jewish feast or celebration but a remembrance of deliverance, simplicity, haste and powerlessness.  “Yeast” is a younger word with roots from Indo-European and Old English words meaning surface froth, bubble, foam and boil.  In times past, and probably for many centuries, housewives and cooks usually made both bread and beer on a frequent basis from a leaven-yeast starter that they maintained in the kitchen.  In both Medieval Europe and colonial North America many households also maintained a constant supply of “small beer” on hand for servants and children or for general consumption.  Small beer had a low alcohol content but some taste, and since its wort had been boiled during brewing it was usually much safer to drink than the local water.  Two centuries ago some children drank small beer with breakfast just like today’s children might drink orange juice.

Almost all bread before the 1840s was probably a form of sourdough bread.  Without the help of either bacteria or refined sucrose, S. cerevisiae yeast alone cannot properly break down the starches (polysaccharides or carbohydrates) in flour, work its fermentation or cause bread to rise.  In the early 1800s, for the first time, bakers collectively began making sweet breads (as opposed to sour) by using bottled yeast skimmed off and collected from ale (beer) vats.  This renaissance in baking quickly spread outwards from Vienna, Austria.  In general, bakers started buying top-fermenting beer yeast from brewers.  Initially the yeasts were collected by skimming barm or krausen off the top of a beer vat and putting it into bottles.  In about this same time frame another renaissance or revolution was occurring in the beer world.   German brewers were learning to make lagers, which employed a different (bottom-dwelling) yeast and much cooler and longer fermentation periods.  At the time lagers were a taste sensation and considered a great improvement over the heavier ales.  With many brewers ‘changing horses in mid stream’ to use different yeasts and processes in order to jump on the lager bandwagon, bakers in Vienna and elsewhere were left without convenient sources of sweet yeast.  To fill that void ‘press yeast’ was developed.  The forerunner of modern baker’s yeast, press yeast was first skimmed from the top of a dedicated grain mash, then washed and drained carefully before being squeezed in a hydraulic press.  Modern baker’s yeast has pretty much been selected for optimum carbon dioxide production.  Such yeast would still make a good ale.  Bread dough makes alcohol while fermenting but that escapes when it is baked.

* The grains corn and rice have no gluten.  To make breads with these grains rise, flour with gluten must be added. 

* “Quick breads” like biscuits, pancakes, bannock, scones, sopapillas and cornbread are made with “self-rising flour” or regular flour with the help of baking powder.  Self-rising flour merely contains its own baking powder.  Baking powder is a mixture of soda, acid salts and starch (which helps keep the other two ingredients inactive).  Baking powder is basically a little bomb, a little chemical reaction for making gas bubbles, waiting only to be triggered by the addition of liquid.


Sourdough bread

Sourdough is a vague term.  There are many ways to create a sourdough starter.  While the name implies a sour taste due to contribution of bacteria and / or wild yeast, some sourdoughs taste little different than normal commercial sweet bread.  Some sourdough starter recipes actually call for baker’s yeast to be used while others might begin with pineapple juice, potatoes or even yeast captured in an opened can of beer left on the kitchen counter top for about a week.  A characteristic practice of sourdough bread making is that a portion of the ‘sponge’ is to be retained after each dough batch and is stored in a cool place to be used as the next starter.  ‘Sour mash’ whiskey has the same connotation – part of the original yeast and enzyme culture is retained and used in the next batch – maintaining consistency of product.   In brewing “re-pitching” the yeast is similar to using a sourdough starter; a portion of the live yeast from the bottom or top of a wine must or grain mash is saved to be reused again.

In the 1840s, as the first Bavarian lager technology was reaching America, gold miners were about to congregate in the California Gold Rush.  San Francisco is a modern bastion of sourdough bread patronage, with some restaurants or bakeries claiming to have maintained the same starters since the Gold Rush days.  One species of lactic acid bacteria found in some sourdough is actually named after the city: Lactobacillus sanfranciscensis.  These starters might also include species of yeast (like Saccharomyces exiguus or Candida milleri) that can leaven bread by working on polysaccharides instead of simple sucrose.

Homemade yeast

While fresh compressed yeast was becoming common in the urban food markets of Europe and America by the 1870s, many individuals (especially those in remoter areas) simply made their own yeast.  The “White House Cook Book” was an authoritative publication (©1887 and before) used by ambitious housewives across the country.  The book gives several recipes for starting a yeast culture, including the use of milk or salt and even drying the yeast into cakes for later use.  One of the book’s recipes for yeast is simply titled “Unrivaled Yeast” and it resembles the following (the actual recipe is on p. 242):

- boil 2 oz. of hops in 4 qts. of water for 30 minutes, strain and let cool

- mix this water in a large bowl with 1 qt flour, ½ cup salt and ½ cup brown sugar – let stand for 3 days

- mix this with 6 boiled and mashed potatoes – let stand for another day, stirring frequently.  

- ready to use or to be stored in bottles for future use (good if kept cool for about 2 months).

Obviously the yeasts native to the potatoes were killed by boiling, so yeasts from the atmosphere, and perhaps from the flour as well, were the ones captured.  Sanitation and sterilization of utensils was and still is important to limit the procreation of undesirable bacteria.   Hops (flowers of the Humulus lupulus plant) are frequently mentioned in these older recipes because hops, which were also used as herbal medicine, act as an antiseptic / antibacterial preservative by inhibiting bacterial growth but not beneficial yeast growth.

* The Reinheitsgebot or Bavarian Purity Law (decreed for Munich in 1487 and extended to all of Bavaria in 1516) specified the use of only water, barley and hops for the brewing of beer.   The contribution of yeast was not yet appreciated, but the antibacterial benefits and virtuous bitter flavor components of hops were.  Evidence suggests that hops were being used in Bavarian beer as early as 736 in an abbey outside Munich.  The Reinheitsgebot also had the effect of discouraging competing imported Belgian beers, which preferred to use gruit, and of preserving the wheat harvest for those needing to bake bread for food.

There are many, many other interesting facts to discuss about yeast, enzymes and bacteria in regards to fermentation but this post has to draw a conclusion or come to an ending somewhere.  No more time will be taken to examine yeast killing sulfides in wine, the alcohol tolerance of different yeasts, turbo yeast or how Champagne is created by secondary fermentation.  Somehow it seems that yeasts have used us just as much as we have used them.  We have changed their nature little – if at all.  For the small percentage of yeast species we have identified, we are on the verge of understanding the true nature of just a few.

Water Turbines


The Egyptians were using mechanical energy to lift water with a wheel in the 3rd century BC.   A few centuries later, by the 1st century AD, Greek, Roman and Chinese civilizations were using waterwheels to convert the power of flowing water into useful mechanical energy.   The word “turbine” was coined from a Latin word for “whirling” or “vortex”.  The main difference between a water wheel and a water turbine is usually the swirl component of the water as it passes energy to a spinning rotor.  Although the Romans might have been using a simple form of turbine in the 3rd century AD, the first proper industrial turbines began to appear about 200 years ago.  Turbines can be smaller in diameter for the same power produced, spin faster and can handle greater heads (water pressure) than waterwheels.  Windmills and wind turbines are generally differentiated by the reasoning that windmills turn wind power into mechanical energy whereas ‘wind turbines’ convert wind power into electricity.  This post attempts to show individuals with an exploitable water source that modest advancements in ‘micro’ hydro technology have made it feasible to create useful power from low water heads or from very modest water sources.


Above, the horizontal undershot waterwheel requires the least engineering and landscaping labor to install; the width of the runner can be tailored to match the flow rate and only a small water ‘head’ is required.  The ‘breastshot’, ‘overshot’ and ‘backshot’ styled waterwheels are progressively more efficient.

Water head can be thought of as the weight of water in a static column.  Since water is essentially incompressible, the weight of water in a pipe is directly related to the pressure at its bottom (measured in psi, or pounds per square inch).  As a stream drops in elevation, its head is a measurement of that drop.  Water weighs 62.427 lbs per cubic foot.  There are 1,728 cubic inches in a cubic foot.  A cube of water 12” high, 12” wide and 12” deep would exert a pressure of ((62.427 / 12) / 12) or 0.433 lbs per square inch.   Any column of water 1 ft. high, regardless of width, still has a water head of 1 ft. and a pressure of 0.433 lbs/in².   Water drop is simply multiplied by the constant 0.433 to determine the potential psi.
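The head-to-pressure arithmetic above is easy to wrap in a pair of helper functions.  A small sketch (the function names are mine, invented for illustration):

```python
# Head-to-pressure conversion worked through above: water weighs
# 62.427 lb per cubic foot, and a square foot is 144 square inches,
# so a 1-ft column exerts 62.427 / 144 ~= 0.433 psi at its base,
# regardless of the column's width.
PSI_PER_FOOT = 62.427 / 144

def head_to_psi(head_ft: float) -> float:
    """Static pressure at the bottom of a column of the given head."""
    return head_ft * PSI_PER_FOOT

def psi_to_head(psi: float) -> float:
    """Inverse: feet of drop needed for a given pressure (~2.31 ft per psi)."""
    return psi / PSI_PER_FOOT

print(round(head_to_psi(100), 1))  # 100 ft of head -> ~43.4 psi
print(round(psi_to_head(1), 2))    # 1 psi -> ~2.31 ft of drop
```

The 2.31 figure produced by `psi_to_head(1)` is the same constant used later in this post for penstock drop.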


Boyden turbine

A Frenchman named Fourneyron invented the first industrial turbine in 1827.  The idea was brought to America and improved upon in the form of the Kilburn turbine in 1842.  By 1844 a conical draft tube addition resulted in the Boyden turbine.  There were dozens of Boyden turbines in operation in the American northeast by the time radical abolitionist John Brown raided Harper’s Ferry in 1859.   Located at the confluence of the Shenandoah and Potomac rivers, Harper’s Ferry was a national armory and a beehive of activity where gunsmiths made small arms.   In 1859 at least 2 Kilburn and 5 Boyden turbines were driving the jack-shafts and belts needed to power the lathes, sawmills and other equipment necessary to keep 400 employees busy at the armory.

Fourneyron’s turbine and the subsequent Kilburn and Boyden types were themselves followed by increasingly efficient turbines including the Leffel double turbine, John B. McCormick’s mixed-flow turbine, and the New American and Special New American turbines.  All of these are known as outward flow reaction turbines (which are reminiscent of cinder, sand or fertilizer spreaders – but with water spraying out at the bottom).


A different type of turbine called an inward flow (or radial flow) reaction turbine was developed by James B. Francis in 1849.  In the snail-shaped Francis turbine water is sucked into a spiraling funnel that decreases in diameter.  Used at the beginning of the 20th century mainly to drive jack-shafts and belts for machinery in textile mills, Francis type turbines soon became the type favored for hydroelectric plants and are the type most frequently used for that purpose today.  This <link to an image> apparently taken in Budapest before 1886 shows what looks to be a Francis turbine being installed in the vertical axis rather than the horizontal axis.


A “runner” is that part of a turbine with blades or vanes that spins.   As with any other turbine the scale of dimensions can be adjusted up or down to suit individual needs.   Although small Francis turbines are produced, the ones used in large hydroelectric power stations are impressively huge – some producing more than a million horsepower each (1,341 hp = 1 megawatt).   The largest and most powerful Francis type turbines in the world are in the Grand Coulee Dam (Washington, USA).  The runners of the turbines there have diameters of 9.7 meters and are attached to generators producing as much as 820 MW each.   China’s “Three Gorges Dam” is capable of the world’s largest electrical output, however, with 32 main generators producing an average 700 MW each for a total 22,500 MW optimum output.   Located between Brazil and Paraguay, the world’s second largest dam (in terms of generating capacity) is the Itaipu dam with 20 Francis turbines powering 700 MW generators.   In 2012 and 2013 Itaipu’s annual electrical output actually surpassed that of Three Gorges due to the amount of rainfall and available water.
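The million-horsepower claim above is easy to verify with the 1,341 hp per megawatt conversion quoted in the text:

```python
# Quick unit check using the conversion quoted above: 1 MW = 1,341 hp.
HP_PER_MW = 1341

grand_coulee_unit_mw = 820  # the largest Grand Coulee generators
print(f"{grand_coulee_unit_mw * HP_PER_MW:,} hp")  # 1,099,620 hp -- over a million
```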


Another type of reaction turbine, developed by an Austrian in 1913, looks like a boat propeller.  The blades or vanes on a Kaplan designed hydro turbine are adjustable, allowing the turbine to be efficient at different workloads or with varying water pressures.  Although complicated and expensive to manufacture, the Kaplan design is showing up more frequently around the world, especially in projects with low-head, high-flow watersheds.  They can be found working in either the vertical or the horizontal plane.  Large Kaplan turbines have been working continuously for more than 60 years at the Bonneville dam.  The Bonneville dam is on the Columbia River between Washington and Oregon, several hundred miles downstream from the Grand Coulee dam.  Both dams were started at the same time during the Depression and were initiated by Roosevelt’s (FDR’s) “New Deal”.  Small inexpensive Kaplan turbines (without adjustable vanes) can be made to work in streams with as little as 2 feet of head.


The so-called “Tyson” turbine looks like it could qualify as a Kaplan turbine, but this modern example of micro hydroelectric technology encases its own generator in a waterproof housing.  The unit is submerged in a stream and usually suspended from a small tethered raft.  The stream can be shallow but obviously a high flow rate will encourage the best electrical generation.

Yet another type of water turbine is loosely referred to as a “crossflow turbine”.  In the early 1900s two individuals on opposite sides of the world independently arrived at about the same turbine design.  A Hungarian professor named Banki and an Australian engineer named Mitchell invented turbines that combine aspects of both a reaction (or constant-pressure) turbine and an impulse (or free jet) turbine.  The runner of a Banki-Mitchell (or Ossberger) crossflow turbine is cylindrical and resembles the barrel fan that one might find in a forced-air furnace or evaporative swamp cooler.  The design uses a broad rectangular water jet that travels through the turbine only once but passes each runner blade twice.  The moving water has two velocity stages and very little back pressure.


Most suited to locations with low head but high flow, low-speed crossflow turbines like this have a flat efficiency curve (the annual output is fairly constant and not as much affected by a fluctuating water supply as some other designs are).  Large commercial crossflow turbines are manufactured that can handle 600 ft. of head and produce 2,500 hp.  Small homemade Banki-Mitchell units have been constructed that are capable of producing about 400 watts using a car alternator with 5.5 CFS (cubic feet/sec) of water from a stream with a head of only 33 inches.  These units can make considerable noise, so to minimize vibration they should be well balanced and spun at moderate revolutions per minute.
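* For the curious, that 400 watt figure is easy to sanity-check with the standard hydro power formula, P = ρ·g·Q·H × efficiency.  The ~30% combined turbine and car-alternator efficiency used below is an assumption for illustration, not a measured value:

```python
# Back-of-envelope hydro power: P = rho * g * Q * H * efficiency.
# Flow and head figures are the Banki-Mitchell example from the text;
# the 30% combined turbine + car-alternator efficiency is an assumption.

RHO_WATER = 1000.0   # density of water, kg/m^3
GRAVITY = 9.81       # gravitational acceleration, m/s^2

def hydro_watts(flow_cfs, head_inches, efficiency):
    """Electrical output (watts) for a given flow, head and overall efficiency."""
    q = flow_cfs * 0.0283168    # cubic feet/sec -> cubic meters/sec
    h = head_inches * 0.0254    # inches -> meters
    return RHO_WATER * GRAVITY * q * h * efficiency

# 5.5 CFS through 33 inches of head at an assumed 30% end-to-end efficiency:
print(round(hydro_watts(5.5, 33, 0.30)))   # ~384 W, close to the quoted 400 watts
```

The raw hydraulic power of that little stream is nearly 1.3 kW; the rest is lost in the runner, belt and alternator.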


Two rising celebrities in the world of mini or micro hydroelectric technology are both impulse turbines.  The Pelton wheel or runner usually works in the vertical plane, and the somewhat similar Turgo in the horizontal.  Water pressure is concentrated into a jet that impacts the spoon-shaped cups of the Pelton or the curved vanes of the Turgo.  These systems capitalize on high-head, low-flow water sources.  Turgo runners are sometimes quite small (like 3 or 4″ in diameter) and are designed to run at high speeds.  A small uphill water source and enough penstock (piping) to reach it are the main requirements for making one of these small impact turbines useful.  Under the right circumstances a small Pelton or Turgo wheel of just a few inches in diameter is capable of producing perhaps 500 watts.  In the absence of running streams, snow pack or plentiful rainfall, an individual living in a mountainous area might still be able to collect up-slope groundwater from perforated pipes buried in boggy areas, springs or the drainage ditches alongside roads.  A long run of water hose, polyethylene or polyvinyl chloride (PVC) pipe could conduct the water down slope, gaining another pound per square inch of pressure for every 2.31 feet of drop.  Water catchment from barn and house roofs could be redirected to holding cisterns and used by these little turbines when appropriate to augment other alternative off-GRID power systems.
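* That 2.31-feet-per-psi rule of thumb is handy enough to put into a couple of throwaway functions.  A sketch:

```python
# Rule of thumb from the text: water gains 1 psi of static pressure for
# every 2.31 feet of vertical drop (about 0.433 psi per foot).

def head_to_psi(head_feet):
    """Static pressure (psi) at the bottom of a water column head_feet tall."""
    return head_feet / 2.31

def psi_to_head(psi):
    """Feet of head needed to produce a given static pressure."""
    return psi * 2.31

# A penstock dropping 100 vertical feet down a hillside:
print(round(head_to_psi(100), 1))   # 43.3 psi at the nozzle
```

This is static pressure only; the working pressure at the nozzle will be somewhat lower once friction losses in a long pipe are subtracted.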


Built 1901 – used to power the mining town of Victor, CO. Courtesy of Gomez.

The Pelton wheel was patented in 1880, but Lester Allan Pelton actually got the idea from using and examining similar Knight water wheels in the placer-mining gold fields of 1870s California.  Employing water often diverted by sluices to a holding pond before being collected into a penstock and dropped further, miners washed entire hillsides away with jets of high-pressure water.  The tip of this water cannon was a nozzle called a “monitor”, and there was no ‘off button’.  Most of these hydraulic mining monitors spewed water around the clock, so it was probably just a matter of time before some enterprising miner attempted to convert that wasted energy into useful mechanical energy by spinning a wagon wheel with pots and pans attached to its rim.  While ‘Knight wheels’ (the first impact water turbines) were originally constructed to power saws, lathes, planers and other shop tools, some were actually used in the first hydroelectric plants built in California, Oregon and Utah.  Lester Pelton’s innovation was to extract energy more efficiently from a water jet by splitting the cup and deflecting the splash out of the way.


Between the 1870s and the 1890s, innovations in both hydroelectric turbines and alternating current were occurring at a breakneck pace.  The first hydroelectric power schemes began to appear from 1878 onward and for several years created only DC current.  In the three years between 1886 and 1889, the number of hydroelectric power stations in the U.S. and Canada alone quadrupled from 45 to over 200.  AC development milestones during this period include step-up and step-down transformers; single-phase, polyphase or triple-phase AC; and great improvements in the distance of power transmission.  <This site> provides an interesting history and timeline on the maturation of AC power.

The Ames hydroelectric power plant in Colorado claims to “be the world’s first generating station to produce and transmit alternating current”.  Perhaps that claim should be amended to specify only “AC for industrial use”.  Originally the Ames plant attached a 6-foot-tall Pelton wheel to a Westinghouse generator.  The largest generator ever built up to that time, it made 3,000 volts of single-phase AC at 133 Hz.  The Pelton wheel was driven by water from a penstock with a head of 320 feet.  The power was transmitted 2.6 miles to an identical alternator/motor driving a stamp mill at the Gold King Mine.  The mine owners chose this newfangled electricity over steam-powered machinery because of the prohibitive cost of shipping coal by railway.  In 1905 the Ames power plant was rebuilt with a new building, two Pelton wheels with separate penstocks from two water sources, and a General Electric generator of slightly lower output capacity.  After 123 years this facility’s impact turbines are still producing electricity.

The success of the Ames power plant, along with a well-done 1893 World’s Fair exhibit by Tesla and Westinghouse, helped determine a victor in the famous “War of the Currents” and, more immediately, who would win the prestigious contract for the Adams power station soon to be constructed at Niagara Falls.


The main characters in the ‘War of the Currents’ were (from left to right above) the DC proponents Thomas Edison and J.P. Morgan and their AC rivals Nikola Tesla and George Westinghouse.  Pride, patents, reputations and big money were at risk in this somewhat ridiculous conflict.  At its peak the quarrel was exemplified by Edison going about the country staging demonstrations wherein he electrocuted old or sick farm and circus animals with ‘dangerous’ AC current.  It is rumored that the electric chair used for executions was itself created due to a secret bribe from Edison.  In response, Tesla staged some carefully controlled demonstrations in which he shocked himself with AC to prove its safety.  In truth both DC and AC are potentially deadly at higher voltages, but AC may ‘win out’ slightly because its alternating fluctuation can induce ventricular fibrillation (where the heart loses coordination and rhythm).

* For those who may not know: Edison was a prominent inventor who formed 14 companies and held 1,093 patents under ‘his’ name, although his formal education consisted of only 3 months of schooling.  Once the largest publicly traded company in the world, General Electric was formed by a merger with one of Edison’s companies.  J.P. Morgan was one of the most powerful banker/financier/robber barons in the world in the 1890s.  He reorganized several railroads, created the U.S. Steel Corporation, and bailed the government and U.S. economy out of two near financial crashes – once in 1895 and again in 1907.  He was also self-conscious about his big nose and did not like to have his picture taken.  Recognized as a brilliant electrical and mechanical engineer, Tesla never actually graduated from his university.  Immigrating to the U.S. in 1884, Tesla even worked for Edison before the two had a falling out.  Westinghouse attended college for 3 months before receiving the first of his 100 patents and dropping out.  He went on to found 60 companies.

Although AC has been the favored method of current transmission for the last century, DC power never fully capitulated in the War of the Currents.  Considering its storage benefits, DC may someday stage a spectacular comeback.  In cities like Chicago and San Francisco an old DC grid may still run parallel to its AC complement.  Most consumer electronics convert AC into DC anyway.  DC offers some advantages over AC, including battery storage, which provides load leveling and backup power in the event of a generator failure.  There is no convenient way to store excess AC power on the GRID, so it is shuffled around for as long as possible.


Alternating current originally offered advantages over direct current in its ease of transmission.  High voltage / low current travels more efficiently in a wire than low voltage / high current will.  The introduction of the transformer (which works with AC but not DC) allowed AC to be “stepped up” to a higher voltage, transmitted, and then stepped back down to usable power at the destination.  DC current (under the Edison scheme) had to be generated very close to its final destination or else use expensive and ungainly methods to achieve transmission over longer distances.  Voltage drop (the reduction of voltage due to resistance in the conducting wire) affects both currents equally: due to resistance, some power will always be lost as heat during transmission.  AC, however, suffers an additional resistance loss that does not affect DC.  The “skin effect” is the tendency of AC to conduct itself predominantly along the outside surface of a conductor rather than through the conductor’s core.  The whole wire is not being used – just the skin.  This skin-effect resistance increases with the frequency of the current.  This phenomenon, along with new technology for manipulating DC voltages, has recently encouraged several companies to construct new High Voltage Direct Current (HVDC) power lines for long-distance transmission.  The Itaipu Dam mentioned earlier, for example, transmits HVDC over 600 kV lines to São Paulo and Rio de Janeiro – some 800 km away.
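* The severity of the skin effect can be estimated with the standard skin depth formula, δ = √(ρ / (π·f·μ)).  The depth grows as frequency falls, becoming effectively infinite for DC – which is why DC gets to use the whole wire.  A quick sketch (the copper resistivity is the usual textbook value):

```python
import math

# Skin depth: the depth at which AC current density falls to 1/e of its
# surface value. Conductor material much deeper than this is largely wasted.

MU_0 = 4 * math.pi * 1e-7    # permeability of free space, henries/meter
RHO_CU = 1.68e-8             # resistivity of copper, ohm-meters (textbook value)

def skin_depth_mm(freq_hz, resistivity=RHO_CU, mu=MU_0):
    """Skin depth in millimeters: sqrt(resistivity / (pi * f * mu))."""
    return math.sqrt(resistivity / (math.pi * freq_hz * mu)) * 1000

print(round(skin_depth_mm(60), 1))    # ~8.4 mm for copper at 60 Hz
print(round(skin_depth_mm(133), 1))   # shallower still at the Ames plant's 133 Hz
```

For a fat transmission conductor several centimeters thick, an 8 mm skin means much of the copper in the middle is doing little work.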

The huge dams built in the U.S. were not created to provide electrical power to customers but to control and redirect water for the purpose of agriculture.  Even today much of the power created by those dams is used to pump water back uphill so that it can be broadly distributed for irrigation.  In 2008 the U.S. Energy Information Administration (EIA) estimated that only 6% of the nation’s power was generated hydroelectrically, and that amount has changed little in the last 5 years.  The EIA does predict future growth for photovoltaic and wind-generated power.  Canada, with a much smaller population, supplies itself with a greater percentage of hydroelectric power than the U.S. and also has more kinetic energy available in terms of exploitable water resources.


- Wind turbines, water turbines, Archimedes screws and centrifugal pumps run in reverse can be mounted to the same types of alternators or generators.  Small or miniature turbines can be affixed to a wide range of DC motors from tools, toys, treadmills, electric scooters, old printers, stepper motors and servos.  Commonplace AC induction motors from laundry machines, blowers, furnaces, ceiling fans, tools and other sources can be converted into brushless low-rpm alternators by rewiring them or installing permanent magnets in the armature.  Usually (but not always) in a modest off-grid power scheme, AC current from an alternator or magneto needs to be rectified into DC so that the energy can be stored in a deep-cycle ‘battery sink’.  Automotive alternators contain their own rectification, but these are less than ideal for turbines for a couple of reasons.  Charge controllers and inverters are also pertinent subjects in the discussion of alternate energy.  These topics may be addressed in some future post.  For now a final image (of a rectifying ‘full wave bridge’) and some miscellaneous video links are offered.
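* Numerically, what a full wave bridge does is simple: it folds both halves of the AC waveform positive, at the cost of two diode drops.  The sketch below assumes idealized diodes with a typical 0.7 volt silicon forward drop:

```python
import math

# A full-wave bridge folds both halves of the AC waveform positive.
# Two diodes conduct at any instant, so about 1.4 V (2 x 0.7 V silicon
# drop, an assumed typical value) is lost across the bridge.

DIODE_DROP = 0.7   # volts per conducting diode (assumed)

def bridge_output(v_in):
    """Instantaneous output voltage of a full-wave bridge for input v_in."""
    v = abs(v_in) - 2 * DIODE_DROP
    return max(v, 0.0)

# One cycle of a 12 V-peak alternator waveform, sampled 8 times:
cycle = [12 * math.sin(2 * math.pi * t / 8) for t in range(8)]
rectified = [round(bridge_output(v), 2) for v in cycle]
print(rectified)   # no negative samples remain; ready for smoothing and a battery
```

A real charging circuit would follow the bridge with a smoothing capacitor and a charge controller before the battery bank.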


CIVIL 202 – Pelton wheel project –  3 minute video – school science project

Micro Hydro Electric Power- off grid energy alternatives – 7.5 minute video – something of an advertisement

Home Made Pelton Wheel – rather long 12 minute video

Turning Green In Oxford – 9 minute video / power by Archimedes screw

Algonquin Eco-Lodge – 8 minute video – generating by reversing water flow through centrifugal pump

Hot Stuff 2

Good bronze sculptures are still being made today.  source: Google free to use or share filter

Bronze & Brass

As the ancients toyed with fire they created glasses and ceramics and discovered several metals.  In antiquity only about 7 elements (gold, copper, silver, lead, tin, iron and mercury) were recognized as being metals.  Although their ores were used to create alloys, arsenic, antimony, zinc and bismuth were not determined to be unique metals until the 13th or 14th centuries AD.  Adding the discovery of platinum in the 16th century (ignoring the Incas, who were apparently smelting it earlier), that makes a total of only 12 unique metals (out of 86 known today) that mankind recognized before the 18th century AD.  Gold and copper were the first metals to be widely used.  Because of their low melting points, however, the first metals to be smelted might have been tin and lead.  Lead beads unearthed in Turkey have been dated to about 6,500 B.C.  Galena (lead ore) is fairly common and widely dispersed, whereas tin oxide (as in cassiterite, SnO2) is relatively rare.  None of gold, copper, tin or lead, however, was as influential in the course of human events as bronze: a hard alloy which upon occasion has been mixed from all four simultaneously.

The progression of mankind’s technological advancement is broadly categorized into the Stone, Bronze and Iron Ages, the Renaissance, the Industrial Age and presently what some are pleased to refer to as the “Information Age”.  Erudite scholars from some future millennium, however, may look back and call ours simply the “Plastic Age”.  The “Bronze Age” is a subjective term depending upon location or culture, but it implies the mining and smelting of copper that has been hardened with an alloying metal to make bronze weapons and tools.  The Bronze Age began around 3,150 BC (or BCE) in Egypt, 3,000 BC in the Aegean area, 2,900 BC in Mesopotamia and about 1,700 BC in China.  Metalworking in gold, copper and lead (and silver to a lesser degree, because it is found in the same galena ore which produces lead) dates from about 6,000 BC onward and so actually predates the so-called ‘4th millennium BC Bronze Age’.  The first gold and silver coinage comes much later, in the 7th century BC, in the Lydian, Persian and Phoenician cultures.  The first exchangeable copper coinage might have appeared in Greek and Roman societies.

* Both copper and silver ions are germicidal and can inhibit or kill bacteria in water.   The ancient Egyptians employed copper medicinally.  Some modern hospitals incorporate copper in the form of water faucets and doorknobs.

Copper Alloys

We usually think of bronze as being an alloy of copper and tin, and brass as merely an alloy of copper and zinc, but as usual the real picture is more complicated than that.  There are many alloys of copper that are called bronze or brass, and occasionally the distinction is not clear.  There is tin bronze, leaded tin bronze, manganese bronze, silicon bronze, phosphor bronze, aluminum bronze, arsenic bronze and beryllium copper.  The ‘red brass’ copper alloy historically used for casting cannons actually contains about 5 times more tin than it does zinc.  Many modern coins (like the American dime, quarter and half dollar) have a copper core sandwiched between two layers of cupronickel (an alloy of 75% copper & 25% nickel).  The Swiss franc and American nickel (5-cent piece) are actually solid homogeneous cupronickel.  There are a large number of distinctly recognized mixtures of brass as well.  ‘German silver’ (or nickel silver or nickel brass) is similar to the aforementioned cupronickel but also contains zinc.  ‘Muntz metal’ (copper, zinc and a trace of iron) is an alloy thought up a couple of centuries ago to provide a cheaper protective sheathing for ship hulls than the copper one which it replaced.  ‘Nordic gold’ is an alloy of 89% copper, 5% aluminum, 5% zinc, and 1% tin that is used in several Euro coins.


Two Egyptian bronzes
source: Google free to use or share filter

The first bronzes were arsenic bronzes.  While this might be attributable to the fact that the copper ores then smelted usually contained some indigenous arsenic, at some point early metalworkers deliberately added more arsenic to make a harder alloy.  Arsenic bronze is much harder than the original, excessively ductile copper and allows for the creation of useful tools, weapons, body armor and sculptures that will stand up under their own weight.  At some later point in history, tin gradually supplanted arsenic as the preferred alloying metal for bronze.  Tin bronze is not harder or mechanically superior to arsenical bronze.  It is likely that tin ore (which was scarce, and which many civilizations had to acquire by trade) produced an alloy that required less work hardening to yield a sharp sword, or one that would fill a casting faithfully without leaving voids.  Arsenic sublimates (passes directly to vapor without melting) at a temperature lower than the melting point of copper, so some arsenic oxide could be lost during casting.  Arsenic vapors are unhealthy and can attack the eyes, lungs and skin.  The use of tin probably afforded more control over the forging and casting processes.

* Arsenic (atomic element #33) is a metalloid that is used to harden both copper and lead.  Modern lead-acid car batteries usually feature some arsenic as well as some antimony within their lead components.  Long called the “Poison of Kings and the King of Poisons”, arsenic is also a common and widespread groundwater contaminant.  Arsenic compounds were used as a vesicant (blistering agent) and/or vomiting agent in the Lewisite and Adamsite gasses used after WWI.  Arsenic is also used in the green pressure-treated wood preservative known as CCA (Chromated Copper Arsenate).  Within CCA, copper acts to slow the decay caused by fungus and bacteria, arsenic kills insects, and chrome just helps bind or fix the other two to the wood.  When used as a discreet poison, the “poudre de succession” is not deadly in small amounts but can stay in the body and accumulate until it becomes lethal.  Two of the earliest analytical tests developed in forensic toxicology were concerned with determining the presence of arsenic: namely the Marsh test and the Reinsch test.  Significant concentrations of arsenic in groundwater are found in parts of New England, Michigan, Wisconsin, Minnesota, both Dakotas, Bangladesh, Vietnam, Cambodia, and China.

* In ancient times lead was too pliable and ductile a material to make useful tools, but that very characteristic allowed the Greeks and Romans to hammer and roll plumbing pipes to conduct water.  In Rome lead was used to line water cisterns, to pipe water to public drinking fountains, and to bring it into the homes of the very rich.  The possibility of lead poisoning from Roman pipes, however, would seem to have been greatly reduced by calcium buildup within them; Rome sits upon or near large limestone and travertine deposits.  A more likely source of lead poisoning would have been lead dinnerware and acidic foods.  Wine, for instance, can easily leach toxic lead from goblets and cups.

* Incidentally, bronze swords were often preferable to wrought iron swords.  Even in Roman times, officers commonly carried bronze weapons while the rank and file carried iron weapons.  Perhaps bronze swords were superior, or simply more prestigious than their iron counterparts because they looked less crude and did not rust.  The Hittites (c. 2000–1200 BC) are generally regarded as being the first iron-smiths.  Although their iron weapons were less brittle than hardened bronze weapons, they still had to be beaten or wrought from a bloom of roasted, not melted, ore.  The inability to cast iron with a socket complicated the attachment of spear and arrow heads to their shafts.  It took about 3,000 years for furnace smelting technology to progress from copper-melting temperatures to iron-melting temperatures.  The first iron ore to be exploited was probably “bog ore”, a precipitate of iron oxide found in marshy areas, created by bacterial action and the decomposition of iron minerals.  Gradually the mining of rich hematite and magnetite ores took over.  The Greeks used wrought iron beams in the construction of the Parthenon (between 447 and 432 BC).  The Romans occasionally used T-shaped wrought iron girders in construction (as in the Baths of Caracalla).  Eventually it was realized that carbon from charcoal created a stronger iron: steel.


the “Artemision Bronze” – circa 460 BC
source: Google free to use or share filter

The Greeks were masters of bronze casting.  Although the Greeks probably cast as many bronze sculptures as they chiseled from stone, the stone statues have survived where the bronze statues have not.  Every time a new war came along, bronze statues were hacked to pieces to provide the valuable metal needed to forge new weapons and body armor.  Very few Greek bronze statues have survived the ages, and those that have been discovered in modern times (as in the image above, excavated in 1928) have been found underwater.  The Greek “Riace bronzes” date from about 460–450 BC and were discovered by a snorkeler just offshore of Riace, Italy in 1972.  A link is provided to this website because its fine pictures of the Riace bronzes can be enlarged.

Bronze alloys used for casting have the innate ability to expand slightly before they set, and therefore all the fine nooks, crannies and scratches inside a mold are filled in with detail.  Life-sized hollow sculptures were made by a process known as the “lost wax process” (or “investment casting” in jewelry-making or industrial vernacular).


Horses of St Mark’s or “Triumphal Quadriga”
from fotopedia image: courtesy of Nick Thompson

It has not been determined whether the horses of the “Triumphal Quadriga” are of Greek or Roman origin.  They are presumed to date to the 4th century BC and were stolen by Venetian troops, following the Fourth Crusade (1202–1204), from the hippodrome in Constantinople where they had long resided.  Napoleon stole the horses from the Venetians in 1797 and took them to Paris, but they were returned to St Mark’s Basilica in Venice following the Battle of Waterloo in 1815.

Corinthian helmet : 500–490 BC


The iconic Greek Corinthian bronze helmet is an enigma to modern scholars because they cannot determine exactly how it was constructed.  Our understanding of the construction of other helmet designs of the period is less controversial.  Neither casting nor forging by hammer stroke alone adequately explains how the classical Corinthian helmet was built.  The best explanation seems to be that it was a product of both.

Most bronze or brass alloys are denser and heavier than iron or steel.  In the 17th century almost all naval guns and terrestrial cannon were cast in bronze.  Bronze was the best material for the purpose, but while being more durable than iron it was also more expensive.  With the beginning of the 18th century, technology allowed the displacement of bronze artillery by the more affordable cast iron pieces needed to supply growing armies and navies.  The weight of cannon and their placement aboard fighting ships (heaviest at the bottom) was an important consideration, but weight was perhaps a more pressing concern for armies that had to tote them over hill and dale, streams and rivers.  Early French cast iron naval guns were notoriously dangerous and exploded with frequency, while British, American, Swedish and Russian cast iron naval cannon were usually much superior.

The famous or iconic American Liberty Bell is an interesting story in bronze failure.  The metal in the Liberty Bell was cast not once but three times: first in an English foundry when the bell was commissioned by the Pennsylvania Assembly in 1751, and then twice again by American foundry workers John Pass and John Stow.  The bell measures 12 feet in circumference around its rim and weighs 2,080 lbs.  Its bronze composition consists of 70% copper, 25% tin and a remainder mixed from lead, zinc, gold, silver and arsenic.  The bell gained its moniker “Liberty Bell” from zealous abolitionists in the 1830s, not from any association with the Revolutionary War.  The one-ton bell traveled or toured a lot considering its weight, and there is disagreement over when its crack began.  Vigorous ringing encouraged a hairline fracture in the brittle alloy to grow into a wide crack.  The Liberty Bell rang last on Washington’s birthday in 1846, its sound after that no longer being acceptable.

The first brasses seem to have appeared somewhere around 500 BC and are sometimes referred to as calamine brass (calamine is zinc ore containing zinc carbonate or zinc silicate).  In early brasses, calamine ore was introduced to molten copper and the zinc was readily absorbed, producing an attractive and useful alloy.  Zinc melts at 787 °F, a temperature not much greater than that required to melt lead and one which can be produced by a simple campfire.  Zinc boils and turns to vapor at 1,665 °F (907 °C), which is still lower than the 1,984 °F needed to turn copper into a liquid.  Long unappreciated as a metal because heat caused it to escape as a vapor, zinc was not deliberately produced until about the 12th century AD in India, the 16th century in China and (in large-scale production) after 1738 in Europe.  In modern zinc smelting, zinc sulfide is first roasted into an oxide called ‘zinc calcine’.  From there, either electrolysis or one of several complicated processes involving sintering (the electrothermic fusing of powders) or even the distillation of zinc fumes might be employed to retrieve the metal.
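* Since most of the temperatures above are quoted in Fahrenheit, a two-line converter makes them easy to compare against Celsius references:

```python
# A small converter for checking the Fahrenheit figures quoted in the text
# against their Celsius equivalents.

def f_to_c(f):
    return (f - 32) * 5 / 9

def c_to_f(c):
    return c * 9 / 5 + 32

print(round(f_to_c(787)))    # zinc melts near 419 C
print(round(f_to_c(1665)))   # zinc boils near 907 C
print(round(f_to_c(1984)))   # copper melts near 1084 C
```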


What usually distinguishes a brass from a bronze is the presence of zinc and a brighter, more attractive golden color.  Brass is a softer, more malleable alloy than bronze and has some properties that make it uniquely desirable for certain applications.  Brass is used in bearings, gears, valves, locks, keys, doorknobs and clothing zippers because it has a low coefficient of friction.  Brass does not spark as other metals might when struck.  Because of its desirable acoustic qualities and malleability, brass is the favored material for several musical instruments – especially horns.  Brass is also the favored material for ammunition cartridge casings, for a couple of reasons.  First, brass has the capacity to expand and contract quickly: when a cartridge is fired, the brass expands to fill the breech and prevents hot gasses from escaping rearward, then contracts to allow the casing to be ejected.  This action occurs quickly enough to allow for high cyclic rates of fire in machine guns.  Also, brass’s softness and low friction work more fluidly and cause less wear in a firearm’s steel mechanism than any other metal would.  While lead might be added to bronze to improve castability, lead is added to brass to improve machinability.  California mandates that manufacturers of brass keys employ no more than 1.5% lead within keys sold in California, or otherwise label the product as potentially hazardous.

Do it Yourself


If the ancients were able to smelt copper, iron, gold and silver eons ago, it seems reasonable that a lone individual should be able to duplicate that feat today.  Someone attempting to melt copper, for instance, will soon realize that it is not a simple task and that it takes concentrated energy to accomplish.  Above is an image of the bottom half of a homemade crucible furnace; the top has been temporarily removed.  In the center of a charcoal fire sits a crucible made from a scrap of square steel tubing that has had a bottom and two links of chain (for lifting) welded to it.  On the right side a rusty steel pipe conducts forced air from a hair drier into the bottom of the fire.


Above is the mold for the same bottom section of this crucible furnace, made from a plastic flowerpot and some tin cans.  A refractory mix was poured or tamped into the bottom 2″ of the mold and allowed to dry, anchoring the wire reinforcement.  The tin cans were then placed, and the mold was ready to receive more refractory cement between the large can and the plastic flowerpot.  A refractory is simply a building material that retains its integrity at high temperatures.  The refractory used was a mix of sand, Portland cement, fireclay and Perlite.  The ratios of the constituents closely resembled this recipe.


Above is a downward image of the mold and a view of the finished result.  Note that on the left a bolt passes through a small hole in the set (or dried) bottom layer of refractory.  Presumably if the crucible were to leak during a cook, the molten metal should be able to run out the bottom and not be stuck in the bottom of the furnace.

A few notes about this furnace:  

- Even while the crucible and metal it held were white hot inside, the lid could be removed with bare hands – if done quickly.

- Fire at this heat, combined with the forced air, is very destructive of steel crucibles, both inside and outside.  Big flakes of iron oxide are almost guaranteed to slough off, fall in, and contaminate your precious metal.  The best crucibles are made of porcelain or graphite.

- The forced air should enter the furnace at an angle to encourage a whirl or vortex within the fire. 

- Although stoneware ceramics are fired and glazed at temperatures exceeding the melting points of aluminum, brass and copper, they cannot withstand such a rapid rise in temperature.  You can expect a stoneware coffee-cup crucible to shatter within minutes in a furnace like this.

- The interior dimensions of this furnace are a bit too small to reach a useful copper-melting heat from coal or charcoal alone.  There is simply not enough room for a crucible and enough charcoal at the same time.  Propane or waste oil would be better fuels for a furnace of this interior dimension.  These fuels can be introduced into the air pipe before it enters the furnace.  In the case of waste oil (any used automotive oil, diesel fuel or vegetable oil), it can be gravity fed, thinned if necessary with a lighter volatile fraction, regulated by a simple valve, and/or forced by a little additional air pressure.


Jeweler’s crucible
compliments of GOKLuLe

With a little effort an interested reader can find a wealth of information and instructables about crucible furnaces on the Internet.  Here are a few links to help such a reader get started.

In this video the furnace is constructed of stacked firebricks.   Brick furnace in snow.

This guy provides a good 3 part series on the construction of a backyard foundry.  In this video however he constructs his own graphite crucible.   Most people might simply purchase a graphite or porcelain crucible.  It is not necessary for a novice to go through all this trouble, but the information presented is useful.  “Making a Graphite Crucible“.

This video features a rather large furnace, requiring two men to handle the crucible.  “A Brass Casting Demonstration“.


If there is a Hot Stuff part 3 to come in the future it will discuss ceramics and glass.