Familiar Batteries



Although they may at first appear a mundane topic, batteries contribute in increasingly important ways to modern life.   Items from spacecraft to even some failing human hearts depend upon batteries.   Although a myriad of battery types and chemistries exist, none are ideal or very long lasting, and in almost every category the quest for a better battery is an urgent one.   This post attempts to illuminate the construction, advantages and limitations, recyclability and, where applicable, the proper recharging of some of the most common consumer-grade batteries.


The “dry cell” battery made its first appearance at the 1900 Paris World’s Fair.   From this evolved the zinc-carbon cells that were so ubiquitous during the 1960s.   Such cells are not dry, but actually contain an electrolyte of moist paste.   So-called “heavy duty” (zinc-chloride) cells later appeared, offering improved performance through purer chemicals and an electrolyte of zinc chloride.   In today’s marketplace zinc-carbon and zinc-chloride cells have been largely displaced by the more expensive alkaline cell.   All of these cell types are progressive variations of the original Leclanché cell invented around 1866, and in some circles zinc-carbon cells are simply referred to as Leclanché cells.

Leclanché cell image modified from public domain source

The term “primary cell” denotes a battery intended to be thrown away rather than recharged after use, while “secondary cell” denotes a rechargeable type.   Battery chargers for zinc-carbon and zinc-chloride batteries have been built in the past, but their effectiveness was minimal and the process fairly pointless.  The components of these old-style cells do, however, contain materials that might be useful to an experimenter.  For example, the cases of Leclanché-type primary cells are made of useful zinc, and the carbon graphite rod at the cell’s center can be filed to a point at one end and, when attached to a 12 V automotive battery, used to expeditiously solder electrical connections in an emergency.


Furthermore, both zinc-carbon and alkaline cells contain manganese oxide (specifically manganese(IV) oxide), a common inorganic pigment used in dyes, paints, ceramics and glassmaking.   As long as 19,000 years ago in Europe, prehistoric cave painters were coloring cave walls black and dark brown with manganese oxide, achieving umber, sienna and burnt sienna hues by mixing or cooking in varying amounts of iron oxide.   Ammonium chloride (NH4Cl), which composes the electrolyte of the zinc-carbon cell, is made by the reaction of hydrochloric acid and ammonia and has a wide range of applications.   Also known as sal ammoniac, ammonium chloride can be found in food additives, in baked bread where it acts as a yeast nutrient, in salty licorice candy, in cattle feed and in some cough medicines where it acts as an expectorant.   It serves as a nitrogen source in some fertilizers, appears in the glue that bonds plywood and acts as a thickening agent in certain hair shampoos.   Ammonium chloride can clean a soldering iron; it is used in some soldering fluxes, and once upon a time it was even used (along with the help of a little copper) to produce green and blue colors in fireworks.   Zinc chloride (in heavy duty cells) can also be found in non-electrical, corrosive soldering fluxes.  Sometimes used as a disinfectant, in antiseptic mouthwashes and in dental fillings, zinc chloride at higher concentrations can also dissolve cellulose, starch and silk.  It is likewise a frequent ingredient in military smoke grenades.


In a battery, energy density is the amount of energy stored per unit volume or mass.  Alkaline cells have 3-4 times the energy density and a much improved shelf life compared to a zinc-carbon cell.  Appearing on the market in the late 1960s, these usually have an outer shell made of steel.  Although generally thought of as primary batteries, and contrary to what might be stated on a label, alkaline cells can be recharged: a small current of about 65 mA, interrupted periodically, will do the trick.  Commercial pulse chargers for alkaline cells are available but rare.   Alkaline cells get their name from the strong base, potassium hydroxide (caustic potash), used in the electrolyte.   Potassium hydroxide (KOH) is hygroscopic (it has a high affinity for water) and is sometimes used as a desiccant.   Some shaving creams, cuticle removers and the leather-tanning solutions used to remove hair from animal hides employ potassium hydroxide.

The 9 volt battery is properly termed a ‘battery’ because it is composed of a bank of individual 1.5 volt cells.  The construction of the 9 volt battery has varied over the years, but nowadays the most common assembly is six of the rarely seen AAAA-type cells.

Some other, less common primary dry cells that won’t be discussed here include the aluminum battery, chromic acid cell, nickel oxyhydroxide battery, silver-oxide battery and zinc-air battery.  Again, the main difference between a primary battery and a secondary battery is the ease with which the chemical reaction within the cell can be reversed: “A battery charger functions by passing a current through the cell in a direction opposite to that of the flow of electricity during discharge.”


It may be useful at this point to realize that the spiral-wound (jelly-roll or Swiss-roll) construction of some of the battery cells mentioned next can appear similar to the construction of some capacitors.  Over time, in spiral-wound batteries and capacitors alike, the crystalline structure of the plate material or electrolyte eventually changes and causes complications.   The separators that isolate the plates can also deteriorate with age, eventually allowing opposing plates to make contact and short out.  When old devices like radios, stereos and TVs stop working, it is often discovered that bad capacitors caused the problem.   Simpler than a battery cell, a capacitor doesn’t produce electrons; it only stores them.  A capacitor can dump its entire charge in a split second, whereas a battery cell discharges much more slowly.


Nickel–cadmium batteries (NiCd or NiCad) got their name from the chemical symbols of their electrodes.   The first NiCd battery was a wet cell created in Sweden around 1899.   Beneficial attributes of this rechargeable type include its tolerance to being deeply discharged for long periods, its ability to withstand very high discharge rates with virtually no loss of capacity, its lower self-discharge rate, and its performance in cold weather.  Outdoor solar patio lamps are one application where NiCds work admirably.   Negative attributes include a phenomenon known as voltage depression (voltage depletion, “lazy battery” or the “memory” effect).   Voltage depression is attributable to increased internal resistance caused by metallic crystal growth in the cadmium.   Improper or unsophisticated recharging is probably the main reason for the continuing decline in NiCd popularity.  The surface of the cadmium plate in a good NiCd cell has a small crystalline structure; when these crystals begin to grow, the surface area is reduced, so voltage depression and loss of capacity result.  The crystals can grow large enough and sharp enough to penetrate the separator between electrodes.

* It is sometimes possible to temporarily reclaim “spent” NiCd cells or battery packs with a trick.  By zapping NiCd cells or battery packs with a strong DC current, like that from a welder or automotive battery charger (positive to positive and negative to negative), the size and sharpness of the crystalline dendrites within the cadmium hydroxide electrode can be reduced and performance partially restored.   Even battery packs that seem to have been dead for several years can be recovered this way, although if not used constantly they seem to return to their dormant state much sooner than they should.  Nickel-based cells have venting mechanisms to allow gases to escape in the event of heat from overcharging, so the electrolyte might dry up.  As with other “dry cells,” the electrolyte can also migrate away from the terminals over time.

There is not much that can be reclaimed from either a NiCd or NiMH cell.   Cadmium hydroxide is more basic than zinc hydroxide.   Cadmium (Cd, atomic number 48) is a rare, soft, ductile and toxic transition metal found in trace amounts in most zinc ores and often collected as a byproduct of zinc production.  Sometimes replacing zinc in corrosion-resistant coatings, cadmium electroplating of steel is common on aircraft parts.  Cadmium can be found in nuclear reactors, where it controls neutrons in nuclear fission, and red, orange and yellow paint and plastic pigments are often made with cadmium.

NiMH (nickel-metal hydride) cells are similar to NiCd cells, having replaced the negative cadmium electrode with one of a hydrogen-absorbing alloy.   Superior to NiCads in some ways but not in others, NiMH cells only arrived in the marketplace in 1989.   Having 2-3 times the capacity of a NiCd cell, they are useful in high-drain applications like the demands of digital cameras.   NiMH cells, however, have a very high self-discharge rate (perhaps 30% a month), meaning they lose their charge just by doing nothing.   NiMH cells exhibit much less apparent voltage depression or recharge “memory” than NiCd types, but it can still occur.  Unlike NiCd cells, NiMH cells should not be deeply discharged (except on occasion before recharging) and they should be kept “topped up” or recharged frequently.

* The amount of energy expended by a typical “AA” alkaline battery is about 5,000 C (coulombs; 1 C ≈ 6.241 × 10^18 electrons).  Rechargeable “AAs” and some alkalines display the relative capacity of a cell with a “mAh” (milliamp-hour) rating.  One mAh = 3.6 C, and 1,000 mAh = 1 amp-hour = 3,600 C.  In the oft-used gas-tank analogy, the mAh rating represents the size of the tank while the current being drawn represents how fast the gas is being used.  A car with a bigger gas tank will go farther, but the bigger tank will also take longer to refill.

*  The mAh rating stamped on an “AA” battery can be misleading when comparing different types of batteries.  New alkaline “AAs” might carry a 2,500 mAh rating, while rechargeable NiCds or NiMHs might only be rated at 1,200, 1,900, etc.  In high-drain applications (like digital cameras), however, these rechargeable cells will far outlast the alkaline types even before needing a recharge.  Alkaline batteries are not designed for high discharge demands and only deliver full capacity if the power is drawn slowly.
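The unit arithmetic above can be sketched in a few lines of Python (the conversion factors and electron count are the ones quoted in the text; the runtime helper is a deliberately optimistic ballpark):

```python
# Battery capacity unit conversions (figures as quoted above; approximate).
ELECTRONS_PER_COULOMB = 6.241e18

def mah_to_coulombs(mah):
    """1 mAh = 0.001 A x 3,600 s = 3.6 coulombs."""
    return mah * 3.6

def coulombs_to_electrons(coulombs):
    return coulombs * ELECTRONS_PER_COULOMB

def runtime_hours(mah, load_ma):
    """Rough runtime at a steady load; real alkalines fall short at high drain."""
    return mah / load_ma

print(mah_to_coulombs(1000))     # 1 Ah = 3600.0 C
print(mah_to_coulombs(2500))     # a 2,500 mAh alkaline AA = 9000.0 C
print(runtime_hours(2500, 250))  # 10.0 hours, optimistically
```

As the text notes, a high-drain load would shrink that alkaline runtime far below the naive mAh figure, while a NiMH cell would come much closer to it.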


* Some of the first “button” type battery cells were mercury or mercuric-oxide batteries.  Used in hearing aids, watches, calculators and other small portable electronic devices, mercury cells were popular and common between 1942 and the early 1990s.  In the 1990s the European Union and the United States began to legislate this chemistry out of existence.  Mercury cells had a nominal voltage of 1.35 V and high capacity, achieved by using an alkaline electrolyte with zinc and mercuric oxide electrodes.

Cells in the lithium battery family use lithium metal or lithium compounds in the anode but vary widely in choice of cathode and electrolyte.  Lithium cells offer higher voltage and greater energy density than most other battery types, but they are also far more expensive.    Depending on its chemistry, a lithium cell can provide 3.3-3.7 V of nominal cell voltage (compared to 1.5 V for zinc-carbon, zinc-chloride and alkaline cells, or 1.2 V for NiCd and NiMH cells).
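As a sketch of how these nominal voltages add up in series, here is a small, hypothetical pack-sizing helper. The per-chemistry figures are the ones given above; 3.6 V is assumed as a mid-range value within the 3.3-3.7 V lithium spread:

```python
import math

# Nominal cell voltages from the comparison above.
# "lithium": 3.6 V is an assumed mid-range value (3.3-3.7 V by chemistry).
NOMINAL_VOLTS = {
    "zinc-carbon": 1.5,
    "alkaline": 1.5,
    "NiCd": 1.2,
    "NiMH": 1.2,
    "lithium": 3.6,
}

def cells_in_series(target_volts, chemistry):
    """Minimum number of series cells needed to reach at least target_volts."""
    return math.ceil(target_volts / NOMINAL_VOLTS[chemistry])

print(cells_in_series(9.0, "alkaline"))  # 6 - the classic 9 V battery
print(cells_in_series(12.0, "lithium"))  # 4
```

The six-cell answer for a 9 V alkaline target matches the AAAA construction described earlier.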


Lithium prismatic cells of monopolar or stacked configuration are similar in concept to the voltaic pile: the positive and negative plates are sandwiched together in layers with separators between them.   A newer way to construct multiple-electrode cells is to arrange them in what is called a “bipolar configuration.”   This looks like a stacked sandwich or prismatic configuration, but here the negative plate of one cell becomes the positive plate of the next.   The term “bipolar” is almost a play on words, given the historic use of this unusual metal in treating manic depression (more commonly referred to today as “bipolar disorder”).   More on this bipolar topic momentarily.

Because they are pressurized and may use a flammable electrolyte, lithium-ion batteries can be dangerous.   A standard lithium cell is not rechargeable, but a lithium-ion cell is.  While lithium primary cells have electrodes (generally anodes) of metallic lithium, rechargeable lithium-ion battery (LIB or Li-ion) cells use electrodes composed of various materials impregnated with lithium ions.   [Some examples are lithium iron phosphate (LFP), lithium cobalt oxide (LiCoO2), lithium nickel manganese cobalt oxide (NMC) and lithium manganese oxide (LMO).]   Some of the newest battery designs being contemplated by researchers impregnate carbon-nanotube cathodes with lithium on a nanoscopic scale (particles usually measuring between 1 and 100 nanometers).  In the near future we may witness the commercialization of the so-called “nanobattery.”

The capacity of Li-ion rechargeable batteries will diminish substantially after a few years.   Li-ion cells don’t have a “memory” and don’t get confused by shallow discharges.   It is not wise to strain such a battery by frequently discharging it completely, nor is it beneficial to keep it fully charged all the time; quick discharges also place strain on this battery type.  Over time a regularly used Li-ion battery will suffer less capacity loss than one used infrequently.   These cells don’t like extreme cold, but they hate hot temperatures.

Popular lithium-ion polymer batteries (LiPo, LIP) should connote cells built with a non-liquid polymer electrolyte that does not leak.   Confusing the issue, however, manufacturers soon expanded this meaning to include lithium cells with pouch-type flexible polymer casings.

* Lithium is a very curious material.  It does not occur naturally in a pure state because it is a highly reactive alkali metal with one of the highest reduction potentials of any element.  With an atomic number of only 3, refined lithium metal is so soft it can be cut with a knife and so light that it floats on water.  A comparatively rare element and strategically important material, it is hard to acquire and therefore costly; the price of the metal has skyrocketed since WWII.  During that war lithium was mainly used as high-temperature grease for aircraft engines.  Soon after, it was used to stage man’s first nuclear fusion reaction (1952, via lithium transmutation to tritium).  In 1954, combined with hydrogen (as lithium deuteride), it composed the fuel of the Bikini Atoll (Marshall Islands / Castle Bravo) thermonuclear “H” bomb.  That particular test surprised its designers by producing a far more powerful blast than expected (at 15 MT, the greatest yield of any U.S. nuclear test) and created international repercussions concerning atmospheric thermonuclear testing.  Low-concentration but stable lithium hydroxide was stockpiled for many years due to its strategic value in the manufacture of hydrogen bombs.  In nuclear power plants and ship or submarine reactors, lithium hydroxide may be added to the coolant to moderate the corrosive effect of the boric acid used to absorb neutrons.   In some underwater torpedoes a block of lithium might be sprayed with sulfur hexafluoride to produce the steam that cranks the propeller.  Lithium is used in heat-resistant glass and in the manufacture of telescope lenses because lithium fluoride crystals have a very low refractive index.  Chosen for their resonance, lithium niobate crystals are used in mobile phones.  Lithium is used as a red colorant in flares and fireworks, as a flux for welding and soldering, and as a fusing flux for enamels and ceramic glazes because it lowers their melting points.

Sodium affects excitation or mania in the human brain, so doctors and psychiatrists often prescribe lithium as a mood stabilizer.  For treatment of bipolar disorder (manic-depressive disorder), lithium affects the flow of sodium through nerve and muscle cells in the body.   The terms for this disorder denote uncontrolled mood swings from up to down, high to low, and back.   Lithium treats the aggressive, hyperactive and manic symptoms of the disorder.   In humans, amphetamines produce effects similar to the symptoms of mania, and herein lies another interesting quality of lithium: apparently lithium battery cells (a cheap source of the metal) are frequently used as a reducing agent in the illicit manufacture of methamphetamine.  One recipe, called the “Nazi method,” requires anhydrous ammonia, ether, lithium and pseudoephedrine.  A more complicated recipe also uses lithium but replaces the anhydrous ammonia with ammonium nitrate, lye, salt and a caustic drain opener composed of sulfuric acid and a cationic acid inhibitor.  Doubly methylated phenylethylamine (meth) and its precursor amphetamine are both built upon the plant-derived alkaloids ephedrine and pseudoephedrine.   Ephedrine and pseudoephedrine (from the Ephedra distachya plant) are active ingredients found in several brands of effective decongestant.

The largest producers of lithium are Chile and Argentina.  Large deposits of lithium have been discovered on the Bolivian side of the Andes, and a great deal of the metal is dissolved in the oceans.  Acquired primarily from brine lakes, clays and salt pans, where it is refined electrolytically, production of the metal is slow, and there is no standard spot price for it on a futures market or stock exchange.  China has become the world’s largest producer and consumer of lithium-ion batteries.   With the metal ever-growing in utility and popularity, and with huge quantities expected to be required by future electric automobiles, market analysts predict that lithium production will soon fall short of demand.


The construction of lead-acid automobile batteries has changed very little in the last 50-60 years.   A standard car battery reaches its conventional 12.6 volts with only six cells because the nominal voltage of each cell is 2.1 volts.   Typically, in each cell, alternating plates of different polarity {(+) containing lead dioxide (PbO2) and (-) of plain lead (Pb)} are separated by nonconductive paper or synthetic dividers and surrounded by an electrolyte of about 35% sulfuric acid (H2SO4) and 65% water.  The electrolyte of a healthy cell should have a specific gravity of 1.265 at 80°F.

*   During the discharge cycle of a lead-acid battery, the negative plates (lead) combine with the SO4 of the sulfuric acid (H2SO4) to produce lead sulfate (PbSO4), and the electrolyte’s specific gravity goes down.   The electrolyte becomes weaker and the potential between the plates diminishes.   Conversely, during the charge cycle, electricity passed through the plates forces SO4 back into the electrolyte.   The lead sulfate is broken up as lead dioxide and plain lead are re-deposited upon their respective plates, and the specific gravity and voltage (potential between plates) are restored to their proper levels.
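A common hydrometer rule of thumb (an assumption here, not something stated above) maps specific gravity roughly linearly onto state of charge, from about 1.265 when full down to about 1.120 when fully discharged. A minimal sketch:

```python
# Rough state-of-charge estimate from electrolyte specific gravity.
# The endpoints (1.265 full, 1.120 empty) are a common rule of thumb and an
# assumption here; real readings also need temperature correction.
SG_FULL = 1.265
SG_EMPTY = 1.120

def state_of_charge(specific_gravity):
    """Linear estimate of state of charge, clamped to the 0.0-1.0 range."""
    soc = (specific_gravity - SG_EMPTY) / (SG_FULL - SG_EMPTY)
    return max(0.0, min(1.0, soc))

print(state_of_charge(1.265))           # 1.0 - a healthy, fully charged cell
print(round(state_of_charge(1.19), 2))  # roughly half charged
```

Since the specific gravity tracks how much SO4 has left the plates and returned to the electrolyte, this is just a numeric restatement of the discharge/charge chemistry described above.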

Industry nomenclature: lead-acid batteries

Aside from common automotive batteries, buyers also have access to “maintenance free” batteries, “deep cycle” batteries, “hybrid” or “marine” batteries, “gelled” deep cycle batteries and “AGM” (absorbed glass mat) batteries.   The chemistry of these differing lead-acid batteries remains the same, but the quality or quantity of the components changes.  “Maintenance free” batteries are usually just heavy-duty versions of the same basic design: the construction is generally better, the components are thicker and the materials more durable.   Commonly the plate grids contain cadmium, strontium or calcium to help reduce water loss by reducing gassing.   Such batteries are often closed systems (you can’t add water or check specific gravity) and are often referred to as “lead-calcium” batteries.

Automotive batteries are optimized to start car engines; the hardest work they are expected to do is to start a cold engine on a cold day.  Hence they are constructed with many thin plates within each cell to maximize surface area and therefore current output.  An automotive battery is designed to produce a large current for a short time.  Unless abused, a car battery is seldom drained of more than 20% of its total capacity, and allowing it to drain beyond that point (or to self-discharge through long disuse) can be very detrimental to its longevity.   By contrast, “deep cycle” batteries, as used in golf carts, electric forklifts and boat trolling motors, are optimized to provide a steady current for a protracted period.  These can be repeatedly discharged deeply (to 80% of capacity, although doing so strains the battery), whereas an automotive battery cannot be.  Deep cycle batteries have fewer but thicker plates within each galvanic cell, made of higher-density plate material; less electrolyte and better separators are also used.  Alloys used for deep cycle cell plates may incorporate more antimony than car batteries do.

* Lead-acid batteries generally have two common ratings stamped upon them: CCA and RC.   Cold Cranking Amps (CCA) is the number of amps a battery can deliver for 30 seconds at 0°F (-18°C) while maintaining at least 7.2 volts (for a 12 V battery).    Reserve Capacity (RC) is the number of minutes that a battery can deliver 25 amps while staying at or above a 10.5 volt threshold.
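An RC rating can be turned into an approximate amp-hour figure by multiplying the rated minutes by the 25 A test current. This is a ballpark sketch only, since real usable capacity varies with discharge rate:

```python
# Convert a Reserve Capacity rating (minutes at 25 A) to approximate amp-hours.
# Ballpark only: usable capacity shrinks as the discharge rate rises.
RC_TEST_AMPS = 25

def rc_to_amp_hours(rc_minutes):
    return rc_minutes * RC_TEST_AMPS / 60

print(rc_to_amp_hours(120))  # a 120-minute RC battery is roughly 50.0 Ah at that rate
```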

Generally, a deep cycle battery will possess only one-half to three-quarters the cold cranking amps, but two to three times the reserve capacity, of an automobile battery.  A deep cycle battery can endure several hundred total (complete) discharge/recharge cycles, whereas a car battery is simply not designed to be totally discharged.  This reserve capacity and discharge tolerance make deep cycle batteries preferable to automotive types for off-grid electrical storage.  Any lead-acid battery, however, will last longer if it is not deeply discharged: a battery discharged to 50% every day will last about twice as long as one cycled to 80% of capacity daily.  For less strain and increased longevity, deep cycle batteries should probably be drained no more than 10% on a daily basis.

“Hybrid” or “marine” batteries may be labeled deep cycle but are something of an undesirable compromise.  “Gelled” deep cycle batteries offer a safer, less hazardous electrolyte in gel form, but at a heftily increased price.  AGM (absorbed glass mat) batteries incorporate a boron-silicate glass mat between plates; also called “starved electrolyte” batteries, their mats are only partially soaked in acid.  These are less hazardous because they won’t spill or leak acid if damaged, and these sealed batteries also recombine oxygen and hydrogen back into water during charging.  The lifecycle of an AGM deep cycle battery typically ranges from 4 to 7 years.  Deep cycle gelled and AGM batteries can get pretty big and might cost well over $1,000 each when new.

All lead-acid automotive and deep cycle batteries will eventually age or fail, and for a wide variety of reasons.  A normal automotive battery might age because lead dioxide flakes off the positive plate due to natural contraction and expansion during everyday discharge and charge cycles.  Shorts between plates, buckling of plates, loss of water, negative grid shrinkage, positive grid growth and positive grid metal corrosion can all cause a battery to fail.  Battery aging can be accelerated by fast charging, overcharging, deep discharging, high heat and excessive vibration.  Acid stratification, where weak acid sits at the top and concentrated acid at the bottom of an automotive battery, is a condition caused perhaps by a power-hungry car that is not driven enough to fully charge its battery.  Sulfation is likewise caused by undercharging or by allowing a lead-acid battery to self-discharge by sitting for a long period in an undercharged condition.   In a sulfated battery, hard lead sulfate crystals fill the pores and coat the plates.  In a few instances it may be possible to rectify sulfation, but beware of false claims and salesmen selling snake oil.

* It is interesting to note that a lead-acid battery does not require sulfuric acid as an electrolyte in order to work.   Alum (hydrated potassium aluminum sulfate) solutions work, and alkali or base solutions may work as well.   An evident superiority of sulfuric acid is that it acts as antifreeze by causing a significant freezing-point depression of water; alum solutions tend to crystallize as well as freeze.

Methanol fuel cell / NASA image


Fuel cells

Fuel cells are similar to batteries in that they convert chemical energy into electricity.  Like battery cells, fuel cells have anodes, cathodes and electrolytes.  The main difference between the two is that the chemicals are self-contained within a battery’s cell(s), but must be imported or fed to a fuel cell.  A continual supply of fuel, plus oxygen or another oxidizing agent, must be fed to the fuel cell to perpetuate its chemical reaction and electrical output.   Methanol, natural gas and hydrogen (often reformed from the first two) are the most commonly used fuel cell fuels.

Although “fuel cell technology” may seem like a new buzzword in the automotive industry, an Allis-Chalmers tractor was driving around under fuel cell power more than half a century ago.   A Welshman invented the first fuel cell in 1839, and below is one of his sketches.


Of the several types of fuel cells designed thus far, the chosen electrolyte (whether liquid or solid) determines the composition of the anode, the cathode and usually a catalyst as well.   Alkali fuel cells use an electrolyte of potassium hydroxide, operate around 350°F and require an expensive platinum catalyst to improve the ion exchange.   A proton exchange membrane fuel cell uses a permeable sheet of polymer as its electrolyte, works around 175°F and also requires a platinum catalyst.   Platinum catalysts are also required for phosphoric acid fuel cells, which use corrosive phosphoric acid as the electrolyte and work at about 350°F.   High-temperature salts or carbonates of sodium or magnesium are generally the electrolyte of choice in molten carbonate fuel cells; these work at perhaps a hot 1,200°F and employ a non-precious metal like nickel as the catalyst at both electrodes.   Hotter yet, solid oxide fuel cells require an operating temperature of about 1,800°F before the chemical reactions begin to work.  The electrolyte in one of these cells is frequently a hard ceramic compound of zirconium oxide, and the catalytic activity is enhanced by the complicated composition of its electrodes.
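The survey above can be collected into a small lookup table; the figures are the approximate ones given in the text, and the filtering helper is just an illustrative sketch:

```python
# Fuel-cell types from the survey above: electrolyte, rough operating
# temperature (Fahrenheit) and catalyst, as described in the text.
FUEL_CELLS = {
    "alkali": ("potassium hydroxide", 350, "platinum"),
    "proton exchange membrane": ("polymer membrane", 175, "platinum"),
    "phosphoric acid": ("phosphoric acid", 350, "platinum"),
    "molten carbonate": ("molten carbonates", 1200, "nickel"),
    "solid oxide": ("zirconium oxide ceramic", 1800, "electrode composition"),
}

def types_below(max_temp_f):
    """Fuel-cell types that operate at or below the given temperature."""
    return [name for name, (_, temp, _) in FUEL_CELLS.items() if temp <= max_temp_f]

print(types_below(400))  # the three platinum-catalyst, lower-temperature types
```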


Homemade Batteries

In a previous post, Luigi Galvani, Alessandro Volta, the voltaic pile and Benjamin Franklin’s coinage of the term “battery” were discussed.   The image above shows several ways to construct simplistic batteries.  Each of these examples exploits dissimilar metals and an electrolyte that can be either acid or alkali based.


In the image above, a potential and usable current should be created once an electrolyte is poured into the can, except for one problem: beer and soda cans are sprayed with a plastic polymer coating to prevent interaction of the beverage with the metal, and this coating would interfere with ion exchange.  In a battery cell, the current-carrying capacity (or power) is governed by the area of the electrodes, the capacity by the weight of the active chemicals, and the cell voltage by the cell chemistry.  While a stronger electrolyte might produce more current, it would also eat through the very thin wall of the aluminum can sooner.


The notion intended by the image above is that a PVC pipe holds an electrolyte (preferably a mild mixture of bleach and water) and electrodes of copper pipe and tin solder are used.  Obviously the anode and cathode must not make contact, but the closer together they are suspended, the better the ion exchange will be.   House wiring (called “Romex” by some American electricians) comes in both copper and aluminum versions and could also be applied in this fashion.


To prevent sacrificial damage of the electrodes when this type of primary cell is not in use, it would be beneficial to be able to remove the electrolyte.  The image above suggests a way to connect PVC pipes together so that electrolyte can be added or removed when necessary.   Eight cells would produce 12 V if their nominal voltage was 1.5 volts each.


The antique Daniell cell (above) probably could have gone without mention here, except that some artistic types might find it interesting.  Looking to eliminate hydrogen bubbles, Daniell (1836) came up with a battery cell that used two electrolytes rather than one.   Originally, solutions of copper sulfate (deep blue in color) and of zinc sulfate (or sulfuric acid) were separated by a porous barrier of unglazed ceramic (or of plaster of Paris, later used by Bird).  Operation of a single cell (two half-cells) worked fine until the porous barrier became clogged with deposited copper.   Later, a ceramic pot inserted inside a copper jar separated the two solutions; because of the flow of current, the ceramic eventually became coated with copper.  Still later variations of the Daniell cell included the Bird’s cell and the gravity or crow’s-foot cell.  In the gravity cell, the difference in specific gravity of the two solutions is all that keeps them separated, so the containers of such cells should not be jostled.  Gravity cells were the favored source of power for telegraph stations, especially in remote areas, for about 90 years; their zinc electrode resembled a crow’s foot, and the batteries were easily maintained by replacing simple components as needed.   Modern incarnations of the Daniell cell incorporate a “salt bridge” (either a glass tube filled with a fairly inert jellified solution of potassium or sodium chloride, or filter paper soaked in the same two chlorides).  Bridging the two separate containers, the salt bridge completes the circuit, allowing only ions to flow back to the anode.


Pomace wine


Making wine from fruit is very easy, usually much easier than making alcohol from grain.  In a previous post about yeast it was proposed that early man discovered, by almost unavoidable circumstance, how to make this alcoholic beverage.  Although the basic process of winemaking is simple, making a consistent product from batch to batch or from year to year is more difficult and requires some science.  The physiological ripeness of grapes or other fruit, the effect of differing yeast strains and the development of tannins as wine ages can become complex subjects indeed.  This post attempts to brush past the more subtle aspects of winemaking, yet still show the uninitiated novice that making a good wine can be a simple and rewarding task.  What will be referred to here as a “pomace wine” process seems to work well for white wine grapes and other fruit like peaches, plums and apricots.

“Must” is freshly pressed fruit juice which contains particles of skins, pulp, seeds and stems; these solids in the must are referred to as ‘pomace’.  The length of time that the winemaker allows the pomace to remain in the must can have a large influence on the final character of a wine.   The pigment and tannin content of a wine will be increased if the pomace is allowed to remain throughout primary fermentation.

This alternative pomace process differs from the more common practice of squeezing and separating the juice from the pulp before beginning fermentation.   While grapes are used in this example, the method is probably even more applicable to wines made from most any other type of fruit.  The advantages of this pomace wine method might become self-evident in terms of labor efficiency, in a more desirable color and flavor in the final product and in the conversion of more sugars into alcohol.   After fermentation the wine is normally separated from the pomace by “racking”, or siphoning only the clear wine from one container to another.   The leftover pomace will be rich in ethanol.  Water might be added to this residual pomace to make a second batch of wine, or these wet solids might be distilled to create a “poor man’s pomace brandy” like Grappa.  If the distillate is added back to the clarified wine then a “fortified wine” (like Sherry, Port or Madeira) is created.

Grapes are easy

Yeasts thrive in a slightly acidic environment.  For wine the ideal acidity is about 0.6%, which is roughly equivalent to pH 3.5.   Grapes generally come with close to ideal acidity for purposes of winemaking.  There are thousands of varieties of grapes and most will range between pH 2.80 and pH 3.84.   Fruits in general tend to be more acidic than vegetables.  Less acidic fruits like bananas and coconuts, however, would need to be amended with a little tartaric or citric acid prior to fermentation.  Acidity also comes into play later, during the clarification of a wine.  Cloudiness in a wine is the result of suspended, electrically charged proteins and polyphenols.   To clear haziness in a wine, periodic racking, filtration and ‘fining’ or ‘clarifying agents’ can be employed.   This potentially complicated topic will be approached a little later.

Aside from having a low pH, grapes have a high monosaccharide sugar concentration.  Grapes have an abundance of easily accessible glucose & fructose which allow the ‘sugar loving yeast’ Saccharomyces cerevisiae to quickly flourish and perform its magic.  By contrast a grain wort has complex sugars or starches which require a “cracking” into monosaccharide form, before production of ethanol can commence.

1 Wash


In the above photograph the grape clusters are dunked first in a mild Clorox (bleach / sodium hypochlorite) bath, next in a disinfecting sodium bisulfite solution and finally in a rinse of pure water.  This process rids the grape clusters of most insects, arachnids, bacteria and wild yeast.   Finally the grapes were separated from the stems.

2 Process


Next the grapes were juiced in a food processor.   Some sources will discourage the thought of processing grapes in a blender, for fear of releasing undesirable tannins from crushed stems and seeds.  In this case however the stems were tediously removed beforehand, and there is actually little probability of cracking individual seeds when the blending is done briefly and cautiously – just enough to liquefy the pulp.  Carefully controlled pressure must be applied in a commercial wine press as well – to avoid crushing the seeds.


Some winemakers might pour the must into a bag of cheesecloth to facilitate the easy removal of the pomace later.  Here though, the juiced pulp was simply poured into a sterilized fermentation bucket.  After the fermentation bucket was almost full, ¼ teaspoon of sodium bisulfite (a sulfur dioxide source) was mixed into the pulp and the lidded, rag-covered fermentation bucket left to sit for 24 hours.  This kills remaining bacteria and wild yeast, some of which reside naturally inside the fruit.  It is important not to completely fill the fermentation bucket.   Leave an airspace of 2 or 3 inches at the top to reduce the possibility of an overflow during fermentation.   Also, fermentation buckets like this have a 6 U.S. gallon capacity; the excess volume is usually needed to fill a 5 gal glass carboy after a racking or transfer that leaves unclear sediments behind.

3 Oxygenate 


After the 24 hour waiting period the sulfur dioxide will have dissipated, having been consumed by killing bacteria, trapping oxygen and reacting with aldehydes.   In the picture above the must has separated into sugar-rich juice at the bottom and lighter pomace at the top.

4 Inoculation

Almost any type of yeast can be used, but the choice will dictate the flavor profile of the wine.   Here a Canadian yeast known as ‘Lalvin 71B-1122’ was used, although there are several other fine brands of commercial wine yeast to choose from.  While a Champagne yeast would produce more alcohol, this strain was picked because of its lower alcohol tolerance (about 14%).  By not consuming all the sugar from the grapes, this yeast is expected to create a less dry and softer wine, to preserve or enhance the fruit flavor and to add fruity esters.


Normally one could just sprinkle the yeast package over the must and stir it in, and with luck wine will be produced in about a week.  In this case however a yeast starter was created and used.   Creating a so-called ‘yeast starter’ is simply a means of ‘proving the yeast’ and of ensuring a vigorous fermentation.   A couple of cups of juice were scooped out and the yeast added to that, in a glass quart jar covered with a paper towel – which allows oxygen to pass but protects against the introduction of airborne bacteria and wild yeast.  With sugars to feed on, the number of yeast cells in the starter can be expected to double every 3 hours.  In this instance 3 tsp. of nutrient and 2.5 tsp. of pectic enzyme were added to the starter solution along with the yeast.
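The doubling claim above can be sketched with a little arithmetic.  This is a hypothetical illustration only – real starters slow down as sugar and oxygen run out, and the packet cell count used here is an assumed round number:

```python
def yeast_population(initial_cells, hours, doubling_time_h=3.0):
    """Ideal exponential growth: N = N0 * 2**(t / doubling_time)."""
    return initial_cells * 2 ** (hours / doubling_time_h)

# A dry yeast packet holds very roughly 2e11 viable cells (assumed figure).
# Left overnight (12 hours), the ideal count grows 16-fold:
print(yeast_population(2e11, 12))  # 16 times the starting count
```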


* Pectic enzyme or pectinase breaks down the complex and stubborn polysaccharides (long chained sugars) found in pulp and skins. Pectic enzymes can also improve fining and filtering operations of high-pectin wines.

* Pectin is the jelly-like matrix which helps cement plant cells together.  It is a structural polysaccharide contained in the primary cell walls of plants.  Fruit ripens and becomes softer as the enzymes pectinase and pectinesterase break pectin down.  Pectin acts as a soluble dietary fiber which traps carbohydrates and binds to cholesterol in the gastrointestinal tract. Pectin separated and concentrated from citrus fruit is used as a gelling agent in jams and jellies.

* Yeast nutrient provides the vitamins, amino acids, nitrogen, potassium and phosphorus that yeast cells need to grow well.   Contents of packages labeled “Yeast Nutrient” may include: dead yeast, folic acid, niacin, diammonium phosphate, calcium pantothenate, magnesium sulphate and thiamine hydrochloride.   Homemade nutrient might be made from ammonium or potassium sulphate and ammonium or potassium phosphate, plus a few vitamin B1 pills.   Plain un-sulfured molasses is full of vitamins and minerals.  In laboratories a drop of molasses water is commonly added to cultures in Petri dishes to stimulate yeast growth and reproduction.

While sodium bisulfite powder was used here both as a sterilizing agent and as a source of sulfur dioxide for the wine, Campden tablets are perhaps more popular.  Potassium or sodium metabisulfite Campden tablets are also used as an anti-oxidizing agent or to remove chlorine from water.  What Campden tablets can and can’t do


By no means is it necessary for a winemaking novice to purchase or use a hydrometer.   The use of one though offers the brewer a little more understanding and control over the process of fermentation.   Hydrometers measure the specific gravity of liquids, and different versions can be found to measure the amount of cream in milk, sugar in water, alcohol in liquor, water in urine, antifreeze in car coolant or sulfuric acid in a car battery.  Simply put, for winemaking purposes here: water containing sugar is denser than pure water, and pure water is denser than ethanol.  In the picture above, pure water in the beaker should read 1.000 but the fresh grape juice reads a denser specific gravity of about 1.070.   This reading indicates a potential alcohol by volume (ABV) between 9 and 10% once the sugars are consumed by fermentation.  As fermentation progresses the hydrometer will sink deeper in each sample, eventually reading less than the density of pure water.
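The jump from a 1.070 reading to “between 9 and 10%” comes from a common homebrew rule of thumb, ABV ≈ (OG − FG) × 131.25.  A minimal sketch, assuming the wine finishes near the dry range quoted later (a final gravity of about 0.995):

```python
def potential_abv(original_gravity, final_gravity=0.995):
    """Homebrew rule of thumb: ABV% is roughly (OG - FG) * 131.25."""
    return (original_gravity - final_gravity) * 131.25

print(round(potential_abv(1.070), 1))  # -> 9.8, i.e. between 9 and 10%
```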

5 Fermentation

Yeast cells reproduce in an aerobic (with oxygen) environment but create ethanol in an anaerobic (without oxygen) environment.   In this instance the fermentation bucket was lidded but allowed to breathe for another 24 hours before an S-shaped bubble airlock was fitted to the bung-hole.   Within 5-7 days about 70% or ¾ of the fermentation should be accomplished.   At this point (or when the specific gravity reads between 0.990 and 0.998) the young wine should be transferred to another container, leaving the pomace and sediments behind.   Either fresh water or additional fruit juice (if extra was acquired and refrigerated) should probably be added to fill the secondary container.  This step is intended to reduce oxidation by limiting the amount of oxygen in contact with the wine.   Adding water weakens the wine, however, while adding new juice might require the addition of more sulfites (which would stun the yeast).  The wine should be allowed to rest in the secondary for another 4 to 6 weeks or until it becomes clear.  At this point the wine can be bottled.

Advanced topics

Sulfites are added to wine at the time of bottling to keep it from spoiling or turning to vinegar later.   You don’t want to add too much sulfite to your wine, however, because it has an obvious smell and taste.  Some people have allergic reactions to sulfites but in general, health concerns regarding sulfite levels in wine are unsettled.  The following link discusses how to accurately judge the proper sulfite level.  “Should I add Campden tablets each time I rack my wine and how do I measure the level of sulfite in my wine?”

This link can be ignored by the winemaking beginner but it is a good source of information.  The root url (winemaking.jackkeller.net) leads to a fairly thorough homepage dedicated to winemaking.   Winemaking Additives and Cleansers

White wines will generally clarify sooner than red wines.  Racking is the preferred method for clarifying wine, but when haziness in the wine persists, ‘fining’ or ‘clarifying agents’ can be employed.   Sparkolloid, isinglass, egg albumen and gelatin are examples of positively charged finings, whereas bentonite and Kieselsol are negatively charged.   This link provides more information about fining agents.


In conclusion, making wine with the pomace rather than without it is an alternative method which can offer several advantages.   First, this method does not require a grape press or an antique food mill or grinder.  The process also offers options for modifying a wine’s flavor and color profile which would not be available with the press method.   The pomace, once separated from the wine, can be re-hydrated to make a second wine, or the intrepid individual might choose to produce a fortified wine or pomace brandy from these normally discarded solids.



Antennas (simple radio #2)

* Note to self:  The time for a new post is long overdue but it is not as though I haven’t had other distractions to keep me occupied.  Last week for example I had to chase the same bear out of camp three separate times during the night.  The next morning it was determined that the bear had confiscated a roll of sausage, a stick of butter, a box of cookies and a bag of marshmallows.


Generally, any antenna that is used to receive RF (radio frequency) energy is capable of adequately transmitting that same RF.   Sprouting from the Italian word for the long central pole supporting a tent, “antenna” entered radio vernacular sometime after 1895 when Marconi (camping in the Alps) supported his radio’s aerial from such a pole.   Aerial and antenna are usually synonymous, and both are simply transducers – implements which convert one type of energy into another.   The word “aerial” however is sometimes used to refer only to a rigid vertical transducer.

* Antennae is a seldom used plural form of the noun – antenna, and might most frequently be encountered when discussing bugs.  Depending upon the type of insect, antennae might be used to feel, hear, smell, or even to detect light.  Apparently male mosquitoes employ their antennae to hear female mosquitoes from as far as ¼ mile (400m) away.

Radio antennas are thought of as being directional or omni-directional.   A directional antenna radiates in, or receives from, one direction more than any other.   A vertical rod or radio tower is nominally omni-directional, radiating equally in all horizontal directions, while a hypothetical isotropic antenna would radiate equally in every direction.  No aerial is perfectly isotropic, however.   In the case of a vertical tower there is a blind cone or null lobe straight up and another straight down, where radiation is not sent and reception is absent.   In the same fashion, no antenna is perfectly directional.  A pictorial depiction of a directional antenna’s radiation pattern usually shows particular zones as elongated lobes.  There are main lobes, back lobes, side lobes and null lobes in a radiation pattern.

Gain is a measure of an antenna’s directivity and efficiency.   It is the ratio of a directional antenna’s intensity relative to that of a hypothetically ideal isotropic antenna.  A low-gain antenna sends or receives signals from several directions while a high-gain antenna is much more focused.   Both types have their advantages.   A high-gain antenna may need to be carefully aimed or pointed towards its target to work.   That achieved, a high-gain antenna has a longer range than a low-gain type.   It’s a matter of “conservation of energy”; less energy is wasted by radiating in useless directions.   Modern household satellite dishes for TV reception are examples of high-gain antennas.   Antennas on cell phones and Wi-Fi equipped computers however are low-gain types, which enables them to receive signals from many directions.
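Antenna gain is usually quoted in dBi – decibels relative to that hypothetical isotropic radiator.  The conversion is a one-line logarithm (the example ratios below are illustrative numbers, not figures from this post):

```python
import math

def gain_dbi(power_ratio):
    """Convert a linear gain ratio (relative to isotropic) to dBi."""
    return 10 * math.log10(power_ratio)

print(round(gain_dbi(2), 1))     # doubling the intensity is about 3 dBi
print(round(gain_dbi(1000), 1))  # a 1000x ratio is 30 dBi
```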


The parabolic shaped antennas used for satellite TV and radar are usually associated with microwave frequencies.   The first parabolic antennas were constructed, however, over 120 years ago when Heinrich Hertz used them to prove the existence of electromagnetic waves.   The dish or parabolic shaped element can be made of mesh, wire screen, sheet metal or mirror.   The dish is only a passive device; a reflector that collects signals and bounces them towards the active (cable connected) feed.   Monstrously huge parabolic antennas are used for radio telescopes.   Radio telescopes can determine the composition of molecular clouds in space because when excited, individual molecules rotate at discrete speeds and emit radio energy as they do so.   Carbon monoxide likes to emit at 230 GHz, for example.   These telescopes can be used to study all sorts of things:  black holes, radio-emitting stars, radio galaxies, quasars, pulsars, gamma-ray bursts, supernovas and so on.   They can be used to track satellites, do atmospheric studies or receive radio communications from distant traveling spacecraft like Voyager 2.
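For a dish, the textbook gain estimate is G = η(πD/λ)², where D is the dish diameter, λ the wavelength and η an aperture efficiency.  A sketch of that formula – the 0.6 efficiency used here is a commonly assumed value, not a figure from this post:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def dish_gain_dbi(diameter_m, freq_hz, efficiency=0.6):
    """Textbook parabolic gain G = eff * (pi * D / wavelength)^2, in dBi."""
    wavelength = C / freq_hz
    gain = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10 * math.log10(gain)

# A 25 m dish (the VLA dish size mentioned below) observing at 5 GHz:
print(round(dish_gain_dbi(25, 5e9)))  # about 60 dBi
```

The squared D/λ term is why bigger dishes and shorter wavelengths pay off so quickly.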

*  The VLA (Very Large Array) radio astronomy observatory is located in a remote area of N.M., just east of Pie Town, N.M.  The array is made of 27 independent parabolic dishes that stand about 10 stories high (82’ or 25m) and are visible from space as little white dots.   Each independent dish weighs 209 metric tons and is mounted on a robust rail system (doubled – two parallel sets of standard gauge tracks) so that it can be moved.  The rails are configured in a “Y” shape.  To focus on an object or area in space the 27 dishes expand from a minimum of 600m at center to a maximum baseline radius of 22.3 miles.  These antennas can listen to a large chunk of the radio spectrum (from 74 MHz to 50 GHz / wavelengths 400 cm to 0.7 cm).  Computers are used to correlate the data from each dish into a single map; the VLA observatory itself is an “interferometer”.  Occasionally the VLA is brought online to link with other radio telescopes around the country to form an even larger (5,351 mile) baseline called the VLBA (Very Long Baseline Array).  These other antennas are located in Brewster, WA, Kitt Peak, AZ, Los Alamos, N.M., Owens Valley, CA, Fort Davis, TX, North Liberty, IA, Hancock, N.H., Mauna Kea, HI, and St. Croix, U.S. Virgin Islands.  On occasions when radio telescopes in Arecibo, Puerto Rico, Green Bank, WV, and Effelsberg, Germany join in, the whole affair is called the High-Sensitivity Array.


Phased array radar antennas like the flat panel above actually house many small, evenly spaced aerials.  The phase of the signal to each individual aerial is logically controlled, resulting in a collective beam from all the little aerials that can be amplified and focused in a specific direction almost instantly.   Quicker and more versatile than mechanically rotating antennas because they require no movement, phased arrays are also more reliable and require little maintenance.   Limited phased array radars have been around for 60 years, but recent improvements and affordability in electronics have made them more commonplace.   Most new military radars being built today are phased array systems.

* RADAR is an acronym coined during WWII by the U.S. Navy, from “Radio Detection And Ranging”.  Before that however, the British were calling the same thing RDF (Radio Direction Finding).  The most common bands used for radar are microwave bands (at the upper end of the radio spectrum between 1 GHz and 100 GHz – the L, S, C, X, Ku, K and Ka bands).  Radars used for very long-range surveillance however might use longer VHF frequencies starting at 50 MHz or UHF frequencies between 300 and 1,000 MHz (1 GHz).


Omitting the simple aerial, some commonly encountered antenna shapes are shown above.  The most basic antenna type perhaps is the “quarter wave vertical” (where the length of the aerial is ¼ of the targeted wavelength).   The simplest and most commonly encountered antenna however is probably the “dipole”.   A dipole antenna is essentially just two elevated wires, pointing in opposite directions.   A dipole is fairly omni-directional unless its axis is parallel to the target emission.  A monopole antenna is formed when one side or one half of a dipole is replaced with a ground plane that is perpendicular, or at a right angle, to the remaining half.   A whip antenna correctly installed on a car, for example, uses reflected radiation from the automobile’s body (the ground plane) to mimic a dipole.  In this instance the monopole will have a greater directive gain and a lower input resistance.
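Since element lengths keep coming up, here is the quarter-wave arithmetic in one place.  In practice antennas are cut a few percent shorter than this free-space figure (end effects), and the CB frequency below is just a hypothetical example:

```python
C = 299_792_458.0   # speed of light, m/s
FT_PER_M = 1 / 0.3048

def quarter_wave_ft(freq_hz):
    """Ideal free-space quarter-wavelength, in feet."""
    return C / freq_hz / 4 * FT_PER_M

print(round(quarter_wave_ft(27.185e6), 1))  # CB channel 19: about 9 ft
```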

Grounding provides a reference point from which changes in waveform can be detected.  A radio tower constructed to transmit at AM frequencies, for example, must be grounded or be compensated for lack of ground, and its height or element length is determined by the wavelength.  Certain soils allow good grounding to earth but others do not.  In the absence of a good ground, an antenna can simulate one with added drooping radials (additional elements hanging at 45°).  A typical Marconi antenna is a perpendicular ¼ wave aerial with a proper ground (perhaps the soil is moist, marshy, full of iron ore or otherwise conductive).  In this case the ground acts to provide more signal, adding the missing quarter to mimic a half wavelength antenna.   Often two or more quarter wave antenna towers will be seen in the same vicinity.  Usually a group of similar towers like this forms a directional array that transmits greater power in a certain direction.  Since U.S. AM broadcast wavelengths range between about 1,820 ft. (540 kHz) and 580 ft. (1,700 kHz), it would be prohibitively expensive to erect a full length or even half length vertical tower to hold up the element.  For economic reasons some large transmitting antennas are therefore laid out and polarized in the horizontal plane.

The folded dipole is a variation of the simple dipole.  Folded dipoles are about the same overall length as a standard dipole but provide greater bandwidth, have higher impedance and can often provide a stronger signal.

Loop antennas are generally used to conserve space.  The old TV set top “rabbit ears” often incorporated a loop in addition to the two telescoping, adjustable dipole elements.  Loops respond to the magnetic field of a radio wave, not the electric field.  A radio wave induces very small currents on each side of the loop, and the difference between the two must usually be amplified before any useful signal is fed to the receiver.   Loop antennas are very inefficient.  One useful property of the loop however is that it is very directional; it picks up signals when positioned in one axis, but not another.  Most direction finding radios incorporate a loop antenna.   A loop by itself can determine the axis of a signal’s radiation but not forward from backward.   Direction finding radios were, and are, used in aircraft and in boats or ships at sea to navigate with.  Modern civilian aircraft usually have an ADF (Automatic Direction Finder) box that is attached to a loop and sensing antenna combination.  In earlier days the loop was manual (turned by hand) rather than automatic.  The non-directional, sensing aerial on a small aircraft might be a simple wire running from the tail forward to the cabin.   The ADF’s electronics compares the two antennas (directional and omni-directional) to determine the signal’s phase (+/-) and therefore forward from backward.

Loopstick antennas (using ferrite rods) found in many small AM radios are actually examples of loop antennas.  Today “DX-ers” and radio hams might construct a shielded loop antenna, wrapping hundreds of feet of wire onto a spool.  Such an antenna has the advantage of containing a half-wave or even a full-wave element in a small space, but it is directional and introduces a new set of technical complications.

The Yagi-Uda antenna was invented by two Japanese scientists back in the late 1920’s.  Early airborne radar sets in WWII night fighters used Yagi antennas, and they were employed by almost everyone except the Japanese.  Yagi antennas have several parallel elements, but only the driven dipole is actively connected; the unconnected parasitic elements (directors and reflectors) help to improve gain and directivity.  The illustration shows a horizontally polarized, dual band antenna, once popular for analogue TV reception.  The whole thing is a combination of three separate Yagi antennas.  The longer elements are for VHF reception.  The shorter, closely spaced elements on the left half of the antenna were for UHF reception.  The shortest elements on the straight tail are directors and reflectors that act to improve the UHF gain and directivity.  The next longest elements (mounted on the vertical “V”) are UHF half-wave dipoles.  The longest elements on the right would be half wave dipoles, arranged in a “phased array” to pick up multiple channels.  Wavelengths of the FM and VHF TV bands are somewhere between 11’ and 9’ long.  The longest single element in this example would be about 5.5 ft.

* Beware of salesmen selling snake oil.  There is no such thing as a digital TV antenna.  An antenna does not care how the wave is modulated; it does not distinguish between analogue and digital signals.  

* Although analogue TV broadcasting ended in the US in 2009, and part of the old UHF TV band has been reallocated (likely to carriers like AT&T or Verizon), the front half of these old antennas is still useful for FM and HDTV reception if a local broadcaster is still transmitting on his legacy bandwidth.  The FCC is eager to grab this bandwidth and sell it to cell phone companies.

Horn shaped antennas are commonly used at UHF and microwave frequencies.   Parabolic antennas (where the dish itself is just a reflector) often use a horn as the ‘feeder’.   Advantages of horn antennas include simplicity, broad bandwidth, fair directivity and low standing wave ratios.  A few large horn antennas were built in the 1960’s to communicate with early satellites or for use as radio telescopes.

Small antennae


Radio-Frequency Identification (RFID) tags are growing alarmingly in popularity and in sophistication.  This unregulated and potentially invasive technology broadcasts identification and tracking information by using radio waves.  RFID tags generally come in three types these days:  active, passive and battery assisted passive.  New technology has enabled the miniaturization of these devices to a point where individual ants can host their own personal transmitters.  Many pets and livestock are either internally or externally tagged with RFID chips.  At least one version of a subdermal microchip implant (an RFID transponder encased in silicate glass) about the size of a grain of rice (11mm x 1mm) was manufactured for use in humans until the year 2010.

A passive RFID tag requires an external electromagnetic stimulus before it can modulate its radio signal.   An active tag carries its own little battery and therefore transmits its signal autonomously.  A biologist might harness some animal like a sea turtle or wolf with this type of tag; it would broadcast for only a limited time but over a greater distance.   A battery assisted passive (BAP, or semi-active) RFID tag sits dormant until stimulated, and its battery helps boost the range of the tag’s radio signal.

Even a simple, cheap passive RFID tag can hold up to 2 Kb of memory.  These contraptions use a simple LC tank circuit (a resonating inductor and capacitor).  Their antennas are designed to resonate within a certain radio spectrum.  Usually an RFID transponder resonates anywhere between 1.75 MHz and 9.5 MHz, with 8.2 MHz being the most popular frequency.   Usually RFID chips work within traditional ISM (Industrial, Scientific and Medical) frequencies set aside for non-communications purposes.    ISM occupies reserved niches in the LF, HF, UHF and microwave frequencies that RFID tags can and do exploit, often without the need for a license.  The chip’s antenna picks up electromagnetic radiation from a reader or detector and converts it to electrical energy, which powers the microchip, which then reflects or broadcasts any information held in memory back over the same antenna.
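The resonant frequency of that LC tank follows the standard formula f = 1/(2π√(LC)).  The component values below are hypothetical, chosen only to land near the 8.2 MHz frequency quoted above:

```python
import math

def resonant_freq_hz(inductance_h, capacitance_f):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# A roughly 4.7 uH coil with a roughly 80 pF capacitor (assumed values):
print(round(resonant_freq_hz(4.7e-6, 80e-12) / 1e6, 1))  # about 8.2 MHz
```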

* Passive tags, when used for electronic article surveillance (EAS), are usually deactivated by frying the capacitor with an overload of voltage induced by a strong electromagnet at the checkout counter.  A few seconds inside a microwave oven will also destroy most RFID chips.  Many retail items are “source tagged” at the point of manufacture, with the RFID device hidden within the packaging.  Since not every vendor employs the same type of EAS system (or perhaps any at all), alarms can go off when customers carry or wear these still-activated tags into other stores.  Some stores may deliberately not deactivate these tags; the motive of building a customer shopping database has been suggested.


Big & rare

Up until 2010, when a certain skyscraper in Dubai was completed, the tallest manmade structure ever built was a half-wave radio mast.   Standing 646.38 m (2,120.6 ft) above the ground and perched upon 2 meters of electrical insulator, this tower broadcast longwave radio (at 227 kHz and later 225 kHz) to all of Europe, North Africa and even parts of North America.   It was used by Warsaw Radio-Television (Centrum Radiowo-Telewizyjne) from 1974 until it collapsed in 1991.
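The “half-wave” description checks out with the basic λ = c/f arithmetic (real masts run a little shorter than the free-space figure because of end effects and loading):

```python
C = 299_792_458.0  # speed of light, m/s

def half_wave_m(freq_hz):
    """Ideal free-space half-wavelength, in meters."""
    return C / freq_hz / 2

print(round(half_wave_m(227e3)))  # about 660 m, close to the 646 m mast
```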

The notorious ‘Woodpecker’ radio signal interfered with worldwide commercial and amateur communications and with international broadcasting stations for about 13 years.  Transmitting with about 10 megawatts of power from an antenna that was about 50 stories high and a third of a mile long (150m tall x 500m wide), the original Duga-3 antenna was nicknamed “Woodpecker” for the interfering tapping sound that it made.   It was using protected frequencies set aside for civilian use.   Operating from 1976 to 1989, the Woodpecker now resides within the 30 kilometer diameter exclusion zone surrounding the Chernobyl power plant.  The Chernobyl disaster occurred in April 1986, but apparently the Woodpecker continued to operate for another three years.

There has been varied speculation about the purpose of the Duga-3 broadcast, including intentional broadcast interference, mind control experiments and weather manipulation.   These speculations are not without precedent.   The most plausible explanation of the Woodpecker signal, however, is that it was simply a Soviet over-the-horizon (OTH) radar intended to detect ICBM’s at long range by bouncing its signal off the ionosphere.  Apparently the Woodpecker was arrayed with other OTH systems like Duga-2 (also in Ukraine) and a second Duga-3 built in eastern Siberia, which points toward the Pacific.

Here are a couple of videos filmed at this antenna which should provide an appreciation for its scope and scale.

Climbing up the Russian Woodpecker DUGA 3 Chernobyl-2 OTH radar


Base jumpers sneaking into the ‘Zone of Alienation’ to jump from the antenna.



* During the ‘Cold War’ the term “international broadcasting” described broadcasts pointed at or intended for foreign audiences only.   For 60 years now, RFE/RL (Radio Free Europe (RFE) and Radio Liberty (RL)) have been spreading anti-communist propaganda and psychological warfare behind the ‘iron curtain’ using shortwave, medium wave and FM frequencies.  It would stand to reason that the Soviets might have wished to retaliate against or block such popular broadcasts.   Although mind control by radio signal seems very far-fetched, the Soviets are accused of having, for many years, focused microwave radiation toward the U.S. embassy in Moscow.    Perhaps the Soviets were attempting to slowly cook the Americans.  A more feasible explanation is that the microwave energy was being used to stimulate passive covert “bugs” hidden within the embassy.  In 1952 such a covert listening device, now known as a passive cavity resonator, was discovered inside the U.S. Ambassador’s Moscow residence.  This infamous creation, known as “The Thing”, was designed by the Russian engineer and physicist Lev Sergeyevich Termen and performed its espionage unnoticed for 6 or 7 years.

* Weather manipulation using radio is theoretically feasible and supporting information will be included shortly.

Extremely low frequency (ELF) is an electromagnetic radiation range with frequencies from 3 to 30 Hz and wavelengths between 100,000 and 10,000 kilometers (62,137 to 6,213 miles).   Since ELF frequencies can penetrate significant distances into the earth and seawater, they have been used by the U.S., Soviet/Russian and Indian navies to communicate with submarines at sea.   The British and French apparently also constructed and experimented with ELF antennas.   Because of the extreme wavelengths, sending antennas need to be very large and the few examples that do exist are buried in the ground.  ELF transmissions were or are limited to a very slow data rate (just a few characters per minute) and are usually one-way, owing to the impracticality of a submarine trailing an aerial long enough to send a reply.   The U.S. Navy transmitted ELF signals between 1985 and 2004 from one antenna located in the fields of Wisconsin and another located in Michigan.   Due to environmental impact concerns involving everything from farmers worried over their livestock’s behavior to disoriented whales beaching themselves en masse, the U.S. Navy abandoned its ELF effort.  They use something better now anyway.

* Miners and spelunkers can use technology called through-the-earth communications which utilizes the (higher than ELF) ultra-low frequency (ULF) range between 300–3,000 Hz.  

Plasma is conductive, ionized air or gas.  Using an array of antennas attached to powerful radio transmitters, ionospheric heaters are used to study and modify plasma turbulence and to affect the ionosphere.   Several of these ionosphere research facilities already exist (in Norway, Russia, Alaska, Japan and Puerto Rico) and are operated by organizations like SPEAR (Space Plasma Exploration by Active Radar), EISCAT (European Incoherent Scatter Scientific Association) and HAARP (High Frequency Active Auroral Research Program).   By heating or exciting an area of the ionosphere, air can be made to rise or to act as a reflector from which other radio transmissions can be bounced.  Theoretically, then, ionospheric research could allow (or perhaps already does allow) for enhanced radio communications, surveillance, long distance communications with submarines, weather modification and perhaps eventually even the transport of natural gas from the Arctic without the use of pipelines.  The feasibility of altering the course of the jet stream or of steering a hurricane seems very real.  Readers wishing to learn more about this subject can find some information on the Internet.   They could start by following these two links:

Ionospheric Heaters Around the Globe – HAARP isn’t Lonely

Weather Warfare




Nomenclature in the world of knots is inconsistent in any language.  Within English, some would stipulate that the tangles of cordage we commonly call knots should refer only to those things that are neither bends nor hitches.   Ideally a bend should join two ropes or lines together, whereas a hitch should attach a line to a post, ring, rail or other object.  In general however, the term knot is used to encompass all three.


Some fundamental knot component terms include “working end” (or “tag end”), “standing line”, “bight” and “loop”.  In a bight the end and the standing line are parallel, but in a loop the working end crosses over the standing part.  Other knot terminology might include: braids, bindings, coils, dog, elbow, friction hitch, lashing, lanyard, locking tuck, messenger, nip, noose, round turn, plait, seizing, sling, splice, stopper, trick or whipping.  A knot that has a draw loop is said to be a slipped knot, which is not the same thing as a proper slip knot.  When tying shoelaces, for example, two draw loops or bights finish the knot and provide easy untying.


The simplest knot of all is the “overhand knot”.  Every knot, once tied in a line of rope or cordage, reduces the static tensile strength or average breaking strength of that line when tension is applied.  The proportion of a knotted cordage’s breaking strength relative to its unknotted strength describes a given knot’s “efficiency”.  Efficiency is about the only common, measurable, descriptive term shared between knots, bends and hitches.  Most knots have an efficiency between 40% and 80%.  The overhand knot (ABoK #514) has an efficiency rating of 50%, which is poor because when stressed it reduces the strength of a line by half.
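Efficiency is simple arithmetic.  As a sketch (the 1,000 lb line rating here is an assumed, illustrative figure, not from any manufacturer’s table):

```python
# Knot "efficiency": knotted breaking strength / unknotted breaking strength.

def knotted_strength(line_strength_lb, efficiency):
    """Breaking strength of a line after a knot of the given efficiency is tied in it."""
    return line_strength_lb * efficiency

# A hypothetical line rated at 1,000 lb, with a 50%-efficient overhand knot tied in it:
print(knotted_strength(1000, 0.50))  # 500.0 -- the knotted line now fails at half its rating
```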

Several knots we are familiar with are ancient.  Long ago prehistoric fishermen were using knots to make gill, casting and trawling nets. In addition to practical knots, the ancient Tibetans, Chinese and Celts contemplated some very intricate and elaborate decorative knots.

There is by no means an authoritative categorization or listing of all knots.  The closest thing to an authoritative list of working knots, and one growing in acceptance, might be Clifford W. Ashley’s illustrated encyclopedia of knots.   First published in 1944, The Ashley Book of Knots lists and numbers more than 3,800 basic knots, but this does not even come close to enumerating all the variants and ornamentals in existence.  There is a lively online forum on almost every subject related to knots, hosted by the International Guild of Knot Tyers.  There is also a quick and handy online knot index which features images for some of the more common working knots.


* A tangential detour: Knot Theory

Lest the reader assume that knots are an overly simplistic or entirely trivial subject, they should realize that the future advancement of computing may rely upon an underlying study of knots.  The speed of the fastest computers is approaching a limit set by the finite speed at which electrical signals can travel.  Any increased computing speed in the future may depend upon quantum field theory and statistical mechanics; mathematics that sprouted from a branch of topology known as “knot theory”, the mathematical study of knots.  Knot theory is often applied in geometry, physics and chemistry. Topology is concerned with those properties that don’t change when an object is continuously stretched, twisted or deformed.  Topology involves set theory, geometry, dimension, space and transformation.  It studies spatial objects (objects that occupy space), the space-time of general relativity, knots, fractals and manifolds.  A mathematical knot is one whose ends are joined together to prevent it from becoming undone.  Inspired by real world knots, the founders of knot theory were concerned with knot description and complexity.  They created tables of knots and links (knots of several components entangled together).  Over 6,000,000,000 knots and links have been tabulated to date, and tabulation on that scale is obviously a task for a machine and not a human.





A surprising number of people are unfamiliar with knots or cannot tie a decent one, when such a skill can occasionally prove to be quite handy.  A repertoire of only a dozen or so well chosen knots will stand the survivalist or Boy Scout in good stead with his contemporaries.  An effective working knot should have practical applications, it should be simple to tie and easy to remember, and in most instances it should be easy to untie.  My subjective list of the six most important and effective working knots includes the slipped slipknot, bowline, figure-8 (or Figure of Eight Loop), clove hitch, prusik knot and the trucker’s hitch.   The clove hitch and prusik knots are fundamental in that several useful variations have been built upon them.


The simple slipknot tightens as the hauling end is pulled and can become very tight and difficult to untie.  By “slipping” the knot with a bight or draw loop however, even the tightened knot will fall apart after a stout yank of the tag end.  This simple knot is appropriate in many applications including tying a hammock to a tree or fastening a horse halter to a post or rail so that it can be unfastened quickly in an emergency.


Many knots including the venerable bowline can be “slipped” in such a fashion.  For those people who encounter a mental block when trying to remember how to tie a bowline, there is an easily remembered right-hand–twist method to use.


There are many instances when a loop in the middle of a line is called for.  As an example, for safety a mountain climber might tie himself to a middleman’s knot in the center of a climbing rope.  While a simple overhand loop might suffice in this application – it could become difficult to untie after being stressed.  The addition of another twist to the overhand loop results in the so-called Figure of Eight loop which is probably more efficient and much easier to untie.  Some might consider the Figure of Eight loop (or Flemish loop) preferable to comparable mountaineering knots like the Alpine Butterfly, merely because it is simpler and easier to remember.


The granddaddy of all “ascending knots” or “friction hitches” is the venerable Prusik knot, created early in the twentieth century by and named for the Austrian mountaineer Karl Prusik.  The Prusik can be doubled (with 6 coils rather than 4) to produce more traction.  The younger Klemheist, also shown in the illustration below, is popular with modern day climbers.


Few good (simple) ascending knots for mountaineering can be tied with nylon webbing.  The Heddon and double Heddon knots shown next are exceptions that seem appropriate.


The Trucker’s hitch is an important and utilitarian cinching knot that is actually a compound construction of two other knots.  Disregarding friction, the Trucker’s hitch can tightly strap down loads on trucks, trailers, boats and pack saddles because it applies a 2:1 mechanical advantage.  The standing line employs a ring, carabiner or middleman’s loop while the cinch is tightened with the tag end.  After the cinch is drawn tight, the pressure is held by pinching the bight with one hand before finishing with a simple slipped overhand knot.
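The ideal 2:1 advantage mentioned above works out like this (friction at the loop, which is substantial in a real trucker’s hitch, is ignored, and the figures are illustrative):

```python
# Ideal (frictionless) mechanical advantage of the trucker's hitch:
# the tension cinched onto the load is twice the pull on the tag end.

def load_tension(pull_lb, advantage=2):
    """Tension applied to the load for a given pull on the tag end, ignoring friction."""
    return pull_lb * advantage

print(load_tension(50))  # 100 -- a 50 lb pull cinches the load at roughly 100 lb
```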


The final knot (of the six most crucial selected here) is the excellent, general purpose ‘clove hitch’.  It is mentioned last because many admirable variations have been conceived from it, and illustrations of a few of those will follow.


Excellent for sacks and trash bags, the ‘constrictor knot’ differs only slightly from the clove hitch but holds more firmly.  It can be hard to untie unless intentionally slipped with a draw loop.



When wrapped around a tent stake the “taut line hitch” below is useful for tensioning a tent guy line.  To the right of that is a useful clove hitch variant that has no recognized common name or ABoK number.  Tentatively referred to as the wireline hitch here, the grip of this variant is superior to the taut line version.



A few more knots deserving honorable mention

Strong and efficient, the ‘Palomar knot’ is useful for attaching large hooks, lures or sinkers to a fishing line.


The “Surgeon’s loop” is another simple and effective knot for attaching small lures or flies to a tiny monofilament fishing line.  Knots like the surgeon’s and Palomar are cut away rather than untied after they serve their purpose.


The “Ossel hitch” is an ancient knot; no one knows how old. It is or was a simple, secure and effective knot used to suspend gill nets from a larger line.  Strangely the ossel hitch is not recognized in Ashley’s encyclopedia.  This may be because “ossel” is a Scottish word and was not that familiar when Ashley illustrated his book.  There is a similar but different knot in the encyclopedia known as the “Netline Knot” (ABoK #273) that hails from Cornwall on the southern coast of England.


This simple Anchor Bend variant below is easily remembered and is much more secure than the parent knot.


Finally, this old page construction below introduces a couple of utilitarian gripping hitches.




This is a blog post and not an encyclopedia, so most knots cannot be shown.  Returning to the off-topic tangent of knot mathematics, we come to a group of abstract ideas known as graph theory, which foreshadowed or laid the foundation for topology.  The father of graph theory was the Swiss mathematician and physicist Leonhard Euler, who discussed a notable historical problem in mathematics called “The Seven Bridges of Konigsberg”.   The unsolvable problem was to walk through the city, crossing each bridge once and only once.  What is called Euler’s solution became the first theorem of planar graph theory.


* Back in 1735 the seven bridges of Konigsberg were real, and that city was part of the Kingdom of Prussia, bordering Poland on the Baltic. Konigsberg, Prussia became Kaliningrad, Russia (54°42’12” N, 20°30’56”E) shortly after WWII. After the breakup of the Soviet Union, Kaliningrad and its surrounding province became physically separated from the rest of Russia. Between the ravages of war and time, only two of the original bridges from Euler’s time survive. Five bridges now connect the city and the islands formed by the Pregel River.

A similar conundrum that Euler might have considered had he the chance is the hypothetical house with five rooms and sixteen doors. The object is for a person to walk through each door once, but one time only.


Finally we come to the perplexing Mobius strip and trefoil knot. The naughty Mobius strip is something of a paradox. The single edge of a Mobius strip is topologically equivalent to a circle, and mathematically the strip is non-orientable.


A physical Mobius strip can be constructed from a belt or strip of paper.  One simply grabs the two ends and gives one end a half twist before taping the two together in a loop.  The resulting surface has only one side and one edge.  Imagine a miniature gravity-defying car driving around the surface of the strip.  If the car began on the top side of the surface, then its path after one revolution of the loop would place it on the bottom side.  Now consider a bug dragging a paintbrush along the right edge of the strip: after two revolutions of the loop it will have painted the entire edge.  We perceive two edges to the strip but realize there is only one.

M.C. Escher incorporated the Mobius strip in some of his graphic art.  In the real world, recording tapes and typewriter ribbons have been spliced in the continuous-loop, Mobius-strip fashion to double playing time or ink capacity.  Large conveyor belts have also been wrapped the same way, to increase belt life by doubling the wear surface.  The Mobius strip has several curious properties.  A continuous line drawn down the middle of the loop will be twice as long as the loop itself before it rejoins its starting point.  Cutting this paper loop down the centerline will produce one long loop with two full twists (not two strips) and, finally, two edges.  Cutting this longer strip again as before will produce two strips, each with two full twists, intertwined together.


In topology the “unknot” is a circle and the “trefoil knot” is the simplest true knot. Named after the plant that produces the three-leaf clover, the trefoil knot can be tied by joining together the two loose ends of a common overhand knot, resulting in a knotted loop.  Although it doesn’t look very convincing when done with paper, a trefoil knot can also be constructed by giving a band of paper three half twists before taping the ends and then dividing it lengthwise.



Solar energy at home

Most of the energy we earthbound humans consume comes directly or indirectly from the sun, exceptions being atomic fission and some types of chemical reactions.  The fuel oil, coal and natural gas energy that civilizations use exists because of the Sun’s previous contribution to the formation of those hydrocarbons.  Wind currents are caused by the sun warming the air; as thermals rise they are displaced by denser, colder air.  Likewise the sun’s energy is ultimately responsible for delivering snowmelt and rainwater to higher elevations, which creates the kinetic energy needed to power watermills and hydroelectric generators.  On a small personal scale, more individuals are learning to exploit the sun’s energy to heat their homes, generate their own power or cook their food.  The two main methods of acquiring power from the sun are photovoltaic (PV) cells and thermal energy collectors.

Roughly half of the energy in sunlight is absorbed or reflected before it even hits the surface of the earth.  The glazing or protective substrate in a solar collector can further diminish the amount of energy obtained.  Even the best solar panels can be considered inefficient.  The amount of energy collectible by a given solar panel is subject to many variables.  Whether talking about heat or electricity, we generally measure energy in units of watt-hours (energy = power x time) and the power arriving at a collector in watts per square meter.  Under the best and brightest conditions sunlight delivers about 1,000 watts per square meter at the earth’s surface, but under realistic or averaged conditions the expectation might only be half that.  During the daylight hours of a normal summer day at 40 degrees latitude, a solar collector would be doing well to average 600 watts per square meter.  In wintertime at the same location the same collector might gather an average of only 300 watts per square meter.  Averaged over the whole earth and the whole mean solar day (24 hours), the collectible solar energy is only about 164 watts per square meter.
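Putting the watt-hour arithmetic above to work; the collector area and daylight hours below are assumed purely for illustration:

```python
# Rough daily energy yield: average power density x collector area x sun hours.
# The 600 W/m^2 figure is the summer daylight average quoted in the text.

def daily_energy_kwh(watts_per_m2, area_m2, daylight_hours):
    """Collected energy in kilowatt-hours (energy = power x time)."""
    return watts_per_m2 * area_m2 * daylight_hours / 1000

# A hypothetical 2 square meter collector over 12 summer daylight hours:
print(daily_energy_kwh(600, 2, 12))  # 14.4 kWh
```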


Overview of PV

In a photovoltaic solar cell, an electrical charge is generated when photons excite the electrons in a semiconductor.  There are many types of solar cells, and new developments in technology will hopefully lead to the manufacture of more affordable photovoltaic panels in the future.  The warmer a photovoltaic panel gets, the less power it can produce.  Temperature doesn’t affect the amount of solar energy a panel receives, but it does affect how much power you will get out of it.

The most common photovoltaic solar cells are made by chemically ‘doping’ a very thin wafer of otherwise pure monocrystalline (single-crystal) silicon.  In a delicate and complicated fabrication process, wafers of silicon are generally cut or sliced as thinly as possible (before they crack) to a thickness of about 200 micrometers, or the width of a typical moustache hair.  Since each individual solar cell produces only about 0.5 V, several cells must be wired together in series to produce a useful photovoltaic array.  Mostly produced in China, commercial photovoltaic solar panels are very expensive, averaging $2 – $3 for every watt of capacity.  An average U.S. residence consumes something like 30.6 kWh per day, 920 kWh per month or 11,040 kWh per year.  In a country like the U.S. where grid power is comparatively cheap (averaging 10 cents per kWh in 2011), it would take a very long time for photovoltaic panels producing equivalent energy to pay for themselves.  In the meantime an individual with a “do it yourself” mentality can more directly utilize solar energy by fabricating his own contraptions to collect heat.
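A back-of-the-envelope payback estimate using the figures above ($2 – $3 per watt, 10 cents per kWh, 11,040 kWh per year).  The 7,000 watt array size is an assumption for illustration, and the sketch optimistically ignores inverter, mounting and installation costs:

```python
# Simple payback period for a PV array: up-front cost / yearly value of energy offset.
# Assumes the array offsets the home's entire consumption, which is optimistic.

def payback_years(system_watts, dollars_per_watt, kwh_per_year, grid_rate_per_kwh):
    cost = system_watts * dollars_per_watt            # up-front panel cost, dollars
    yearly_savings = kwh_per_year * grid_rate_per_kwh # value of grid power displaced
    return cost / yearly_savings

# A hypothetical 7,000 W array at $2.50/W, offsetting 11,040 kWh/yr at 10 cents/kWh:
print(round(payback_years(7000, 2.50, 11040, 0.10), 1))  # 15.9 years
```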


Solar Ovens

Although it would not be considered a quick process, it is easy to cook food with direct sunlight.  Slow cooking oftentimes creates superior dishes with the best blend of flavors.  Some heat-trap type solar ovens can easily produce temperatures over 250 deg F, sometimes up to 350 deg F.  No matter what type of oven is used (electric, gas, solar, smoke pit or Dutch), a good cook knows that slow cooking with modest heat over a long period will make an otherwise tough piece of meat more tender.


Essentially there are only two types of solar oven: those that entrap heat and those that reflect it.  To form a simple ‘heat trap’, a cardboard or wooden box can be insulated, spray painted black inside and then lidded with glass or clear plastic.   It helps when the cooking vessel itself is dark, to better absorb solar heat.  In addition to being dark, it helps when pots are thin and shallow and have tight fitting lids.  Even glass mason jars make useful solar cooking utensils.  These can be spray painted black and the lids can be unscrewed a bit to allow vapor pressure to escape.   It might seem that parabolic or concave reflecting cookers would be complicated to construct, but some examples have been made by simply surfacing the inside of umbrellas or parasols with aluminum foil.  Mirrored Mylar or similar BoPET films are also useful materials in this type of application.  Doubtless many examples or ‘instructables’ detailing the construction of reflective type solar ovens exist elsewhere on the Internet.  Some specially constructed reflective ovens claim to be able to reach temperatures of nearly 600 deg F.

The importance of cooking some foods, especially meats, is to kill bacteria.  Bacteria won’t grow below 41 deg F or survive above 140 deg F.  The internal temperature of meats needs to reach a range between 140 deg F and 165 deg F to be considered safe.  Seafood needs to be cooked to 145 deg F or hotter.  To rid poultry of salmonella, it must reach 165 deg F on the inside, and egg dishes should reach the same temperature.  Trichinosis is halted by cooking pork to about 160 deg F.   Ground beef should reach 155 deg F for safety.
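For reference, the internal temperatures above can be gathered into a simple lookup; treat it as a restatement of this post’s figures, not as an authoritative food-safety table:

```python
# Safe internal cooking temperatures (degrees F) as quoted in the text above.
SAFE_INTERNAL_TEMP_F = {
    "seafood": 145,
    "ground beef": 155,
    "pork": 160,
    "poultry": 165,
    "egg dishes": 165,
}

def is_safely_cooked(food, internal_temp_f):
    """True when the measured internal temperature meets the quoted threshold."""
    return internal_temp_f >= SAFE_INTERNAL_TEMP_F[food]

print(is_safely_cooked("poultry", 170))  # True
print(is_safely_cooked("pork", 150))     # False
```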


Solar stills

Back in the 1960’s a pair of PhDs working in the soil hydrology laboratory for the USDA invented a solar evaporation still that could suck useful drinking water out of the ground.  Even in the arid desert around Tucson, AZ where they were located, they realized that the soil entrapped useful moisture.  Such a solar still is made by digging a pit in the ground, placing a collection pot in the bottom and covering the hole with a sheet of plastic.  Additional moisture can even be gathered by placing green vegetation under such a tarp.

It seems that the first evaporative solar stills were invented back in the 1870’s to create clean drinking water for a mining community, as explained in an earlier post in this same blog named “The Nitrate Wars”.   This same distillation process, where moisture is evaporated before the condensate is collected, is employed in affordable, plastic-vinyl inflatable stills that can equip small boats and survival craft at sea.  Where once stranded fishermen and sailors faced death by dehydration, they now have the opportunity to create the drinking water they need from seawater.  Muddy or brackish, germ-infested groundwater can be reclaimed in the same way.


There are several possible techniques to employ and efficiency factors to consider when fabricating an evaporative solar still.  Obviously good direct sunlight is essential to their efficient functioning.  The ‘basin type’ solar still is the most common type encountered and somewhat resembles a heat trap solar oven.  In a ‘tilted wick’ solar still, moisture soaks into a coarse fabric like burlap and climbs the cloth before it eventually evaporates.  In higher latitudes ‘multiple tray’ tilted stills can be used, where the feed water cascades down a stairway of trays or shelves, allowing closer proximity to the glass and enabling steeper tilt angles for the panel to capture optimum sunlight.



Other liquids besides drinking water can be refined in an evaporative solar still.  Ethanol can be and has been concentrated from mashes, worts, musts or washes using a solar still.   Since a distiller usually desires more direct control over temperatures, however, he might consider solar stills to be practical only for so-called “stripping runs”.   Some of the earliest perfumes were created from fragrances collected by distillation.   Soaking the wood, bark, roots, flowers, leaves or seeds of some plants in water before distilling the mixture is a common way of obtaining aromatic compounds or essential oils.   Not all plant fragrances should be distilled, but eucalyptus, lavender, orange blossoms, peppermint and roses commonly are.   The lightest fractions or volatiles of petroleum (like gasoline) separate at temperatures available in solar stills, but the heavier ones will not.  Theoretically it should be possible to place slop or crude oil into a solar still to separate out the gasoline and other light fractions.


Solar water & air heating

Most readers will have experienced how water trapped in a garden hose will get hot on a summer day.  Portable camp showers are simple black water bags, suspended at a little elevation and in direct sunlight to warm the water.


Where climatic conditions permit people may employ gravity fed or pump pressurized waterlines and tanks on rooftops or simply along the ground to achieve the same solar water heating effect.  Others may construct or install dedicated solar heating water panels to heat swimming pool water or to pre-heat water before it enters their home’s gas or electric water heating tank.


The construction of a solar water heater and a solar air heater can be very similar in concept.  Basically air or water is conducted through pipes or conduits to a panel where the heat exchange takes place.  Copper pipe might be the most desirable material to use in a solar water panel because of its pressure holding ability, resistance to corrosion and longevity.  Thin walled pipes of cheaper metals can adequately transfer heat to air that passes through them.  A growing fad in the construction of homemade air-heating solar panels is to build the collector with empty aluminum beer or soda cans.  The tops and bottoms of the cans are punched or drilled out and the cans are glued together to form continuous airtight pipes.  The box that holds everything is well insulated (sides and bottom), and every interior surface exposed to sunlight is spray painted a dark, sunlight absorbing color – preferably using a high quality, high temp, UV protected paint.  A transparent glazing (of glass, plastic, fiberglass, Mylar, acrylic, polycarbonate, etc.) is tightly sealed over the top of the trap.  A double or even triple layer of glazing is preferable to a single one to reduce the escape of thermal heat.  While beer and soda cans are popular because of their availability and affordability, equally efficient collectors could be made from tin cans (made of metal called tinplate), rain gutter downspouts, old aluminum irrigation pipes, single walled stove pipes or even from bug screen like you’d find on a window.  This site, chosen from many that discuss solar heating with air, suggests that bug screen collectors are on par with soda can collectors and are possibly easier to construct.

In the choice of fan or blower used to push or pull air through the system, it is preferable to circulate a large volume of modestly heated air rather than a small quantity of thoroughly heated air.  Ideally a solar panel can increase the temperature of the air passing through it by as much as 50 or 60 degrees F.   For this type of collector an optimum airflow rate of 3 CFM per square foot of absorber has been suggested.  In general, the larger the solar air panel the better – small ones are probably not worth considering.  They should be built with quality paints, glazing and other components where possible, to resist corrosion and decomposition from sunlight and other climatic elements.
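The suggested 3 CFM per square foot rule makes fan sizing trivial; the panel dimensions below are assumed for illustration:

```python
# Sizing a fan for a solar air heater using the suggested ~3 CFM per square foot
# of absorber area.

def fan_cfm(panel_width_ft, panel_height_ft, cfm_per_sqft=3):
    """Recommended airflow in cubic feet per minute for a given panel size."""
    return panel_width_ft * panel_height_ft * cfm_per_sqft

# A hypothetical 4 ft x 8 ft collector:
print(fan_cfm(4, 8))  # 96 -- so a fan moving roughly 100 CFM would suit
```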

Pointing solar panels


For optimum efficiency any solar panel should face the sun at a perpendicular angle.  The position of the sun changes constantly throughout the day, however.  Some institutions or uber rich people might purchase solar trackers, which employ servo or stepper motors to keep photovoltaic panels aligned with the sun.  Such ‘trackers’ increase overall efficiency by improving morning and afternoon light collection.  The rest of us have to make do with permanently fixed or periodically adjustable panel mounts.  Normally fixed panels (in the northern hemisphere) are aligned to face due (not magnetic) south.  Some owners of grid tied solar photovoltaic panels, however, are deciding to aim their panels towards the west.


The effectiveness or efficiency of a given solar panel is definitely affected by its proper orientation to the sun, but as the sun moves around a lot, solar panels that do not automatically track its movement must seek a positional compromise.  The sun’s apparent altitude in the sky changes throughout the year.  Because of the tilt of the earth’s axis, the sun’s noontime altitude swings a full 47 degrees (23.5 degrees either way) between the summer and winter solstices, six months apart.  Solar panels near the equator can be positioned parallel with the horizon and remain largely efficient by just pointing straight up.  The further a location is from the equator, the steeper a panel’s ideal tilt becomes.  Above the 45th parallel, vertically fixed solar panels mounted to the side of a building can perform admirably in the wintertime.  There is no one perfect tilt angle that keeps a solar panel perpendicular to the sun’s rays throughout the year.  This fact motivates some people with adjustable panel mounts to periodically climb up on their rooftops, wrench in hand, to refine panel tilt.  Others might wish to install a solar panel permanently in the best year-round average position and not worry about adjustments.

Older literature for solar panel installation might quote a rule of thumb where 15 degrees are added to latitude for wintertime panel tilt, or 15 degrees are subtracted from latitude for summertime panel tilt.  A more modern set of calculations, mimicked or repeated often around the web, suggests wintertime tilts a bit steeper than common, to capitalize on midday rather than whole-day solar gathering, and summertime tilts flatter than normal, favoring whole-day rather than midday collection.

-To calculate the best angle or tilt for winter:

(Lat × 0.89) + 24° = ______   (multiply the latitude by 0.89, then add 24 degrees)

-The best angle for spring and fall:

(Lat × 0.92) − 2.3° = ______

-The best angle for summer:

(Lat × 0.92) − 24.3° = ______

-The best average tilt for year-round service:

(Lat × 0.76) + 3.1° = ______

For the purpose of illustration a latitude of 35 degrees North will be chosen.   Locations somewhat close to this latitude include the Strait of Gibraltar, Tunis Tunisia, Beirut Lebanon, Tehran Iran, Kabul Afghanistan, Seoul Korea and Tokyo Japan – and in America, cities along Interstate 40 or the old Route 66 (Raleigh NC, Memphis TN, Fort Smith AR, Oklahoma City OK, Albuquerque NM, Flagstaff AZ and Bakersfield CA).
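As a check on the arithmetic, here is a minimal sketch evaluating the four tilt formulas above at the chosen 35 degrees North:

```python
# The four rule-of-thumb tilt formulas from the text, as functions of latitude.

def winter_tilt(lat):      return lat * 0.89 + 24.0
def spring_fall_tilt(lat): return lat * 0.92 - 2.3
def summer_tilt(lat):      return lat * 0.92 - 24.3
def year_round_tilt(lat):  return lat * 0.76 + 3.1

lat = 35.0  # the illustrative latitude chosen above
print(f"winter:      {winter_tilt(lat):.1f} deg")       # steep, around 55 degrees
print(f"spring/fall: {spring_fall_tilt(lat):.1f} deg")  # around 30 degrees
print(f"summer:      {summer_tilt(lat):.1f} deg")       # shallow, around 8 degrees
print(f"year round:  {year_round_tilt(lat):.1f} deg")   # around 30 degrees
```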











Metrification for the masses

*  When they weren’t lopping off every other person’s head during the French Revolution, which began in 1789, reformers in that country seized the opportunity to make all kinds of other sweeping changes.  In 1791, for instance, the French Academy of Sciences was instructed to create a new system of measurements and units.  For two centuries now the rest of the world has been browbeaten and cajoled into adopting this sublime system of weights and measures, a process called metrification.  While most nations have capitulated to the apparent intellectual supremacy or empirical advantages of the metric system, there are still some holdouts in the world.  After two centuries these non-metricated miscreants still drive the more rabid reformatory zealots of metrification nuts.  Perhaps there are logical reasons in a few instances, not attached to loyalty or laziness, that compel these non-metric holdouts to hang onto some traditional weights and measures.

*  Feeling particularly erudite, the reformatory French academics chose to base this metric system on natural values that were unchanging and reproducible, and to use numerical units based on powers of ten.  Unchanging natural values were hard to corral back in 1791, so the official definitions of all the basic metric units have undergone several changes since then.  The metre is the most fundamental metric unit, and from it the other units were originally derived.  American dictionaries, spell checkers and textbooks won’t even spell the word right; technically a “meter” is just a measuring device.  If you’re going to adopt French units you might as well swallow their spelling.   Like the non-metric nautical mile, the metre was originally conceived as being a portion of the earth’s circumference.


*  While the older nautical mile was defined as a minute (1/60th of a degree) of arc along a meridian of the Earth, the new metre was conceptualized as one ten-millionth of the meridional distance from the North Pole to the Equator.  Even before the oblateness of the earth was fully appreciated, French surveyors in the 1790’s determined a very fair approximation of what a metre should be.  Since that time the length of the metre has grown about 0.2 mm longer.  Today most air and sea navigators still prefer to use non-metric nautical miles rather than kilometers, because when using charts (nonlinear, 2-dimensional Mercator projections or maps) it makes life a lot easier.

*  It quickly became self-evident that the intended international reproducibility of an accurate metre using the meridional definition was so impractical that a physical artifact had to be produced. In 1799 a platinum bar called the “mètre des Archives” was made and used as a copy reference.  In 1875 the “Convention du Mètre” or Metre Convention was instituted to oversee the development of the metric system.  Conceived at the same time, the CGPM (“Conférence générale des poids et mesures” or General Conference on Weights and Measures) was established to democratically coordinate international participation by holding meetings every 4-6 years.  Broad acceptance of metrification did not really begin to take hold until after WWII and the push toward European integration. SI or “Système International d’Unités” is today’s official name for the metric system, as ordained by the CGPM in 1960.

Confusion and inconstancy

*  There are inconsistencies in the metric system.  The redefinitions of base units have been frequent.  The SI crowd has begrudgingly adopted non-decimal units like seconds of time because they can produce no better alternative.  The SI intellectuals have regularly discouraged the use of seemingly compatible units and nomenclature simply because they themselves did not originally create or sanction them.  These same intellectuals have also adopted redundant and unnecessary units and nomenclature when simpler alternatives already existed.  Some unpopular and clumsy-sounding SI units are floating around.

*  The currently approved MKS (metre, kilogramme, second) system of units supplanted the older CGS (centimeter, gram, second) system.  It was once simple to think of a gram in terms of the weight of one cubic centimeter of water at the melting point of ice.  Although originally a base unit, the litre (or liter) is no longer even an official SI unit!  The kilogram originally equaled the mass of a litre (1,000 cubic centimeters) of that same cold, pure water.  Obviously these definitions were not good enough, because they no longer apply.  The kilogram is the only metric base unit that hasn’t been redefined in terms of unchanging natural phenomena.  The authoritative kilogram is an object!  You can’t just produce an accurate kilogram in your laboratory located in Timbuktu.  In a dark vault somewhere in Paris sits a precious SI manufactured artifact.  Today’s official kilogram is a cylinder of 90% platinum and 10% iridium alloy.  Where once the metre was defined as one ten-millionth of the distance between the North Pole and the Equator, it was eventually redefined as a multiple of a specific radiation wavelength.  Today’s official redefinition of the metre is as a fractional part of the distance traveled by light in a vacuum.

*  The concept of time and the replacement of legacy time units with suitable modernized counterparts have vexed zealous metric reformers for two centuries.  Years, months, weeks, days, hours, minutes and seconds are not decimally related.  We are gifted with impressive-sounding terms like nanoseconds, kiloseconds or milliseconds, but the second of time was adopted by the metric system; it was not an original metric unit.  It was first defined as 1/86,400th of a mean solar day, and later redefined in terms of astronomical observations.  Tuning-fork and then quartz-crystal clocks eventually kept time more steadily than the earth itself.  Today the SI second is officially defined as “the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom”.  Who knows how they’ll redefine the second next year?

*  Whereas a regular non-metric U.S. ton (or ‘short ton’) weighs 2,000 lbs, the Imperial ‘long ton’ or ‘gross ton’ typically used in shipping cargo weighs 2,240 lbs.  A “metric ton” or “tonne” weighs 1,000 kilograms or 2,204.6 lbs.  When appending the prefix “kilo” to ton, things start to get confusing.  In terms of explosive force a kiloton might mean the equivalent of 1,000 metric tons of TNT.  As a unit of weight or mass however a kiloton might mean either 2,000,000 lbs or the same as a kilotonne (2,204,622.6 lbs).  A gigagram would equal a kilotonne, but that term is infrequently used.
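For readers who like to check the arithmetic, the overlapping “ton” definitions above can be verified in a few lines of Python (a quick sketch; the pound-per-tonne constant is the standard conversion factor):

```python
# Pounds per unit for each flavor of "ton" mentioned above.
LB_PER_SHORT_TON = 2000.0        # US short ton
LB_PER_LONG_TON = 2240.0         # Imperial long / gross ton
LB_PER_TONNE = 2204.6226218      # metric ton (tonne), i.e. 1,000 kg

# "Kiloton" as a mass is ambiguous: 1,000 short tons or 1,000 tonnes?
kiloton_short_lb = 1000 * LB_PER_SHORT_TON   # 2,000,000 lb
kilotonne_lb = 1000 * LB_PER_TONNE           # ~2,204,622.6 lb

# A gigagram (10^9 grams = 10^6 kg = 1,000 tonnes) equals a kilotonne.
gigagram_kg = 1e9 / 1000.0
assert gigagram_kg == 1000 * 1000.0

print(int(kiloton_short_lb), round(kilotonne_lb, 1))
```

The roughly 10% gap between the two “kilotons” is exactly the gap between the short ton and the tonne.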

* The seven current hallowed SI base units are the metre, kilogram, second, ampere, candela, mole and kelvin.  The Kelvin scale is an absolute thermometric scale, but its units are not referred to as degrees.  We normally call increments of temperature “degrees” because of a decision made way back in 1724 by a German physicist named D.G. Fahrenheit.  The Fahrenheit scale divides the range between the freezing and boiling points of water into 180 equal parts – like the degrees in geometry for half a circle.  D.G. Fahrenheit also invented the glass/mercury thermometer.  About two decades later, but still well before the French reforms, a Swedish astronomer named A. Celsius borrowed Fahrenheit’s idea but divided the range into only 100 equal parts.  Originally Celsius’s scale ran backwards, counter-intuitively to today’s usage, but that situation was reversed after his death in 1744.  From 1744 to 1948 the units of what we now call the Celsius scale were better known as degrees of “centigrade“.  Eventually an Irish/British physicist named W.T. Kelvin came along with further suggestions for improvement.  The Kelvin scale begins at absolute zero – there is nothing colder.  To make the Kelvin (K) scale fit in with the decimalized Celsius scale, the triple point of water (where gas, liquid, and solid phases of water coexist in thermodynamic equilibrium) had to be defined as exactly 273.16 K.  In other words, the base-ten-loving SI / metric system uses, for one of its base units, values derived from a very inconsistent fraction (1/273.16, or roughly 0.003661).
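Since all three scales are linearly related, converting among them takes only a line or two; here is a quick sketch of the standard formulas:

```python
def c_to_f(celsius):
    # Fahrenheit divides the freezing-to-boiling range into 180 parts,
    # Celsius into 100, hence the 9/5 ratio and the 32-degree offset.
    return celsius * 9.0 / 5.0 + 32.0

def c_to_k(celsius):
    # Kelvin keeps the Celsius increment but starts at absolute zero.
    return celsius + 273.15

# Freezing and boiling points of water at standard pressure:
assert c_to_f(0) == 32.0 and c_to_f(100) == 212.0
# The triple point of water (0.01 °C) is exactly 273.16 K:
assert round(c_to_k(0.01), 2) == 273.16
```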

Discouraged, clumsy, slang or unneeded

*  Created by a Swedish astrophysicist, the tiny increment of length called an angstrom is exactly equivalent to 0.1 nanometre, or 0.0000000001 metres.  Mention however of the non-Imperial and non-metric but internationally recognized angstrom is officially discouraged by the SI’s International Committee for Weights and Measures.  The small calorie itself was a pre-SI metric unit of energy, defined in the 1820’s as the energy needed to raise 1 gram of water by 1º C.  The dietary calorie (kilogram, large or food calorie) is 1,000 times larger.  The small calorie is obsolete, replaced in preference by the official SI “joule”.  Megabars, kilobars, bars, decibars, centibars and millibars of atmospheric pressure – are not SI units.  It takes 100,000 SI-legitimate pascals to equal one bar.  One bar is roughly equivalent to one standard atmospheric pressure at sea level (14.69 psi or 101,325 pascals).  Meteorologists and weather reporters usually prefer to describe changes in air pressure in terms of millibars rather than in the exactly equivalent hectopascals; it just sounds better.  In oceanography, during a descent from the surface, drop in metres and increased water pressure in decibars correspond nicely.
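The bar and pascal relationships quoted above are easy to confirm; a short sketch (the constants are the standard values mentioned in the text):

```python
PA_PER_BAR = 100_000.0    # exactly 10^5 pascals per bar
PA_PER_ATM = 101_325.0    # one standard atmosphere in pascals

# One millibar is exactly one hectopascal (100 Pa).
assert 0.001 * PA_PER_BAR == 100.0

# A bar is close to, but slightly less than, one standard atmosphere.
print(round(PA_PER_BAR / PA_PER_ATM, 4))   # ~0.9869
```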

*  In a weak attempt to sound sophisticated, a scientific journalist might employ the term “kiloannum” to impress his audience rather than use the simpler terms “millennium” or “a thousand years”.  Students might use the jargon “fermi” rather than the more proper but awkward SI term femtometre to describe infinitesimal nuclear distances.  Attometres, zeptometres and yoctometres are smaller yet.  In astronomy, where great distances are expressed, one might seldom encounter the SI terms megametre, gigametre, terametre, petametre, exametre, zettametre or yottametre.  The most common vernacular one finds instead are the non-metric light year, parsec and astronomical units.  The “astronomical unit” (which is roughly the mean distance between earth and sun) was given its formal modern definition in the 1970’s by the IAU (International Astronomical Union – also hosted by France), partly to patch up shortcomings in regular SI units when incorporating general relativity theory.  SI brings us gawky-sounding terms like “gray” and “sievert”.  These terms were added to the dictionary not because they were necessary, but because they could be branded by SI authorities whereas “rad” and “rem” could not.  A gray is simply 100 times bigger than a rad and both units express energy radiated or absorbed.  A sievert is simply 100 times bigger than a rem and both units attempt to adjust radioactive dosages by accounting for type of tissue and type of radiation.

A Short Imperial unit background 

*  Maligned and criticized for still using old-fashioned Imperial weights and measurements when the rest of the world does not, the American public has shown resistance to metrification.  Primarily a British colony in the beginning, America inherited British imperial units, which were in turn heavily influenced by historic French and even ancient Roman measurements and weights.  The avoirdupois system of weights that Americans favor was actually developed by the French.  The Troy weight system of units of mass, still used in many locations around the world for quantifying precious commodities like gold, platinum, silver, gemstones and gunpowder, is also French (believed to be named for the French market town of Troyes).  Closely related to Troy weight, the apothecaries’ system of weights favored by physicians, apothecaries and early scientists has roots reaching all over central Europe and the Mediterranean.  The apothecaries’ system of weights was still being used by American physicians and pharmacists into the 1970’s.  After America separated from the British Empire, the Americans kept the legacy units pretty much intact while the British did not.  Parliament, by meddlesome act or decree and mostly for the purpose of increased taxation, continued to make small changes to certain units of mass and volume.  These changes caused much confusion between American and British (pre-metric) imperial units, confusion which still exists today.


*  Without digressing too far from the subject of metrification: it should be explained that without the discrepancy between wine and beer casks, and the British adoption (1824) and eventual retraction of the “stone” unit, the impetus behind a one-world metric system would never have been so great.  The legislated stone unit demanded a redefinition of several standard weights.  Today’s Imperial gallons, bushels and barrels are so screwed up because yesteryear’s hogsheads (large casks filled with wine, beer, liquor, whale oil, tobacco, sugar or molasses) were of different sizes.  A hogshead of wine has traditionally held more volume than a hogshead of beer.  In its defense, Parliament did try to standardize hogshead volume back in 1423, but this had little effect.  Coopers at different locations made casks as they saw fit, and eventually an accepted and even official difference in hogshead volumes arose, depending on contents.  A multiplicity of different gallon, bushel and barrel definitions followed suit.  The UK Imperial gallon springs from the ale gallon, but the U.S. liquid gallon is based upon the 1707 Queen Anne wine gallon.  Even today this curious distinction between wine and beer continues, as the American BATF and Treasury Department require different labeling on the two beverages.  Wine and stronger spirits are labeled only in liters or milliliters while beer containers are labeled only in gallons, quarts, pints or ounces.

* The bushel used to be a measure of volume for grain, agricultural produce or other dry commodities.  Bushels are now most often used as units of mass or weight rather than of volume.  It should be realized that the bushel of each commodity in the mercantile exchange market is unique.  A bushel of corn weighs 56 lbs. but a bushel of soybeans or wheat weighs 60 lbs.  A bushel of plain barley weighs 48 lbs. but a bushel of malted barley weighs only 34 lbs.  A bushel of oats in the U.S. weighs 32 lbs. but across the border in Canada it weighs 34 lbs.  Okra weighs 26 lbs. per bushel and Kentucky bluegrass seed only 14 lbs.  Many other commodities exist whose specific values fluctuate according to the jurisdiction (country to country; state to state).  Pork bellies (the valuable bacon only) are traded by weight (one unit equals 20 tons of frozen, trimmed bellies).  The rest of the hog’s carcass in a commodities market is expressed as Lean Hog futures.  Refined oil might be shipped in 55-gallon drums, but crude oil is measured and traded on the standard 42 U.S. gallon wooden barrel of yesteryear.  Barrels of other commodities often contain a volume of 31.5 U.S. gallons.
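Because each commodity carries its own statutory bushel weight, converting bushels to pounds is really just a table lookup.  A small sketch using the figures quoted above (the dictionary keys are illustrative labels, not official exchange codes):

```python
# Statutory bushel weights in pounds, as quoted above (these values
# vary by jurisdiction, so treat the table as illustrative).
BUSHEL_LBS = {
    "corn": 56, "soybeans": 60, "wheat": 60,
    "barley": 48, "malted barley": 34,
    "oats (US)": 32, "oats (Canada)": 34,
    "okra": 26, "kentucky bluegrass seed": 14,
}

def bushels_to_lbs(commodity, bushels):
    """Convert a bushel count to pounds for a given commodity."""
    return BUSHEL_LBS[commodity] * bushels

print(bushels_to_lbs("corn", 10))   # 560 lbs
```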


*  Where the Imperial system does not fail, and probably needed no replacement, is in its units of length, distance and area.  Imperial units of length were intuitively developed over the ages.  Metric units of length might be more easily abstracted numerically in calculations for pencil-pushing types, but they are not nearly so instinctive for everyday usage.  Engineers and architects seldom have to build what they design; that labor falls to builders, millwrights, manufacturers, fabricators and others who work with real materials on a daily basis.

*  Consider the Imperial ruler or tape measure and its metric counterpart.  Working with fractions, a fairly accurate Imperial ruler could be reconstructed by almost anyone given an empty room, a pencil, a pair of scissors and a strip or two of unmarked paper exactly one yard in length.  Feet, inches, half-inches, quarter-inches, eighth-inches and perhaps sixteenth-inches could be adequately marked upon a blank yard-long strip of paper simply by folding and halving.  In contrast, it would quickly be realized that an adequate depiction of centimeters and millimeters could not be intuitively laid out upon a blank, metre-long strip of paper.  There can be another elegance in fractions.  Builders and fabricators familiar with feet and inches can often perform the type of mental arithmetic that would send their decimal-loving metric counterparts scurrying for the nearest calculator or pencil.
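The fold-and-halve construction described above can be sketched in code: every mark on a fractional ruler is reachable by repeated halving, which is exactly why it can be rebuilt with nothing but paper and scissors.  A small illustrative sketch:

```python
from fractions import Fraction

def halving_marks(levels):
    """All tick positions inside one inch that are reachable by
    repeatedly folding in half: halves, quarters, eighths..."""
    marks = set()
    for level in range(1, levels + 1):
        denom = 2 ** level
        for num in range(1, denom):
            marks.add(Fraction(num, denom))
    return sorted(marks)

# Four successive foldings yield the familiar sixteenth-inch ruler.
sixteenths = halving_marks(4)
print(len(sixteenths))   # 15 distinct marks inside one inch
```

No division by ten appears anywhere; a decimal ruler, by contrast, cannot be generated by folding alone.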


*  Americans find many customary units desirable and appropriate.  Non-SI unit terms like liquid ounces, shots, gills, noggins, fifths, teaspoons, cups, pints, quarts, gallons, barrels, board feet, pecks, bushels, BTUs, millibars, carats, cycles per second, pounds, ounces, troy ounces, drams, tons, caliber, mils, standard gauge, rods, chains, inches, feet, yards, furlongs, miles, nautical miles, fathoms, knots, picas, angstroms, light years, parsecs, acres, townships and sections remain in the American vernacular.  The sluggish progress in thorough American metrification has been excused as the result of ignorance, laziness or complacency by the public.  That may be.  Remember though that American schools have versed students in the metric system for the last 50 years or more.  We can use SI whenever we want to.  We’ve experienced strong-arm attempts to have SI foisted upon us, as in the Metric Conversion Act and the Fair Packaging and Labeling Act.


*  Never the perpetrators of a bloody social revolution like the Russian one, or France’s, where mobs decapitated anyone who thought differently or had money, Americans might simply resist metrification because they resist anything totalitarian by nature.  That’s what metrification is: a totalitarian ideal.  It demands the wanton destruction, scourging, eradication and abandonment of any competing form of weights and measurement.  So who’s the real bigot: the unassuming Japanese or American builder who finally learns how to use a conventional tape measure well and sees no reason to change, or some frustrated high school chemistry teacher who wants a dumbed-down tape measure and for all other alternatives in the world to be immediately destroyed?

*  Although otherwise a thoroughly metricated country, Japan has carpenters, builders and realtors who still favor their shakkanho length measurements, which were acquired from ancient China.  The shaku is the base unit and was originally the length from the thumb to the extended middle finger (about 18 cm or 7 in).  That length grew to approximately 30.3 cm, or 11.93 inches (kanejaku or “carpenter’s square” shaku).  Floor space in a Japanese house is usually described in terms of a number of single traditional straw tatami mats or a square of two tatami mats (tsubo).  The koku, defined as 10 cubic shaku, is still used in the Japanese lumber trade.

*  In order to prevent the fines and prosecution that other non-SI-compliant merchants in Europe have been hit with, the British and Irish have seen fit to pass legislation which protects their traditional non-SI whiskey and beer measures (like gills, pints and Imperial gallons).  When it came to alcohol, it seems as if the rigors of metrification hit a little too close to home.  The UK decimalized its currency back in 1971 and it is the only EU member to have retained its own monetary system – which is also the oldest monetary system still in use.  Few things are as frustrating for a foreigner to comprehend as the meaning of old English tower pounds, sterling pounds, gold sovereigns, guineas, quid, fivers, coppers, crowns, shillings, sixpence, halfpennies, farthings and tuppence.  If there were space enough left in this post these could be explained.  Some of the old legacy Imperial units mentioned previously have very interesting backgrounds as well, but explanations will have to wait.  The topic of this post has been the triumphant march of metrification and the liberating, joyful peace of mind and harmony it will bring to the world once its total acceptance is finally complete.

Captured from an e-mail years ago: somewhere an anonymous wit promotes these additional units – lest they become forgotten in the march of time also…

* 1 millionth of a mouthwash = 1 microscope

* Ratio of an igloo’s circumference to its diameter = Eskimo Pi

* 2,000 pounds of Chinese soup = Won ton

* Time between slipping on a peel and smacking the pavement = 1 bananosecond

* Weight an evangelist carries with God = 1 billigram

* Time it takes to sail 220 yards at 1 nautical mile per hour = Knotfurlong

* 16.5 feet in the Twilight Zone = 1 Rod Serling

* Half of a large intestine = 1 semicolon

* 1,000,000 aches = 1 megahurtz

* Basic unit of laryngitis = 1 hoarsepower

* Shortest distance between two jokes = 1 straight line

* 453.6 graham crackers = 1 pound cake

* 1 million-million microphones = 1 megaphone

* 2 million bicycles = 2 megacycles

* 365.25 days = 1 unicycle

* 2000 mockingbirds = 2 kilomockingbirds

* 52 cards = 1 decacards

* 1 kilogram of falling figs = 1 FigNewton

* 1,000 milliliters of wet socks = 1 literhosen

* 1 millionth of a fish = 1 microfiche

* 1 trillion pins = 1 terrapin

* 10 rations = 1 decoration

* 100 rations = 1 C-ration

* 2 monograms = 1 diagram

* 4 nickels = 1 paradigm

* 2.4 statute miles of intravenous surgical tubing at Yale University Hospital = 1 IV League and…

* 100 Senators = Not 1 good decision


Yeast & Fermentation

This post endeavors to briefly illuminate a particularly minuscule organism that since the dawn of mankind has exerted considerable influence over the human condition.  Found in the dirt, air and water, some yeasts also reside naturally inside all vegetation, animals and humans.  All fungi are parasitic or saprophytic and cannot manufacture their own food.  Since yeasts are fungi, and all fungi are heterotrophs that live on preformed organic matter, some yeasts have been using mankind for far longer than he has been using them.  To state that mankind has domesticated yeast for thousands of years is probably an erroneous statement.  Whether he knew it or not, however, mankind has been exploiting these individually invisible microorganisms for his own benefit for perhaps ten millennia or more.  The historic relationship between brewing and baking is more intertwined than most readers may appreciate.  Today yeasts are also used to produce food additives, vitamins, pharmaceuticals, biofuels, lubricants and detergents.  The more one learns, the more one’s appreciation grows for these seemingly simple little life forms.  It doesn’t take a degree in organic chemistry or molecular biology to put these little critters to productive work.

Yeasts are more evolutionarily advanced than prokaryotic organisms like bacteria, which lack a nucleus (viruses are simpler still, and are not really cells at all).  Higher life forms like onions, grasshoppers, humans and yeasts are eukaryotes, which means their cells store genetic information within a nucleus.  Simpler and more basic than human cells and easier to work with, bread yeast (Saccharomyces cerevisiae) was the first eukaryotic organism to have its genome fully sequenced.  A genome is the hereditary information stored in an organism – the entire DNA/RNA sequence for each chromosome.

The S. cerevisiae yeast genome possesses something like 12 million base pairs and 6,000 genes, compared to the more complex human genome with 3 billion base pairs and 20,000–25,000 protein-coding genes.  Although sequencing has become easier in recent times, 18 years ago the thorough examination of the Saccharomyces cerevisiae (beer yeast) genome was no simple task.  That project inspected millions of chromosomal DNA arrangements, involved the efforts of over 100 laboratories and was finally completed in 1996 after seven years of hard work.

* The 6th eukaryotic genome sequenced was also a yeast (Schizosaccharomyces pombe – in 2002) and it contained 13.8 million base pairs. 

The mention of this first accomplished genome sequencing is significant because it caused an upheaval in the then-accepted classification of yeast species.  There are probably a great number of yet undiscovered yeast species in the wild, but presently only a small percentage (between 600 and 1,500 species, depending upon your source of information) are cataloged.  One of the more important fungi in the history of the world, Saccharomyces cerevisiae has a species classification that is very much in a state of flux.  You may read about the many types of bread yeast, or the hundreds of “varieties” of beer yeast, or the hundreds of “strains” of wine yeast – but for the most part these share the same DNA and therefore must be considered the same species.  With beer, and especially with wines, the choice of yeast (strain or variety, and species where applicable) can profoundly influence the beverage’s flavor profile.

Bad fungus

“Almost all yeasts are potential pathogens” but none of the Saccharomyces species or close relations have been associated with pathogenicity toward humans.   “Candida and Aspergillus species are the most common causes of invasive fungal infection in debilitated individuals”, with 6 species (Candida: albicans, glabrata, krusei, neoformans, parapsilosis & tropicalis) accounting for about 90% of those infections.

Other multi-cellular (non-yeast) fungi affect humanity in various ways: Trichophyton rubrum and / or Epidermophyton floccosum bring us athlete’s foot, ringworm, jock itch and nail infection.  A member of the genus Penicillium (with over 300 species) brings us a life-saving antibiotic which kills certain types of bacteria in the body.  Claviceps purpurea or “rye ergot fungus” – if not immediately lethal or debilitating, brought us a mind-altering alkaloid similar to LSD.  One of the more important negative influences fungi exercise upon us is their capacity to destroy food crops.

Domestication ?

A defining characteristic of domestication is artificial selection by humans.  Domestication means altering the behaviors, size and genetics of animals and plants.  These things were not done to yeast in antiquity.  Isolation of certain beneficial yeast strains was only beginning some 200 years ago, in breweries.  Only relatively recently (by 1938) was one scientist able to cross two separate strains of yeast and come up with a new one.  Although by the 1970’s scientists were beginning to mutate and hybridize yeast, it may be with the more recent attempts to engineer yeast to convert xylose (a wood sugar) into cellulosic ethanol that some additional yeast species can confidently be described as domesticated.  Even then “engineering” is a strong word.  Yeast mutate all the time without human help.  Scientists didn’t create a new fungus but started with examples that already decomposed dead trees or other cellulose-containing plant material.  By accelerating the selection process for yeasts with numerous cellulase enzymes, scientists hope to produce economical automotive fuel from sawdust and other normally wasted biomass.  The quest for an ideal yeast and bacterial biomass-consuming combination is still ongoing.  This particular process defines artificial selection, not gene modification.

Right now, this very moment, anyone can capture wild yeast from vegetable matter or from the very air to make bread or to ferment beer or wine.  In antiquity the womenfolk who cooked, and later the bakers, brewers and tavern keepers, likely kept a portion of a previous dough or barm yeast culture as a ‘starter’, simply to hasten the development of the next batch.  While this process might support claims of artificial yeast selection throughout history, one might also be reminded that sanitation during those bygone days was questionable and that exposure to wild yeast and bacteria was probably persistent.  It has always been easy to just whip up a new yeast culture from scratch, as will be explained shortly and as revealed in several recipes from a 120-year-old cookbook.


Bread, Beer & Wine

The discovery or invention of wine, beer and bread was unavoidable, and early man deserves no special intellectual credit for the achievement because omnipresent yeasts and bacteria did all the work.  Consider the cavewomen who picked a bountiful harvest of wild grapes and then carted these back home in animal skins or clay-lined baskets to be consumed later.  In a few days’ time wild yeast and bacteria would begin breaking down the fructose and glucose from juice released from crushed grapes at the bottom of any impermeable container.  The oldest available archeological evidence of a fermented beverage comes from 9,000 year old mead (honey wine) tailings found in northern China.  Here probably someone had originally, unknowingly enabled the enzymes from yeast to work by adding water to get all the sticky honey out of a container.  Likewise the inescapable discovery of bread and beer is no mystery.  Raw fresh grain is a soft and easily chewable foodstuff.  Dried grain is next to impossible to chew, so ancient man was soon mashing it between two rocks to make the powder called flour.  Dry flour is not very tasty, so the next obvious experiment would be to add water, and later perhaps to cook the gruel in a fire – eventually inventing bread.  The first breads were probably flat breads.  The proper leavening of bread actually requires several hours of rest for fermentation to create carbon dioxide bubbles, which get trapped in gluten to make bread rise.  Had someone boiled a wet soup from the flour instead and then abandoned it because it wasn’t very good, it would have turned into a beer in a few days.  Perhaps the first beer or ale resulted simply from someone’s bread falling into a pot of water.  Regardless, our encounter with fermentation and the invention of both bread and alcoholic beverage was inevitable.

Briefly, Saccharomyces cerevisiae (or sugar fungus) is typical of many yeast species but is a particularly successful species because it can live in many different environments.  Few of the other 64,000 or so members in the Ascomycota fungal phylum can reproduce both sexually and asexually while also being able to break down their food through both aerobic respiration and anaerobic fermentation – all at the same time.

budding yeast

Under favorable conditions most, but not all, yeasts reproduce asexually by budding, where one cell splits into two.  On average a particular yeast cell can divide between 12 and 15 times.  In a well-controlled ferment, aerobic (with oxygen) respiration allows “sugar fungus” yeast cells to reproduce or double about every 90 minutes.  During respiration carbohydrates donate electrons, allowing cell growth and the production of CO2 and water (H2O).  During anaerobic fermentation carbohydrates undergo oxidation while ethanol and CO2 are produced.  One yeast cell can ferment approximately its own weight in glucose per hour.  Favorable ferment conditions in this context imply moisture, mineral nutrition, a neutral or slightly acidic pH environment and a narrow temperature range of 50° F to 99° F.  Most yeast cells are killed at temperatures above 122° F.
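The doubling figure above implies simple exponential growth while conditions stay favorable; a back-of-the-envelope sketch (idealized, since real cultures slow down as nutrients run out and individual cells stop dividing after 12–15 buddings):

```python
def yeast_population(initial_cells, hours, doubling_minutes=90):
    """Idealized aerobic growth: the culture doubles every 90 minutes."""
    doublings = hours * 60.0 / doubling_minutes
    return initial_cells * 2.0 ** doublings

# A single cell doubling every 90 minutes for 12 hours (8 doublings):
cells = yeast_population(1, 12)
print(round(cells))   # 256
```

Even at this modest rate, a pinch of starter overwhelms a batch of dough or wort within a day, which is why so little yeast is pitched to begin with.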

* (No yeast yet known is completely anaerobic nor is fermentation necessarily restricted to an anaerobic environment).

Under harsh or unfavorable conditions yeasts like S. cerevisiae can become dormant and reproduce sexually by producing spores.  Spores can survive for hundreds of years, perhaps indefinitely, and like many other infinitesimal items can remain airborne for years before coming back into contact with the surface of the earth.  Anyone questioning this assertion should have a look at Lyall Watson’s book, titled “Heaven’s Breath: A Natural History of the Wind”.


A typical yeast cell measures about 3–4 µm (microns, or millionths of a meter) in diameter.  Dry packaged yeast as imaged above can survive a long time when refrigerated.  The 3 large baker’s yeast packages pictured at the bottom are labeled as containing 21 grams of yeast each.  The 3 brewer’s yeast packages on top are labeled 5 grams.  Compressed yeast, which contains fewer yeast cells per gram because less water has been removed, is estimated to contain between 20 and 30 billion living organisms per gram.  The physical volume of that gram would be about the size of a pencil eraser.


In general, bacteria are to be avoided during normal food and beverage production, but as usual there are exceptions.  Many of the approximately 125 species of lactobacillus bacteria are closely associated with food spoilage.  Without the assistance of beneficial bacteria (several of which are lactobacillus members), however, we would have no vinegar, chocolate, cider, cheese, kimchi, pickles, sauerkraut, sourdough bread or yogurt.  Bacteria can drive fermentation by themselves.  Better still, certain beneficial bacteria can assist yeasts in the fermentation reaction for breads, beers or wines and are sometimes deliberately used to do so.


In baking or brewing it is the enzymes that yeasts or bacteria possess or produce which catalyze chemical reactions and drive fermentation.  A mixture of enzymes might be needed to successfully break down complex, longer-chained carbohydrates before either bread leavening or ethanol production is achieved.  In alcoholic fermented beverages, enzymes might be acquired from sources beyond yeast and bacteria, such as from human saliva: for a thousand years descendants of the Incas have chewed maize and spat into common vats to produce the wine called “Chicha”.  The rice wine “Sake” is made with the help of enzymes from a (non-yeast) fungus mold named Aspergillus oryzae.  The enzymes used to create the Mongolian horse milk wine known as “Ayrag” or “Kumis” came from the lining of a bag sewn from a cow’s stomach.  There are far too many types of enzymes to list here, but the names of some important ones often end in the suffix “-ase” (as in: lactase, saccharase, maltase, alpha amylase or diastase, zymase or invertase and alpha-galactosidase).

Sugar or starch

To briefly outline and oversimplify a topic that deserves more attention: there are many names for, and many types of, starches and sugars and the enzymes needed to break them down.  There are simple sugars, complex sugars and very complex sugars, or conversely one could say there are monosaccharides, disaccharides, oligosaccharides and polysaccharides.  Glucose (or dextrose), fructose (or levulose), galactose and ribose are monosaccharides and examples of the simplest sugar molecules.  Two monosaccharides are found combined in a disaccharide – as in sucrose, lactose or maltose.  Table sugar is almost pure sucrose.  An enzyme like invertase (also called saccharase or sucrase, among other names) is needed to split sucrose into two simple sugar molecules (glucose and fructose) before fermentation into ethanol and CO2 can commence.  Oligosaccharides generally contain anywhere between 3 and 9 monosaccharides.  Polysaccharides are even longer, linear or branched polymeric carbohydrates and may sometimes contain thousands of monosaccharides.  Starch and cellulose are examples of polysaccharides.
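The sucrose-splitting step and the fermentation that follows can be checked with simple molar-mass arithmetic.  A sketch using standard atomic weights (the theoretical yield shown is an upper bound; real fermentations divert some sugar into yeast growth and by-products):

```python
# Approximate standard atomic weights (g/mol).
C, H, O = 12.011, 1.008, 15.999

sucrose = 12 * C + 22 * H + 11 * O   # C12H22O11, table sugar
water = 2 * H + O                    # H2O
glucose = 6 * C + 12 * H + 6 * O     # C6H12O6 (fructose is isomeric)
ethanol = 2 * C + 6 * H + O          # C2H5OH
co2 = C + 2 * O                      # CO2

# Hydrolysis: sucrose + water -> glucose + fructose (mass balances).
assert abs((sucrose + water) - 2 * glucose) < 0.01

# Fermentation: C6H12O6 -> 2 C2H5OH + 2 CO2 (mass balances too).
assert abs(glucose - (2 * ethanol + 2 * co2)) < 0.01

# Theoretical ethanol from 100 g of sucrose (4 ethanol per sucrose):
ethanol_g = 100.0 / sucrose * 4 * ethanol
print(round(ethanol_g, 1))   # ~53.8 g
```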

Sugarcane was originally indigenous to Southeast Asia and was slowly spread by man to surrounding regions.  In ancient times sugar was exported and traded like a valuable spice or medicine – not as a food commodity.  There was some spread of sugarcane cultivation in the medieval Muslim world, but otherwise cultivation did not blossom until the 16th century, when colonials reaped their first sugar harvest in the New World (Brazil and the West Indies or Caribbean Basin).  Sugar from sugar beets was not realized until a German chemist (Andreas Marggraf, in 1747) demonstrated that the beet roots contained sucrose.  The first refined beet sugar commodity appeared around 1802.



“Leaven” is the ancient equivalent term for yeast, and it caused bread to rise.  Leaven was mentioned in the Bible when Moses led the Israelites out of Egypt, all of them leaving in such a hurry that they could not wait for their bread to rise.  Flat, unleavened, unremarkable bread is served during Passover, which is less a feast or celebration than a remembrance of deliverance, simplicity, haste, and powerlessness.  “Yeast” is a younger word with roots in Indo-European and Old English words meaning surface froth, bubble, foam and boil.  In times past and probably for many centuries, housewives and cooks usually made both bread and beer on a frequent basis, from a leaven-yeast starter that they maintained in the kitchen.  In both Medieval Europe and colonial North America many households also maintained a constant supply of “small beer” on hand for servants and children or for general consumption.  Small beer had a low alcohol content but some taste, and since its wort had been boiled during brewing it was usually much safer to drink than the local water.  Two centuries ago some children drank small beer with breakfast just as today’s children might drink orange juice.

Almost all bread before the 1840s was probably a form of sourdough bread.  Without the help of either bacteria or refined sucrose, S. cerevisiae yeast alone cannot properly break down the starches (polysaccharides or carbohydrates) in flour, work its fermentation or cause bread to rise.  In the early 1800s, for the first time, bakers collectively began making sweet breads (as opposed to sour) by using bottled yeast skimmed off and collected from ale (beer) vats.  This renaissance in baking quickly spread outwards from Vienna, Austria.  In general, bakers started buying top-fermenting beer yeast from brewers.  Initially the yeasts were collected by skimming barm or krausen off the top of a beer vat and putting it into bottles.  In about this same time frame another renaissance or revolution was occurring in the beer world.   German brewers were learning to make lagers, which employed a different (bottom-dwelling) yeast and much cooler and longer fermentation periods.  At the time lagers were a taste sensation and considered a great improvement over the heavier ales.  With many brewers ‘changing horses in midstream’ to use different yeast and processes in order to jump on the lager bandwagon, bakers in Vienna and elsewhere were left without convenient sources of sweet yeast.  To fill that void ‘press yeast’ was developed.  The forerunner of modern baker’s yeast, press yeast was first skimmed from the top of a dedicated grain mash, then washed and drained carefully before being squeezed in a hydraulic press.  Modern baker’s yeasts have largely been selected for optimum carbon dioxide production; such yeasts would still make a good ale.  Bread dough produces alcohol while fermenting, but that alcohol escapes when the dough is baked.

* The grains corn and rice contain no gluten.  To make breads from these grains rise, flour with gluten must be added. 

* “Quick breads” like biscuits, pancakes, bannock, scones, sopaipillas and cornbread are made with “self-rising flour” or with regular flour and the help of a baking powder.  Self-rising flour merely contains its own baking powder.  Baking powder is a mixture of baking soda, acid salts and starch (which helps keep the other two ingredients inactive).  Baking powder is basically a little bomb, a little acid-base chemical reaction for making gas bubbles, waiting only to be triggered by the addition of liquid.  
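As a rough sketch of that little bomb: in a typical single-acting baking powder the soda is sodium bicarbonate and the acid salt is cream of tartar (potassium bitartrate); once liquid dissolves them they react, releasing carbon dioxide gas:

NaHCO3 + KHC4H4O6 → KNaC4H4O6 + H2O + CO2     (baking soda + cream of tartar → Rochelle salt + water + carbon dioxide)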


Sourdough bread

Sourdough is a vague term.  There are many ways to create a sourdough starter.  While the name implies a sour taste due to the contribution of bacteria and / or wild yeast, some sourdoughs taste little different from normal commercial sweet bread.  Some sourdough starter recipes actually call for baker’s yeast to be used, while others might begin with pineapple juice, potatoes or even yeast captured in an opened can of beer left on the kitchen countertop for about a week.  A characteristic practice of sourdough bread making is that a portion of the ‘sponge’ is retained after each dough batch and stored in a cool place to be used as the next starter.  ‘Sour mash’ whiskey has the same connotation – part of the original yeast and enzyme culture is retained and used in the next batch – maintaining consistency of product.   In brewing, “re-pitching” the yeast is similar to using a sourdough starter; a portion of the live yeast from the bottom or top of a wine must or grain mash is saved to be used again.

In the 1840s, as the first Bavarian lager technology was reaching America, gold miners were about to congregate in the California Gold Rush.  San Francisco is a modern bastion of sourdough bread patronage, with some restaurants or bakeries claiming to have maintained the same starters since the Gold Rush days.  One species of lactic acid bacteria found in some sourdough is actually named after the city: Lactobacillus sanfranciscensis.  These starters might also include species of yeast (like Saccharomyces exiguus or Candida milleri) that can help leaven bread by working on more complex sugars instead of simple sucrose.

Homemade yeast

While fresh compressed yeast was becoming common in the urban food markets of Europe and America by the 1870s, many individuals (especially those in more remote areas) simply made their own yeast.  The “White House Cook Book” was an authoritative publication ((c)1887 and before) used by ambitious housewives across the country.  The book gives several recipes for starting a yeast culture, including the use of milk or salt, and even for drying the yeast into cakes for later use.  One of the book’s recipes for yeast is simply titled “Unrivaled Yeast” and it resembles the following (the actual recipe is on p.242):

– boil 2 oz. of hops in 4 qts. of water for 30 minutes, strain and let cool

– mix this water in large bowl with 1 qt flour, ½ cup salt and ½ cup brown sugar –let stand for 3 days

– mix this with 6 boiled and mashed potatoes – let stand for another day, stirring frequently.  

– ready to use or to be stored in bottles for future use (good if kept cool for about 2 months).

Obviously the yeasts native to the potatoes were killed by boiling, so yeasts from the atmosphere, and perhaps from the flour as well, were the ones captured.  Sanitation and sterilization of utensils was, and still is, important to limit the procreation of undesirable bacteria.   Hops (flowers of the Humulus lupulus plant) are frequently mentioned in these older recipes because hops, which were also used as an herbal medicine, act as an antiseptic / antibacterial preservative by inhibiting bacterial growth but not beneficial yeast growth.

* The Reinheitsgebot or Bavarian Purity Law (decreed for Munich in 1487 and extended to all of Bavaria in 1516) specified the use of only water, barley and hops for the brewing of beer.   The contribution of yeast was not yet appreciated, but the antibacterial benefits and virtuous bitter flavor components of hops were.  Evidence suggests that hops were being used in Bavarian beer as early as 736 at an abbey outside Munich.  The Reinheitsgebot also had the effect of discouraging competing imported Belgian beers, whose brewers preferred to use gruit, and of preserving the wheat harvest for those needing to bake bread.  

 * Actually there may be much more to the story of the Reinheitsgebot.  Conveniently coinciding with beer brewing politics during this period was the growing religious dichotomy between the Catholic Church and theologians like Martin Luther.  The arrival of the Gutenberg printing press around 1440 would later help spread Protestant literature during the Reformation, and ultimately hastened the widespread replacement of gruit flavoring with hops.   Secretive, controlled and monopolized gruits were reputed to enhance sexual drive, while Protestant-approved hops were believed to do the exact opposite.  The motivations (for the substitution of hops for other herbs) were religious and mercantile.

There are many, many other interesting facts to discuss about yeast, enzymes and bacteria with regard to fermentation, but this post has to draw to a conclusion or come to an ending somewhere.  No more time will be taken to examine yeast-killing sulfites in wine, the alcohol tolerance of different yeasts, turbo yeast, or how Champagne is created by secondary fermentation.  Somehow it seems that yeasts have used us just as much as we have used them.  We have changed their nature little – if at all.  Of the small percentage of yeast species we have identified, we are on the verge of understanding the true nature of just a few.