For about three million years prehistoric man apparently had no means to initiate a fire.  Once acquired from nature, maintaining a continuous fire likely became a critically important duty for someone – in a family, tribe or group.   It is assumed that humankind didn’t acquire reliable fire-making skills until eons later, somewhere around 7,000 BC.  Then, as mentioned in a previous post, it took humans about three thousand years to advance from copper-smelting to iron-smelting temperatures; an increase of only about 500° Celsius.

*  Advancements in the control of fire’s concentration and the increase of its heat were critical prerequisites to tool improvement and therefore also crucial to our cultural evolution.   Anthropologists and archaeologists thus categorize mankind’s cultural progression into periods  (i.e. Old Stone Age, Neolithic Age, Bronze Age, Iron Age, Dark Age, etc.)  indirectly based upon our ability to control heat.

The process of initiating a fire would remain difficult and inconvenient for another 8,800 years.   Before the invention of the phosphorus friction match two short centuries ago, cultured society’s best fire-starting technologies were scarcely improvements upon, or less tedious to perform than, those used by contemporary aboriginal peoples or prehistoric cave dwellers.   A review of some of these archaic techniques follows.

Starting from scratch 

Early on, the most popular fire-starting method seems to have been the fire drill.  In this friction method the drill is a shaft of wood spun by hand pressure.  Dissimilar woods are usually chosen: a harder wood for the shaft and a softer one for the plank.  Continued friction causes powder or dust to separate from the softer wood and become heated.


In the image above a base plank is specially prepared with a notch to allow the tiny, precious glowing ember of hot dust to fall out onto some tinder.  The person blows on the ember, creates a flame, and then adds more tinder and kindling to make the flame grow.  The whole process appears easy enough, but in truth can be a significant chore.  With practice however, an individual can make the process work in less than 60 seconds.  It’s a question of experience, technique and good tinder.  The world’s record for getting a suitably hot coal with a hand drill is 4.5 seconds.

Another ancient friction fire-starting method is the wood plough.  Popular in Polynesian cultures, this method also requires careful selection of woods.  The base is usually a small tree trunk or staff of soft wood with a groove worn into it.  While the base is held immobile, a plow of smaller-diameter hardwood is drawn back and forth in the groove.  As with the fire drill, friction creates dust which turns into a hot ember, which is then dropped into some tinder.  Like the fire drill, the fire plow requires experience and steadily applied pressure to work.


A method of primitive fire starting popular in Indonesia is the “bamboo fire saw”.  A short section of dry bamboo is split in half.  On one half-section a small notch is started with a knife.  The other half-section, which is to be used for the sawing, can be whittled down in size; one edge should be sharp.  In method “A” below, the saw is below the tinder and is held still by the body pressing it against a firm object.  The ember is caught in the tinder above.  In method “B” below, the position of the saw is reversed and it is held in the hand.  The ember falls onto the tinder below.


A popular New Guinea variation of the bamboo fire saw utilizes a thin strand of bamboo or tough local vine as a rope saw.  The bamboo plank and tinder beneath are stood upon as the friction is applied.


The bow drill is actually an adaptation of the fire drill / hand drill method.  This method is also ancient; Egyptians were using it while building their pyramids.  Most other civilizations that used the bow for archery probably learned to use it as a fire-starting tool as well.  This was the favored method of some American Indian tribes, although they did not forget the hand drill.  An archery bow will work for fire starting, but a bow for such a chore does not need to be so big.  In fact a small branch about 2 feet long, with a slight curvature, is optimum.


The same kind of bottom plank or fireboard incorporated by the fire drill method is used.  Other acceptable fireboards are a pair of branches tied together, a branch that has a seasoning split, or a chunk of dead and dried tree fungus.


The shaft or spindle can be shorter with a bow drill (usually somewhere between 5” and 10”).  A thin spindle of about ½” to 5/8” diameter is probably best.  Initially pointed for starting a new hole, the spindle thereafter is kept rounded.  A socket of wood or bone knuckle is held in the hand that applies downward pressure on the spindle.  Optimally, the hand holding the socket is braced under the shin, below the knee, where it can be held steady and secure.  Since friction and heat are wanted only at the fireboard end of the shaft, the other end, which is held by hand, will benefit from a socket with lubrication or a hard insert to reduce friction.  If the socket is wood, then a metal bottle cap or a small concave stone insert will reduce friction while allowing more pressure to be gradually applied.  The types of wood used for spindle and fireboard make a big difference.  Given the choices at a random location, only experimentation will tell.  Yucca and elm rate highly but maple and pine do not.


3 bow drill sockets

The Egyptian bow drill used several millennia ago was often a tiny affair.  One of its attributes was that the spindle was attached to the string.  Extra coils of string were wrapped around the spindle (wrapped in both directions from center).  This allowed better traction and control over the spindle.  The spindle was fastened either by the string passing through a hole in the spindle or by the tying of a simple clove hitch knot.  Someone skilled in the use of a bow drill can start a flame within 25 seconds.  The world record in the late 1930’s for getting a flame with a bow drill was 7.5 seconds.


Another old variation of the bow drill is the pump drill.  The pump drill is only slightly more complicated to build than the bow drill.  Used correctly, the spindle can be kept in continual motion by the inertia of the flywheel and the rhythmic motion of the pumping hand.  Useful friction is only exerted on the downward stroke, however.


Before the 19th century the most advanced means of initiating a fire was with a kit called a tinderbox.  The tinderbox, typically made of metal, usually contained a sharp piece of flint (rock), a hard piece of steel and tinder.   Tinder simply means some type of very combustible material.   Tinder can be anything from char cloth (linen or cotton cloth that has been pre-burned in a low-oxygen environment) to spider webs, various plant fibers, termite dust, grass, pitch wood, birds’ nests, down, fungus, Spanish tree moss, paper from wasp or hornet nests, oakum, cotton balls dipped in Vaseline, or lint taken from a clothes dryer.   Tinder needs to be dry, fibrous, fluffy and highly ignitable.   Many materials can be masticated and crushed to make them more fibrous.   The fancier tinderboxes of the pre-match era often had a C-shaped or horseshoe-shaped piece of steel to hold in one hand, while the flint rock was held in the other.   When struck together, friction ignites tiny shavings of metal, not rock.


In all these aforementioned fire-starting techniques, the precious spark or glowing ember must be captured by the tinder and skillfully assisted with extra oxygen to create a flame.

A more modern equivalent to the old flint-and-steel combination is the ferrocerium rod.  Typically found in cigarette lighter “flints”, wind-up toys that spark, or a welder’s striker, ferrocerium is a man-made mix of cerium and iron.  This material is usually pushed by a spring against an abrasive piece of moving steel to create sparks.  As with the old flint-and-steel method, friction ignites tiny shavings of metal.  In this situation however, it is the iron in the ferrocerium that burns, not the harder steel.  Cerium’s low-temperature pyrophoricity is responsible for the easy sparking.  A modern survival kit might contain a single rod of ferrocerium as a fire starter.  It is resilient to damage by water and reliable.  Better yet, a survival kit might include a magnesium fire starter.  Shavings of magnesium are scraped off into a little pile (already atop paper or other tinder).  Sparks are then scratched off the attached ferrocerium rod onto the magnesium flakes, which should then burst into flame.


* A little match history

It took humankind at least nine thousand years of trial & error to progress from hand or bow drills to fire starters as instantaneous as kitchen matches & butane cigarette lighters.  Yet we modern people casually dismiss matches and lighters as being very simple devices.

In 1669 in Hamburg, Germany, an alchemist was trying to convert some of “life’s essence” into gold.  He took some of his own urine, let it rot, boiled it down to a paste, then cooked it some more—letting the vapors travel through water.  What he got was a waxy substance that glowed in the dark: white phosphorus.  A few years later the Irish physicist Robert Boyle (of Boyle’s Law) rubbed this newfound phosphorus against some sulfur and created a flame.  Boyle did not exploit his opportunity to invent the friction match.  Mankind was to wait another 1.5 centuries before finding an easier way to start a fire.

Along came an English apothecary and chemist in 1827.  He invented a functional but impractical match called the “Prometheus”.  This was a wood splinter with a potassium chlorate head placed next to a tiny glass bead of sulfuric acid, then rolled in paper.  A person used tweezers, or a bite with the teeth, to break the glass and set off the flame.  More importantly, our apothecary later stuck a mixture of starch, gum arabic, antimony sulfide and potassium chlorate onto a stick and let it dry.  This invention he called a “Congreve”, named after an officer who had introduced war rockets to the British arsenal: large rockets (16’ long) that could rise 9,000 feet into the sky and which sprouted great flames (some were used against Fort McHenry, in Baltimore harbor, during the War of 1812).  Our English chemist and friction-match inventor sold a few matches but did not get rich.  Another Englishman exploited the commercial market for these matches and renamed them “Lucifers”.  They became very popular with smokers, but stank.   In 1830 a French chemist created a match that did not stink, using white phosphorus, which was highly reactive and toxic.

During the next 50 years, large match factories were created that mostly exploited the cheap labor of children, young girls and women.  “Phossy jaw” was a notorious ailment caused by inhalation of white and yellow phosphorus vapors in the match factories, and it often led to death.  The English “Suffragette Movement” and a defining moment in trade union history began with women striking against the conditions and hazards of the match factories in the Bow district of London.

In 1855 a Swede created the first safety match, using less dangerous red phosphorus and ignitable only on the box.  In 1889 the first matchbook matches were invented and were called “flexibles”.

By 1910 the Diamond Match Co. had patented the first nonpoisonous match, using sesquisulfide of phosphorus.  Asked by President Taft to release their patent for the good of mankind, Diamond Match did so in 1911.   A century later, the once-common strike-anywhere type of kitchen match has become rare in the US.


More modern “primitive” fire starting methods

Everyone probably knows that by using a magnifying glass, energy from sunlight can be concentrated enough to start a fire.  Without a quality lens and the cooperation of good sunlight though, even this task is easier said than done.

A ray of light passing through the center of a thin lens keeps its original direction.  A ray that strikes anywhere else is bent; the amount a light ray is bent increases with its distance from the center of the lens.  A magnifying glass is actually a double-convex lens.  It can gather the energy from a broad area and concentrate it into a smaller one.  The focal point or hot spot is where parallel light rays converge (cross) along the principal axis of the lens.
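The concentration a lens achieves can be estimated with simple thin-lens geometry: the sun is not a point source, so its focused "hot spot" is a tiny image whose diameter grows with focal length, and the brightness gain is roughly the ratio of lens area to spot area.  A minimal sketch, with the 50 mm lens diameter and 100 mm focal length chosen purely for illustration:

```python
SUN_ANGULAR_DIAMETER = 0.0093  # radians (the sun spans about 0.53 degrees)

def spot_diameter_mm(focal_length_mm):
    """Diameter of the sun's image formed at the focal point of an ideal thin lens."""
    return focal_length_mm * SUN_ANGULAR_DIAMETER

def concentration_ratio(lens_diameter_mm, focal_length_mm):
    """Roughly how many times brighter the hot spot is than unfocused sunlight
    (lens area divided by spot area, ignoring losses and aberrations)."""
    return (lens_diameter_mm / spot_diameter_mm(focal_length_mm)) ** 2

# Assumed example: a typical 50 mm magnifier with a 100 mm focal length.
print(spot_diameter_mm(100))          # the sun's image is just under 1 mm across
print(concentration_ratio(50, 100))   # several thousand times concentration
```

This is why the dust, scratches and poor figure discussed next matter so much: anything that smears that sub-millimeter spot quickly erodes a concentration ratio in the thousands.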


Dust, scratches or other imperfections on the lens will diffract the light and lessen the practicality of this fire-starting method.   Plastic or toy magnifying lenses generally diffract, diffuse, disperse or scatter so much light that they are useless for starting flames.  One is not likely to have, or to run across, a magnifying lens in an emergency situation.  Other types of lenses might be available however, and might be drafted into making an improvised double-convex lens.   Glass lenses can be salvaged from eyewear, cameras, binoculars and telescopes.  The first two lens shapes, a & b, in the following image bend light in a way that is unbeneficial to fire starting.  A pair of lenses with shape a, placed back to back, however, might adequately mimic a double-convex lens.  The last two lens shapes, e & f, are the type normally found in eyeglasses.


A drop of water placed on the back or inside of lens type e or f will produce a temporary double-convex shape.  The surface tension of the water droplet should produce the opposing convex surface.  It takes a very steady hand to find the optimum focal point and to hold the eyewear and water still long enough to initiate a flame.  Not all eyewear is created equal, so the vision prescription will actually be a factor in any success.


Another clever idea for making a fire in an emergency involves the simple clear plastic sandwich “baggie”.  One fills the baggie with water, and then twists the contents into a bubble or sphere.  With this makeshift double-convex lens, one again needs to focus the hot spot upon the tinder and hold it steady long enough for the sun to do its work.  A clear chunk of ice might also concentrate solar energy in one spot long enough to ignite some tinder.   The notion, however, of starting a fire with anything less than a very good lens is nifty but frequently impractical.   Keeping the water from leaking can be difficult, as is holding it still for any duration.   Even when sunlight is strong and direct, an average lens will diffuse the light so much that even the driest tinder will not ignite.  This water baggie idea is more fanciful than realistic.


A parabolic mirror or highly polished parabolic surface can also capture heat from the sun.   If held at the correct angle to the sun, the surface will concentrate the light into one small spot at the focus of the parabola.  For example, a person could polish the concave bottom of a beer or soda can to a high sheen using some steel wool.  If steel wool cannot be acquired, perhaps diatomaceous silica, abrasive leaves from plants in salt marshes, or graphite (as in a pencil lead) might work.  Graphite is commonly used as a lubricant but it can also perform as a mild abrasive.  The polished surface (bottom of the can) is then propped up with rocks and pebbles until the sun is focused at a very small spot at the bottom edge of the can.  Very small pieces of tinder are then dropped onto the hot spot, where they should become hot enough to ignite.


Flashlight batteries & steel wool are a handy way to start a fire.  For many decades, Boy Scout manuals have advocated this trick.  One strips a small ribbon of wool to the proper length to reach both positive and negative terminals, and then shorts it out.    Two 1.5-volt cells in series provide 3 volts, which is usually enough to make the steel wool glow red hot and then ignite.  Larger batteries will work also.


Another interesting fire-starting method is the ingenious “fire piston”.  This device was discovered by Europeans visiting the region in the 1860’s.  (* The fire piston has been found across Southeast Asia – from Sumatra and Indochina to Borneo and the Philippines, and throughout Indonesia’s roughly 17,508 islands).   The fire piston is thought to be an ancient device because of its wide distribution.  It may have resulted from the development of the blowgun or blow tube.  The fire piston works on the same principle as the diesel engine.  A hand-sized tube is fitted with a close-fitting rod (piston).  For a tight seal, the rod is fitted with a gasket of string or sinew and packed with animal fat or wax.  A small piece of tinder is placed in the dimple of the plunger, the plunger is inserted into the tube, and a pump or two of the piston generates enough heat (through pressure) to ignite the tinder.  The fire piston requires careful construction and close tolerances, but it is apparently a very reliable device.
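The diesel-engine comparison can be made concrete with the ideal adiabatic compression formula: squeezing air quickly (so no heat escapes) raises its temperature as T2 = T1 × (V1/V2)^(γ−1).  A minimal sketch, where the 25:1 compression ratio is an assumed, plausible figure for a well-made piston:

```python
GAMMA = 1.4  # heat capacity ratio (Cp/Cv) for air

def compressed_temperature_k(t_initial_k, compression_ratio):
    """Ideal adiabatic compression: T2 = T1 * (V1/V2)^(gamma - 1)."""
    return t_initial_k * compression_ratio ** (GAMMA - 1)

# A brisk stroke compressing room-temperature air (293 K) by an assumed 25:1:
t2_k = compressed_temperature_k(293.0, 25.0)
print(t2_k - 273.15)   # several hundred degrees C -- well past char cloth's ignition point
```

Real pistons lose some heat through the walls, which is why the stroke must be fast and the seal tight, but the margin over tinder's ignition temperature is generous.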


Yet another interesting proposal is that of using ammunition from a firearm to build a fire.  The notion is to dump out half of the gunpowder from a cartridge and stuff in a bit of rag cloth to replace the bullet.   Ideally the propelled cloth should smolder long enough to ignite tinder (and the excess gunpowder).  In practice, however, the cartridge primer usually just ejects the cloth and fails to ignite anything.  The process might work for someone under special conditions, with the right firearm, using a cartridge with the right powder.  Pistol cartridges use the fastest, most volatile nitrocellulose powders.  Shotgun powders are the equivalent of slower pistol powders.  Rifle powders are the largest-grained, slowest-burning powders of all, and therefore should be the hardest to ignite.  Firecracker or ‘flash’ powder is faster than black powder, and both deflagrate faster than nitrocellulose powders.  In any event, gunpowder from any firearm cartridge could complement other tinder or help start a fire from an ember achieved by other means.  Gunpowder can be unpredictable or sometimes troublesome to ignite.   Many a fool has probably blackened his face or singed his eyebrows by patiently holding a flame to the stuff.



In conclusion, it is hoped that the reader will part with a few new notions.   Throughout history, fire and its heat have played an extremely significant role in the development of human technology and culture.  Refined heat allows us to create sophisticated materials, it extends our lives by allowing us to survive cold weather, it generates our electrical power, and it kills disease in our food before we eat it.  The reader should now be better equipped to initiate a fire in an emergency, possibly using one of several archaic methods if absolutely necessary.   If unpracticed in these primitive skills, however, he or she might soon discover that what looks so simple can actually be very difficult.

* Bit of trivia:

It may not be appropriate to call someone a good Boy Scout just because they can start a fire.  

– The Boy Scout movement began in 1908 in the British Isles, and spread to America by 1910.  The society attracted urbanite boys and instructors more than their countrified counterparts.  For a century now, the thirteen different editions of the Boy Scout handbook have traditionally placed more emphasis on controlling fire than on starting it.  Never has skill in fire making been a requirement for Scout advancement.  In 1911 the only fire-related merit badge was for Firemanship, which focused upon extinguishing fires safely and avoiding panic. Today’s equivalent merit badge for Fire Safety differs slightly by including small requirements like that of igniting a camp stove prudently.

– Where modern Scout manuals might have more images and information concerning outdoor skills, yesteryear’s handbooks were more moralistically toned, aimed at building leadership, character and integrity.  When examining some of the older editions, it would appear that people had a more defined set of ethics than they do today.




Familiar Batteries



Although at first they may appear to be a mundane topic for discussion, batteries are contributing in increasingly important ways to the modern lifestyle.   Items from spacecraft to even some failing human hearts depend upon batteries.    Although a myriad of battery types and chemistries exist, none are ideal or very long lasting.   In almost every category, the quest for a better battery is an urgent one.   In this post, an attempt is made to illuminate the construction, advantages or limitations, recyclability and, where applicable, the proper recharging of some of the most common consumer-grade batteries.


The “dry cell” battery made its first appearance during the 1900 Paris World’s Fair.    From this evolved the zinc-carbon cells which were so ubiquitous during the 1960’s.   Such cells are not dry, but actually contain an electrolyte of moist paste.   So-called “heavy duty” (zinc-chloride) cells then began to appear, which offered an improvement in performance by featuring purer chemicals and an electrolyte of zinc chloride.   In today’s marketplace, zinc-carbon and zinc-chloride cells have been largely displaced by the more expensive alkaline cell.   So far, all of these cell types are progressive variations of the original Leclanché cell, invented around 1866.  In some circles zinc-carbon cells are simply referred to as Leclanché cells.

Leclanché cell image modified from public domain source

The term “primary cell” denotes a battery which is intended to be thrown away and not recharged after use, while the term “secondary cell” denotes a rechargeable type.   Battery chargers for zinc-carbon and zinc-chloride batteries have been built in the past, but their effectiveness was minimal and the process fairly pointless.  The components of these old-style cells do contain materials, however, which might be useful to an experimenter.   For example, the cases of Leclanché-type primary cells are made of useful zinc.  The carbon graphite rod at the cell’s center can be filed to a point at one end and, when attached to a 12v automotive battery, can be used to expeditiously solder electrical connections in an emergency.


Furthermore, both zinc-carbon and alkaline cells contain manganese oxide (specifically manganese (IV) oxide); a common inorganic pigment used in dyes, paints, ceramics and glassmaking.   As long as 19,000 years ago in Europe, prehistoric cavemen were painting cave walls black and dark brown with manganese oxide, and were achieving umber, sienna & burnt sienna hues by mixing or cooking in varying amounts of iron oxide.

Ammonium chloride (NH4Cl), composing the electrolyte of the zinc-carbon cell, is made by the reaction of hydrochloric acid and ammonia.  This chemical has a wide range of applications.   Also known as sal ammoniac, ammonium chloride can be found in food additives, baked bread (where it acts as a yeast nutrient), salty licorice candy, cattle feed, and some cough medicines, where it acts as an expectorant.   Ammonium chloride acts as a nitrogen source in some fertilizers; it can also be found in the glue that bonds plywood and as a thickening agent in certain hair shampoos.   Ammonium chloride can clean a soldering iron.   It is used in some soldering fluxes, and once upon a time it was even used (along with the help of a little copper) to produce green and blue colors in fireworks.

Zinc chloride (in heavy duty cells) can also be found in corrosive, non-electrical soldering fluxes.  Sometimes used as a disinfectant, in antiseptic mouthwashes and in dental fillings, zinc chloride in higher concentrations can also dissolve cellulose, starch and silk.  Zinc chloride is also a frequent ingredient in military smoke grenades.


In a battery, energy density is the amount of energy stored per unit volume or mass.  Alkaline cells have 3-4 times the energy density and a much improved shelf life compared to a zinc-carbon cell.  Appearing on the market in the late 1960’s, they usually have an outer shell made of steel.  Although generally thought of as primary batteries, and contrary to what might be stated on a label, alkaline cells can be recharged; a small current of about 65mA, interrupted periodically, will do the trick.  Commercial pulse chargers for alkaline cells are available but rare.   Alkaline cells get their name from the strong base of potassium hydroxide (caustic potash) used in the electrolyte.   Potassium hydroxide (KOH) is hygroscopic (it has a high affinity for water) and is sometimes used as a desiccant.   Some shaving creams, cuticle removers and the leather-tanning solutions used to remove hair from animal hides employ potassium hydroxide.

The 9 volt battery is properly termed a ‘battery’ because it is composed of a bank of individual 1.5 volt cells.  The construction of the 9 volt battery has varied over the years, but nowadays the most common assembly is of six rarely-seen AAAA-type cells.

Some other less common primary dry cells that won’t be discussed here include: the aluminum battery, chromic acid cell, nickel oxyhydroxide battery, silver-oxide battery, and zinc air batteries.  Again, the main difference between a primary battery and a secondary battery is the ease with which the chemical reaction within the cell can be reversed.   “A battery charger functions by passing a current through the cell in a direction opposite to that of the flow of electricity during discharge.”


It may be useful at this point to realize that the spiral-wound (jelly-roll or Swiss-roll) construction of some of the battery cells mentioned next can appear similar to the construction of some capacitors.  Over time, in spiral-wound batteries and capacitors alike, the crystalline structure of the plate material or electrolyte eventually changes and causes complications.   Also, the separators that isolate the plates can deteriorate with age, eventually allowing opposing plates to make contact and short out.  When old devices like radios, stereos and TVs stop working, it is often discovered that bad capacitors caused the problem.   Simpler than a battery cell, a capacitor doesn’t produce electrons; it only stores them.  A capacitor can dump its entire charge in a split second, whereas a battery cell discharges much more slowly.


Nickel–cadmium batteries (NiCd or NiCad) got their name from the chemical symbols of their electrodes.   The first NiCd battery was a wet cell created in Sweden around 1899.   Beneficial attributes of this type of rechargeable battery include its tolerance to being deeply discharged for long periods, its ability to withstand very high discharge rates with virtually no loss of capacity, its lower self-discharge rate, and its performance in cold weather.  Outdoor solar patio lamps are one application where NiCds work admirably.   Negative attributes of the NiCd type include a phenomenon known as voltage depression (voltage depletion, “lazy battery” or “memory” effect).   Voltage depression is attributable to increased internal resistance caused by metallic crystal growth in the cadmium.   Improper or unsophisticated recharging of NiCads is probably the main reason for the continuing decline of their popularity.  The surface of the cadmium plate in a good NiCd cell has a small crystalline structure.  When these crystals begin to grow, the surface area is reduced, so voltage depression and loss of capacity result.  The crystals can grow large enough and sharp enough to penetrate the separator between electrodes.

* It is sometimes possible to temporarily reclaim “spent” NiCd cells or battery packs with a trick.  By zapping NiCd cells or battery packs with a strong DC current like that from a welder or automotive battery charger (positive to positive and negative to negative), the size and sharpness of the crystalline dendrites within the cadmium hydroxide electrode can be reduced and performance partially restored.   Even battery packs that seem to have been dead for several years can be recovered this way.  If not used constantly, however, these seem to return to their original dormant state much sooner than they should.  Nickel-based cells have venting mechanisms to allow gases to escape in the event of heat from overcharging; therefore the electrolyte might dry up.  As with other “dry cells”, the electrolyte can also migrate away from the terminals over time.

There is not much that can be reclaimed from either a NiCd or NiMH cell.   Cadmium hydroxide is more basic than zinc hydroxide.   Cadmium (Cd, atomic # 48) is a rare, soft, ductile and toxic transition metal that is found in trace amounts in most zinc ores and is often collected as a byproduct of zinc production.  Sometimes replacing zinc in corrosion-resistant coatings, cadmium electroplating of steel is common for aircraft parts.  Cadmium can be found in nuclear reactors, where it controls neutrons in nuclear fission.   Red, orange and yellow paint and plastic pigments are often made with cadmium.

NiMH (nickel-metal hydride) cells are similar to NiCd cells, having replaced the negative cadmium electrode with one of a hydrogen-absorbing alloy.   Superior in some ways to NiCads but not in others, NiMH cells only arrived in the marketplace in 1989.   Having 2-3 times the capacity of a NiCd cell, they are useful in high-drain applications, like the demands of digital cameras for example.   NiMH cells however have a very high self-discharge rate (perhaps 30% a month); that means they lose their charge just by sitting idle.   NiMH cells exhibit much less apparent voltage depression or recharge “memory” than NiCd types, but it can still occur.  Unlike NiCd cells, NiMH cells should not be deeply discharged (except on occasion before recharging) and they should be kept “topped up” or recharged frequently.

 * The amount of charge delivered by a typical “AA” alkaline battery is about 5,000 C (coulombs; 1 C = about 6,241,000,000,000,000,000 electrons).  Rechargeable “AA’s” and some alkalines display the relative capacity of a cell with a “mAh” (milliamp-hour) rating.  One mAh = 3.6 C, and 1,000 mAh = 1 amp-hour = 3,600 C.  In the often-used gas tank analogy, the current draw represents how fast the gas is being used, while the mAh rating represents the size of the gas tank.  A car with a bigger gas tank will go farther, but the bigger gas tank will also take longer to refill.
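The mAh-to-coulomb arithmetic above is a one-line conversion; a minimal sketch using a couple of typical (illustrative) AA capacity ratings:

```python
ELECTRONS_PER_COULOMB = 6.241e18  # one coulomb's worth of electrons

def mah_to_coulombs(mah):
    """1 mAh = 0.001 A flowing for 3600 s = 3.6 coulombs of charge."""
    return mah * 3.6

# Typical example ratings: a 1,200 mAh NiCd and a 2,500 mAh alkaline AA.
print(mah_to_coulombs(1200))   # about 4,320 C
print(mah_to_coulombs(2500))   # about 9,000 C
print(mah_to_coulombs(1200) * ELECTRONS_PER_COULOMB)   # electrons delivered, ~2.7e22
```

Note that a 2,500 mAh label works out to well over the ~5,000 C quoted above for a typical alkaline AA; the discrepancy reflects how much capacity depends on the discharge rate, which the next aside explains.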

*  The mAh rating stamped on an “AA” battery can be misleading when comparing different types of batteries.  New alkaline “AA’s” might have a 2,500 mAh rating, while rechargeable NiCds or NiMHs might only be rated at 1,200, 1,900, etc.  In high-drain applications (like digital cameras) however, these rechargeable cells will far outlast the alkaline types before needing a recharge.  Alkaline batteries are not designed for high discharge demands, and only deliver full capacity if the power is drawn slowly.


* Some of the first “button” type battery cells were mercury or mercuric oxide batteries.  Used in hearing aids, watches, calculators and other small portable electronic devices, mercury cells were popular and common between 1942 and the early 1990’s.  In the 1990’s the European Union and the United States began to legislate this type of chemistry out of existence.  Mercury cells had a nominal voltage of 1.35 volts and a high capacity, achieved by using an alkali electrolyte with zinc and mercuric oxide electrodes.

Cells in the lithium battery family use lithium metal or lithium compounds in the anode, but vary widely in choice of cathode and electrolyte.  Lithium cells offer higher voltage and greater energy density than most other battery types, but they are also far more expensive.    Depending on its chemistry, a lithium cell can provide 3.3 – 3.7v of nominal cell voltage (compared to 1.5v for zinc-carbon, zinc-chloride and alkaline cells, or 1.2v for NiCd and NiMH cells).


Lithium prismatic cells of monopolar or stacked configuration are similar to the voltaic pile in concept, with the positive and negative plates sandwiched together in layers with separators between them.   A newer way to construct multiple electrode cells is to arrange them in what is called a “bipolar configuration”.   This looks like a stacked sandwich or prismatic configuration, but here the negative plate of one cell becomes the positive plate of the next cell.   The term “bipolar” is almost a play upon words, given the historic use of this unusual metal in treating manic depression (more commonly referred to today as “bipolar disorder”).   More on this bipolar topic momentarily.

Because they are pressurized and may use a flammable electrolyte, “Lithium-ion” batteries can be dangerous.   A standard lithium cell is not rechargeable but a lithium-ion cell is.  While lithium primary cells have electrodes (generally anodes) of metallic lithium, rechargeable lithium-ion battery (LIB or Li-ion battery) cells use electrodes composed of various materials impregnated with lithium ions.   [Some examples are: lithium iron phosphate (LFP), lithium cobalt oxide (LiCoO2), lithium nickel manganese cobalt oxide (NMC) and lithium manganese oxide (LMO)].   Some of the newest battery designs contemplated by researchers impregnate carbon nanotube cathodes with lithium on a nanoscopic scale (particles usually measuring between 1 and 100 nanometers).  In the near future we may witness the commercialization of the so-called “nanobattery”.

The capacity of Li-ion type rechargeable batteries will diminish substantially after a few years.   Li-ion cells don’t have a “memory” and don’t get confused by shallow discharges.   It is not wise to strain such a battery by frequently discharging it completely, nor is it beneficial to keep it fully charged all the time.   Quick discharges also place strain on this battery type.  Over time a regularly used Li-ion battery will suffer less capacity loss than one that is used infrequently.   These cells don’t like extreme cold but they hate hot temperatures.

Popular lithium-ion polymer batteries (LiPo, LIP) should connote cells built with a non-liquid polymer electrolyte that does not leak.   Confusing the issue, however, manufacturers soon expanded this meaning to include lithium cells with pouch-type flexible polymer casings.

* Lithium is a very curious material.  It does not occur naturally in a pure state because it is a highly reactive alkali metal with one of the lowest standard reduction potentials of any element (it is very easily oxidized).  With an atomic number of only 3, refined lithium metal is so soft it could be cut with a knife and so light that it would float on water.  A comparatively rare element and strategically important material, it is hard to acquire and therefore costly.   The price of the metal has skyrocketed since WWII.  During that war lithium was mainly used in high-temperature grease for aircraft engines.  Soon after, it was used to stage man’s first nuclear fusion reaction (1952 / lithium transmutation to tritium).  In 1954, mixed with hydrogen (as lithium deuteride), it composed the fuel of the Bikini Atoll (Marshall Islands / Castle Bravo) thermonuclear “H” bomb.  This particular test surprised its designers by being a far more powerful blast than expected (at 15 MT, the greatest yield of any U.S. nuclear test) and also created international repercussions concerning atmospheric thermonuclear testing.  Stable lithium hydroxide was stockpiled for many years due to its strategic value in the manufacture of hydrogen bombs.  In nuclear power plants and ship or submarine reactors, lithium hydroxide may be added to the coolant to control its pH, offsetting the boric acid used to absorb neutrons.   In some underwater torpedoes a block of lithium is sprayed with sulfur hexafluoride to produce the steam which turns the propeller.  Lithium is used in heat-resistant glass and in the manufacture of telescope lenses because lithium fluoride crystals have a very low refractive index.  Chosen for their resonance, lithium niobate crystals are used in mobile phones.  Lithium is used to color red flares & red fireworks, as a flux for welding and soldering, and as a fusing flux for enamels and ceramic glazes because it lowers their melting points.

Sodium affects excitation and mania in the human brain, so doctors and psychiatrists often prescribe lithium as a mood stabilizer.  In treatment of bipolar disorder / manic depressive disorder, lithium affects the flow of sodium through nerve and muscle cells in the body.   The terms for this disorder denote uncontrolled mood swings from up to down, or high to low, and back.   Lithium treats the aggressive, hyperactive and manic symptoms of the disorder.   In humans amphetamines produce effects similar to the symptoms of mania, and herein lies another interesting quality of lithium.   Apparently lithium battery cells (a cheap source of the metal) are frequently used as a reducing agent in the illicit manufacture of methamphetamine.  One recipe called the “Nazi method” requires anhydrous ammonia, ether, lithium and pseudoephedrine.  A more complicated recipe also uses lithium but substitutes the anhydrous ammonia with ammonium nitrate, lye, salt and a caustic drain opener composed of sulfuric acid and a cationic acid inhibitor.  Doubly methylated phenylethylamine (meth) and its precursor amphetamine are both built upon the plant-derived alkaloids ephedrine and pseudoephedrine.   Ephedrine and pseudoephedrine (from the Ephedra distachya plant) are active ingredients found in several brands of effective decongestant.

The largest producers of lithium are Chile and Argentina.  Large deposits of lithium have been discovered on the Bolivian side of the Andes and a great deal of the metal is dissolved in the oceans.  Acquired primarily from brine lakes, clays and salt pans where it is refined electrolytically, production of the metal is slow.  There is no standard spot price for the metal in a futures market or stock exchange.  China has become the world’s largest producer and consumer of lithium-ion batteries.   With the metal ever-growing in utility and popularity, and with huge requirements expected for future electric automobiles, market analysts predict that production of lithium will soon fall short of demand.


The construction of lead-acid automobile batteries has changed very little in the last 50-60 years.   A standard car battery has a nominal voltage of 12.6 volts achieved with only 6 cells, because the nominal voltage of each cell is 2.1 volts.   Typically in each cell, alternating plates of different polarity (the positive containing lead dioxide (PbO2) and the negative plain lead (Pb)) are separated by nonconductive paper or synthetic dividers and surrounded by an electrolyte of about 35% sulfuric acid (H2SO4) and 65% water.  The electrolyte of a healthy cell should have a specific gravity of 1.265 @ 80°F.

*   During the discharge cycle of a lead-acid battery, the negative plates (lead) combine with the SO4 (from the sulfuric acid, H2SO4) to produce lead sulfate (PbSO4) and the electrolyte’s specific gravity goes down.   The electrolyte becomes weaker and the potential between the ± plates diminishes.   Conversely, during the charge cycle electricity is passed through the plates, forcing SO4 back into the electrolyte.   The lead sulfate is broken up as lead dioxide and plain lead are re-deposited upon their respective plates.   The specific gravity and voltage (potential between plates) are re-elevated to the proper levels.
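Because specific gravity tracks the discharge reaction, it can serve as a rough state-of-charge gauge.  A minimal sketch follows; the endpoints used (about 1.120 when discharged, 1.265 when fully charged at 80°F) are common rule-of-thumb values for flooded cells, not a standard, and real readings vary with temperature and battery design:

```python
def state_of_charge(sg, sg_empty=1.120, sg_full=1.265):
    """Approximate state of charge (0.0-1.0) from electrolyte specific
    gravity, assuming a linear relationship between the two endpoints."""
    fraction = (sg - sg_empty) / (sg_full - sg_empty)
    return max(0.0, min(1.0, fraction))  # clamp to the 0-100% range

print(state_of_charge(1.265))            # 1.0  -> fully charged
print(round(state_of_charge(1.190), 2))  # 0.48 -> roughly half charged
```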

Industry nomenclature- lead acid batteries

Aside from common automotive batteries, buyers also have access to “maintenance free” batteries, “deep cycle” batteries, “hybrid” or “marine” batteries, “gelled” deep cycle batteries and “AGM”(Absorbed Glass Mat) batteries.   The chemistry of these differing lead-acid batteries remains the same but the quality or quantity of the components change.  “Maintenance free” batteries are usually just heavy duty versions of the same basic design.   Generally the construction is better, components are thicker and the materials are more durable.   Commonly the plate grids contain cadmium, strontium or calcium to help reduce water loss by reducing gas.   Such batteries are often closed systems (can’t add water or check specific gravity) and they are often referred to as “lead-calcium” batteries.

Automotive batteries are optimized to start car engines.  The hardest work they are expected to do is to start a cold engine on a cold day.  Hence they are constructed internally with many thin plates within each cell, to maximize surface area and therefore current output.  An automotive battery is designed to produce a large current for a short time.  Unless abused, a car battery is seldom drained to less than 20% of its total capacity.  Allowing this type of battery to drain beyond that point (or allowing it to self-discharge by not using it for long periods) can be very detrimental to the battery’s longevity.   By contrast, “deep cycle” batteries as used in golf carts, electric fork-lifts and boat trolling motors are optimized to provide a steady amount of current for a protracted period of time.  These can be deeply discharged repeatedly (to 80% of capacity, although doing so strains the battery) whereas an automotive battery cannot be.  Deep cycle batteries have fewer but thicker plates within each galvanic cell, made of higher-density plate material.  Less electrolyte and better separators are also used.  Alloys used for the deep cycle cell plates may incorporate more antimony than car batteries do.

* Lead acid batteries generally have two common ratings stamped upon them; CCA & RC.   Cold Cranking Amps (CCA) is the number of amps a battery can produce for 30 seconds at 0°F (-18°C) while staying above 7.2 volts.    Reserve Capacity (RC) is the number of minutes that a battery can deliver 25 amps at or above a 10.5 volt threshold.
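Since the RC rating is defined as minutes at a 25-amp draw, it can be converted to an approximate amp-hour figure (only an approximation, since usable capacity shrinks at higher discharge rates):

```python
def rc_to_amp_hours(rc_minutes, load_amps=25):
    # RC minutes of sustained draw, converted from amp-minutes to amp-hours
    return rc_minutes * load_amps / 60

# A battery carrying a 120-minute RC rating holds roughly 50 Ah at that load:
print(rc_to_amp_hours(120))  # 50.0
```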

Generally, a deep cycle battery will possess only one half to three quarters the cold cranking amps, but twice to three times the reserve capacity, of an automobile battery.  A deep cycle battery can endure several hundred total (complete) discharge/recharge cycles whereas a car battery is simply not designed to be totally discharged.  This reserve capacity and discharge tolerance make deep cycle batteries preferable to automotive types for off-grid electrical storage.  Any lead acid battery however will last longer if it is not allowed to discharge to a great degree.  A battery discharged to 50% every day will last about twice as long as one cycled to 80% of capacity daily.  For less strain and increased longevity, deep cycle batteries should probably be drained no more than 10% on a daily basis.

“Hybrid” batteries or “Marine” batteries may be labeled deep cycle, but are something of an undesirable compromise.  “Gelled” deep cycle batteries offer a safer, less hazardous electrolyte in gel form but at a heftily increased price.  AGM (Absorbed Glass Mat) batteries incorporate a boron-silicate glass mat between plates.   Also called “starved electrolyte” batteries, the mats are only partially soaked in acid.  These are less hazardous because they won’t spill or leak acid if damaged.  These sealed batteries also recombine oxygen and hydrogen back into water during charging.  The lifecycle of an AGM deep cycle battery typically ranges from 4 to 7 years.  Deep cycle gelled and AGM type batteries can get pretty big and might cost well over $1,000 each when new.

All lead-acid automotive and deep cycle type batteries will eventually age or fail, for a wide variety of reasons.  A normal automotive battery might age because lead dioxide flakes off the positive plate due to natural contraction and expansion during everyday discharge and charge cycles.  Shorts between plates, buckling of plates, loss of water, negative grid shrinkage, positive grid growth and positive grid metal corrosion can all cause a battery to fail.  Battery aging can be accelerated by fast charging, overcharging, deep discharging, high heat and excessive vibration.  Acid stratification is a situation where weak acid sits at the top and concentrated acid at the bottom of an automotive battery, a condition caused perhaps by a power-hungry car that is not driven enough to fully charge its battery.  Sulfation is caused by undercharging or by allowing a lead-acid battery to self-discharge by sitting for a long period in an undercharged condition.   In a sulfated battery hard lead sulfate crystals will fill the pores and coat the plates.  In a few instances it may be possible to rectify sulfation in a battery, but beware of false claims and salesmen selling snake oil.

* It is interesting to note that a lead-acid battery does not require sulfuric acid as an electrolyte to work.   Alum (hydrated potassium aluminum sulfate) solutions work, and alkali or base solutions may work as well.  An evident superiority of sulfuric acid is that it works as antifreeze by causing a significant freezing-point depression of water.   Alum solutions tend to crystallize as well as freeze.

Methanol fuel cell / NASA image


Fuel cells

Fuel cells are similar to batteries in that they convert chemical energy into electricity.  Like battery cells, fuel cells have anodes, cathodes and electrolytes.  The main difference between the two is that the chemicals are self-contained within a battery’s cell(s) but must be imported or fed to a fuel cell.  A continual supply of fuel, and of oxygen or another oxidizing agent, must be fed to the fuel cell to perpetuate its chemical reaction and electrical output.   The most commonly used fuels are hydrogen, methanol and natural gas (with hydrogen often being produced from the latter two).

Although “fuel cell technology” may seem like a new buzzword in the automotive industry, an Allis-Chalmers tractor was driving around under fuel cell power more than half a century ago.   A Welshman (William Grove) invented the first fuel cell in 1839 and below is one of his sketches.


Of the several types of fuel cells designed thus far, the electrolyte chosen (whether liquid or solid) determines the composition of the anode, the cathode and usually a catalyst as well.   Alkali fuel cells use an electrolyte of potassium hydroxide, operate around 350° F and require an expensive platinum catalyst to improve the ion exchange.   A Proton Exchange Membrane fuel cell uses a permeable sheet of polymer as its electrolyte, works around 175° F and also requires a platinum catalyst.   Platinum catalysts are also required for Phosphoric Acid fuel cells, which use corrosive phosphoric acid as the electrolyte and work at about 350° F.   Molten salts (generally carbonates of lithium, sodium or potassium) are the electrolyte of choice in Molten Carbonate fuel cells.  These fuel cells work at a hot 1,200° F or so and employ a non-precious metal like nickel as the catalyst at both electrodes.   Hotter yet, Solid Oxide fuel cells require an operating temperature of about 1,800° F before the chemical reactions begin to work.  The electrolyte in one of these cells is frequently a hard ceramic compound of zirconium oxide, and the catalytic activity is enhanced by the complicated composition of its electrodes.


Homemade Batteries

In a previous post Luigi Galvani, Alessandro Volta, the voltaic pile and Benjamin Franklin’s coinage of the term “battery” were discussed.   The image above shows several ways to construct simplistic batteries.  Each of these examples exploits dissimilar metals and an electrolyte that can be either acid or alkali based.


In the image above a potential and usable current should be created once an electrolyte is poured into the can, except for one problem.  Beer and soda cans are sprayed with a plastic polymer coating to prevent interaction of the beverage with the metal, and this coating would interfere with ion exchange.  In a battery cell the current carrying capacity (or power) is governed by the area of the electrodes, the capacity is governed by the weight of the active chemicals, and the cell voltage is controlled by the cell chemistry.  While a strong electrolyte might produce more current it would also eat through the very thin wall of the aluminum can sooner.


The notion intended by the image above is that a PVC pipe holds an electrolyte (preferably a mild mixture of bleach and water) and electrodes of copper pipe and tin solder are used.  Obviously the anode and cathode must not make contact, but the closer together they are suspended, the better the ion exchange will be.   House wiring (simply called “Romex” by some American electricians) comes in both copper and aluminum versions and could also be applied in this fashion.


To prevent sacrificial damage of the electrodes when this type of primary cell is not being used, it would be beneficial to be able to remove the electrolyte.  The image above suggests a way to connect PVC pipe together so that electrolyte can be added or removed when necessary.   Eight cells would produce 12v if their nominal voltage was 1.5 volts each.
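The series arithmetic above is simple enough to sketch: cell voltages add when cells are wired in series, while capacities add when they are wired in parallel.

```python
def series_voltage(cell_voltage, cells):
    # Voltages add in series; capacity (mAh) stays that of one cell.
    return cell_voltage * cells

def parallel_capacity(cell_mah, cells):
    # Capacities add in parallel; voltage stays that of one cell.
    return cell_mah * cells

print(series_voltage(1.5, 8))            # 12.0 -> the eight-cell case above
print(round(series_voltage(2.1, 6), 1))  # 12.6 -> a six-cell lead-acid battery
```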


The antique Daniell cell (above) probably could have gone without mention here, except that some artistic types might find it interesting.  Looking to eliminate hydrogen bubbles, Daniell (1836) came up with a battery cell that used two electrolytes rather than one.   Originally, solutions of copper sulfate (deep blue in color) and of zinc sulfate (or sulfuric acid) were separated by a porous barrier of unglazed ceramic (or plaster of Paris, later used by Bird).  Operation of a single cell (2 half cells) worked fine until the porous barrier became clogged with deposited copper.   Later, a ceramic pot inserted inside a copper jar separated the two solutions.  Because of the flow of current the ceramic eventually became coated with copper.  Still later variations of the Daniell cell included Bird’s cell and the gravity or crow’s foot cell.  In the gravity cell the difference in specific gravity of the two solutions is all that is necessary to keep them separated, so the containers of such cells should not be jostled.  Gravity cells, whose zinc electrodes resembled a crow’s foot, were the favored source of power for telegraph stations (especially in remote areas) for about 90 years, because they were easily maintained by replacing simple components as needed.   Modern incarnations of the Daniell cell incorporate a “salt bridge” (either a glass tube filled with a fairly inert jellified solution of potassium or sodium chloride, or filter paper soaked in the same two chlorides).  Bridging the two separate containers, the salt bridge completes the circuit by allowing ions to flow between the half cells.
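The Daniell cell's familiar ~1.1 volts can be predicted from textbook standard electrode potentials (the two values below are standard reference figures versus the hydrogen electrode):

```python
# Standard reduction potentials (volts vs. the standard hydrogen electrode)
STANDARD_POTENTIALS = {
    "Cu2+/Cu": +0.34,  # copper is reduced at the cathode
    "Zn2+/Zn": -0.76,  # zinc is oxidized at the anode
}

def cell_emf(cathode, anode):
    # EMF = E(cathode) - E(anode)
    return STANDARD_POTENTIALS[cathode] - STANDARD_POTENTIALS[anode]

print(round(cell_emf("Cu2+/Cu", "Zn2+/Zn"), 2))  # 1.1
```

The same subtraction explains the ~1.5 V of a zinc-carbon cell or the 2.1 V of a lead-acid cell, once the appropriate half-cell couples are substituted.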

Pomace wine


Making wine from fruit is very easy, usually much easier than making alcohol from grain.  In a previous post about yeast it is proposed that early man discovered, by almost unavoidable circumstance, how to make this alcoholic beverage.  Although the basic process of winemaking is simple, making a consistent product from batch to batch or from year to year is more difficult and requires some science.  The physiological ripeness of grapes or other fruit, the effect of differing yeast strains and the development of tannins as wine ages can become complex subjects indeed.  This post attempts to brush past the more subtle aspects of winemaking, but still show the uninitiated novice that making a good wine can be a simple and rewarding task.  What will be referred to here as a “pomace wine” process seems to work well for white wine grapes and other fruit like peaches, plums and apricots.

“Must” is freshly pressed fruit juice which contains particles of skins, pulp, seeds and stems.  These solids in must are referred to as ‘pomace’.  The length of time that the winemaker might allow pomace to remain combined in the must can have a large influence in the final character of a wine.   The pigment and tannin content of a wine will be increased if the pomace is allowed to remain throughout primary fermentation.

This alternative pomace process differs from the more common practice of squeezing and separating the juice from the pulp before beginning fermentation.   While grapes are used in this example, the method is probably even more applicable to wines made from most any other type of fruit.  The advantages of this pomace wine method might become self-evident in terms of labor efficiency, in more desirable color and flavor in the final product and in the conversion of more sugars into alcohol.   After fermentation the wine is normally separated from the pomace by “racking”, or siphoning only the clear wine from one container to another.   The leftover pomace will be rich in ethanol.  Water might be added to this residual pomace to make a second batch of wine, or these wet solids might be distilled to create a “poor man’s pomace brandy” like Grappa.  If the distillate is added back to the clarified wine then a “fortified wine” (like Sherry, Port or Madeira) is created.

Grapes are easy

Yeasts thrive in a slightly acidic environment.  For wine the ideal acidity is about 0.6%, which roughly corresponds to pH 3.5.   Grapes generally come with close to ideal acidity for purposes of winemaking.  There are thousands of varieties of grapes and most will range between pH 2.80 and pH 3.84.   Fruits in general tend to be more acidic than vegetables.  Less acidic fruits like bananas and coconuts, however, would need to be amended with a little tartaric or citric acid prior to fermentation.  Acidity also comes into play later during the clarification of a wine.  Cloudiness in a wine is the result of suspended, electrically charged proteins & polyphenols.   To clear haziness in a wine, periodic racking, filtration and ‘fining’ or ‘clarifying agents’ can be employed.   This potentially complicated topic will be approached a little later.

Aside from having a low pH, grapes have a high monosaccharide sugar concentration.  Grapes have an abundance of easily accessible glucose & fructose which allow the ‘sugar loving yeast’ Saccharomyces cerevisiae to quickly flourish and perform its magic.  By contrast a grain wort has complex sugars or starches which require a “cracking” into monosaccharide form, before production of ethanol can commence.

1 Wash


In the above photograph the grape clusters are dunked in a mild Clorox (sodium hypochlorite bleach) bath, next in a disinfecting sodium bisulfite solution and finally in a rinse of pure water.  This process rids the grape clusters of most insects, arachnids, bacteria and wild yeast.   Finally the grapes were separated from the stems.

2 Process

grape104 (1)

Next the grapes were juiced in a food processor.   Some sources will discourage the thought of processing grapes in a blender, for fear of releasing undesirable tannins from crushed stems and seeds.  In this case however the stems were tediously removed beforehand, and there is actually little probability of cracking individual seeds when the blending is done briefly and cautiously, just enough to liquefy the pulp.  Carefully controlled pressure must be applied in a commercial wine press as well, to avoid crushing the seeds.

grape104 (2)

Some winemakers might pour the must into a bag of cheesecloth to facilitate the easy removal of the pomace later.  Here though, the juiced pulp was simply poured into a sterilized fermentation bucket.  After the fermentation bucket was almost full, ¼ teaspoon of sodium bisulfite (a source of sulfur dioxide) was mixed into the pulp and the lidded and rag-covered fermentation bucket left to sit for 24 hours.  This kills remaining bacteria and wild yeast, some of which reside naturally inside the fruit.  It is important not to completely fill the fermentation bucket.   Leave an airspace of 2 or 3 inches at the top to reduce the possibility of an overflow during fermentation.   Fermentation buckets like this also have a 6 U.S. gallon capacity; the excess volume is usually needed to fill a 5 gal glass carboy after a racking or transfer that leaves unclear sediments behind.

3 Oxygenate 


After the 24 hour waiting period the sulfur dioxide will have dissipated, being consumed by killing bacteria, trapping oxygen and reacting with aldehydes.   In the picture above the must has separated into sugar rich juice at the bottom and lighter pomace at the top.

4 Inoculation

Almost any type of yeast can be used but the choice will dictate the flavor profile of the wine.   Here a Canadian yeast known as ‘Lalvin  71B-1122’ was used although there are several other fine brands of commercial wine yeast to choose from.  While a Champagne yeast would produce more alcohol, this strain was picked because of its lower alcohol tolerance (about 14%).  By not consuming all the sugar from the grapes this yeast is expected to create a less dry and softer wine and to preserve or enhance the fruit flavor and add fruity esters.


Normally one could just sprinkle the yeast package over the must and stir it in, and with luck wine will be produced in about a week.  In this case however a yeast starter was created and used.   Creating a so-called ‘yeast starter’ is simply a means of ‘proving the yeast’ and of ensuring a vigorous fermentation.   A couple of cups of juice were scooped out and the yeast added to that, in a glass quart jar covered with a paper towel (to allow oxygen to pass while protecting against the introduction of airborne bacteria and wild yeast).  With sugars to feed on, the number of yeast cells in the starter can be expected to double every 3 hours.  Along with the yeast, 3 tsp. of nutrient and 2.5 tsp. of pectic enzyme were added to the starter solution in this instance.
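The doubling arithmetic above is just exponential growth.  A small sketch follows; the 3-hour doubling time is the figure quoted in the text, and actual growth rates vary with temperature and nutrients:

```python
def yeast_population(initial_cells, hours, doubling_hours=3):
    # Each doubling period multiplies the count by two
    return initial_cells * 2 ** (hours / doubling_hours)

# After 24 hours the starter has doubled eight times, a 256-fold increase:
print(yeast_population(1, 24))  # 256.0
```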


* Pectic enzyme or pectinase breaks down the complex and stubborn polysaccharides (long chained sugars) found in pulp and skins. Pectic enzymes can also improve fining and filtering operations of high-pectin wines.

* Pectin is the jelly-like matrix which helps cement plant cells together.  It is a structural polysaccharide contained in the primary cell walls of plants.  Fruit ripens and becomes softer as the enzymes pectinase and pectinesterase break pectin down.  Pectin acts as a soluble dietary fiber which traps carbohydrates and binds to cholesterol in the gastrointestinal tract. Pectin separated and concentrated from citrus fruit is used as a gelling agent in jams and jellies.

* Yeast nutrient provides the vitamins, amino acids, nitrogen, potassium and phosphorus that yeast cells need to grow well.   Contents of packages labeled “Yeast Nutrient” may include: dead yeast, folic acid, niacin, diammonium phosphate, calcium pantothenate, magnesium sulphate and thiamine hydrochloride.   Homemade nutrient might be made from ammonium or potassium sulphate and ammonium or potassium phosphate plus a few vitamin B1 pills.   Plain un-sulfured molasses is full of vitamins and minerals.  In laboratories a drop of molasses water is commonly added to cultures in Petri dishes to stimulate yeast growth and reproduction.

While sodium bisulfite powder was used both as a sterilizing agent and as a source of sulfur dioxide for the wine in this instance, Campden tablets are perhaps more popular.  Potassium or sodium metabisulfite Campden tablets are also used as an anti-oxidizing agent or to remove chlorine from water.  What Campden tablets can and can’t do


By no means is it necessary for a winemaking novice to purchase or use a hydrometer.   The use of one, though, offers the brewer a little more understanding and control over the process of fermentation.   Hydrometers measure the specific gravity of liquids, and different versions can be found to measure the amount of cream in milk, sugar in water, alcohol in liquor, water in urine, antifreeze in car coolant or sulfuric acid in a car battery.  Simply put for winemaking purposes here: water containing sugar is denser than pure water, and pure water is denser than ethanol.  In the picture above, pure water in the beaker should read 1.000 but the fresh grape juice in the image reads a denser specific gravity of about 1.070.   This reading indicates a potential alcohol by volume (ABV) between 9 and 10% once the sugars are consumed by fermentation.  As fermentation progresses the hydrometer will sink deeper in each sample, eventually reading less than the density of pure water.
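Those hydrometer readings can be turned into an alcohol estimate with a common homebrewer's approximation, ABV ≈ (OG − FG) × 131.25, where OG and FG are the original and final specific gravities (the constant is a rule of thumb, not an exact law):

```python
def estimated_abv(original_gravity, final_gravity):
    # Common homebrew approximation: ABV ≈ (OG - FG) * 131.25
    return (original_gravity - final_gravity) * 131.25

# A must at 1.070 fermented down to 0.995 gives roughly 9.8% ABV,
# in line with the 9-10% potential mentioned above:
print(round(estimated_abv(1.070, 0.995), 1))  # 9.8
```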

5 Fermentation

Yeast cells reproduce in an aerobic (with oxygen) environment but create ethanol in an anaerobic (without oxygen) environment.   In this instance the fermentation bucket was lidded but allowed to breathe for another 24 hours before an S-shaped bubble airlock was fitted to the bung-hole.   Within 5-7 days about 70% or ¾ of the fermentation should be accomplished.   At this point (or when the specific gravity reads between 0.990 and 0.998) the young wine should be transferred to another container, leaving the pomace and sediments behind.   Either fresh water or additional fruit juice (if extra was acquired and refrigerated) should probably be added to fill the secondary container.  This step is intended to reduce oxidation by limiting the amount of oxygen in contact with the wine.   Adding water to wine weakens it, however, while adding new juice might require the addition of more sulfites (which would stun the yeast).  The wine should be allowed to rest in the secondary for another 4 to 6 weeks or until it becomes clear.  At this point the wine can be bottled.

Advanced topics

Sulfites are added to wine at the time of bottling to keep it from spoiling or turning to vinegar later.   You don’t want to add too much sulfite to your wine however, because it has an obvious smell and taste.  Some people have allergic reactions to sulfites but in general, health concerns regarding sulfite levels in wine are unsettled.  The following link discusses how to accurately judge the proper sulfite level.  Should I add Campden tablets each time I rack my wine and how do I measure the level of sulfite in my wine?

This link can be ignored by the winemaking beginner but it is a good source of information.  The root URL leads to a fairly thorough homepage dedicated to winemaking.   Winemaking Additives and Cleansers

White wines will generally clarify sooner than red wines.  Racking is the preferred method for clarifying wine but when haziness in the wine persists, ‘fining’ or ‘clarifying agents’ can be employed.   Sparkolloid, Isinglass, egg albumen and gelatin are examples of positively charged finings whereas Bentonite and Kieselsol are negatively charged.   This link provides more information about fining agents.


In conclusion, making wine with the pomace rather than without it is an alternative method which can offer several advantages.   Firstly this method does not require a grape press or an antique food mill or grinder.  This process also offers options for modifying a wine’s flavor and color profile which would not be available by the press method.   The pomace once separated from the wine can be re-hydrated to make a second wine or the intrepid individual might choose to produce a fortified wine or pomace brandy by utilizing these normally discarded solids.



Antennas (simple radio #2)

* Note to self:  The time for a new post is long overdue but it is not as though I haven’t had other distractions to keep me occupied.  Last week for example I had to chase the same bear out of camp three separate times during the night.  The next morning it was determined that the bear had confiscated a roll of sausage, a stick of butter, a box of cookies and a bag of marshmallows.


Generally, any antenna that is used to receive RF (radio frequency) energy is capable of adequately transmitting that same RF.   Sprouting from the Italian word for the longest or central pole supporting a tent, “antenna” entered radio vernacular sometime after 1895, reportedly when Marconi (camping in the Alps) supported his radio’s aerial from such a pole.   Aerial and antenna are usually synonymous, and both are simply transducers: implements which convert one type of energy into another.   The word “aerial” however is sometimes used to refer only to a rigid vertical transducer.

* Antennae is a seldom-used plural form of the noun antenna, most frequently encountered when discussing bugs.  Depending upon the type of insect, antennae might be used to feel, hear, smell, or even to detect light.  Apparently male mosquitoes employ their antennae to hear female mosquitoes from as far as ¼ mile (400m) away.

Radio antennas are thought of as being directional or omni-directional.   A directional antenna radiates in, or receives from, one direction more than any other.   A vertical rod supposedly radiates equally in all horizontal directions, but no real aerial is perfectly isotropic (equally strong in every direction).   In the case of a vertical tower there is a blind cone or null lobe straight up and another straight down, where radiation is not sent and reception is absent.   In the same fashion, no antenna is perfectly directional.  A pictorial depiction of a directional antenna’s radiation pattern usually shows particular zones as elongated lobes: main lobes, back lobes, side lobes and null lobes.

Gain is a measure of an antenna’s ability to concentrate energy and is usually quoted for directional antennas.   Gain is the ratio of an antenna’s intensity in its favored direction relative to that of a hypothetically ideal isotropic antenna.  A low-gain antenna sends or receives signals from several directions, while a high-gain antenna is much more focused.   Both types have their advantages.   A high-gain antenna may need to be carefully aimed at its target to work.   That achieved, a high-gain antenna has a longer range than a low-gain type.   It’s a matter of conserving energy: less is wasted by radiating in useless directions.   Modern household satellite dishes for TV reception are examples of high-gain antennas.   Antennas on cell phones and Wi-Fi equipped computers, however, are low-gain types, which enables them to receive signals from many directions.
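Gain is normally quoted in decibels relative to that hypothetical isotropic radiator (dBi).  A minimal sketch of the conversion; the power ratios below are illustrative examples, not measured values:

```python
import math

def gain_dbi(power_ratio: float) -> float:
    """Convert a linear power ratio (relative to isotropic) to dBi."""
    return 10 * math.log10(power_ratio)

# A hypothetical high-gain dish concentrating 1,000x the power of an
# ideal isotropic radiator in its main lobe:
print(round(gain_dbi(1000), 1))   # 30.0 dBi
# A half-wave dipole concentrates about 1.64x, the textbook figure:
print(round(gain_dbi(1.64), 1))   # 2.1 dBi
```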


The parabolic-shaped antennas used for satellite TV and radar are usually associated with microwave frequencies.   The first parabolic antennas were constructed, however, over 120 years ago, when Heinrich Hertz used them to prove the existence of electromagnetic waves.   The dish or parabolic element can be made of mesh, wire screen, sheet metal or mirror.   The dish is only a passive device; a reflector that collects signals and bounces them towards the active (cable-connected) feed.   Monstrously huge parabolic antennas are used for radio telescopes.   Radio telescopes can determine the composition of molecular clouds in space because, when excited, individual molecules rotate at discrete rates and emit radio energy as they do so.   Carbon monoxide likes to emit at 230 GHz, for example.   These telescopes can be used to study all sorts of things:  black holes, radio-emitting stars, radio galaxies, quasars, pulsars, gamma-ray bursts, supernovas and so on.   They can be used to track satellites, do atmospheric studies or receive radio communications from distant traveling spacecraft like Voyager 2.

*  The VLA (Very Large Array) radio astronomy observatory is located in a remote area of New Mexico, east of Pie Town, N.M.  The array is made of 27 independent parabolic dishes that stand about 10 stories high (82’ or 25m) and are visible from space as little white dots.   Each dish weighs 209 metric tons (about 230 U.S. tons) and is mounted on a robust doubled rail system (two parallel sets of standard-gauge tracks) so that it can be moved.  The rails are configured in a “Y” shape.  To focus on an object or area in space the 27 dishes expand from a minimum of 600m at center to a maximum baseline of 22.3 miles.  These antennas can listen to a large chunk of the radio spectrum (from 74 MHz to 50 GHz / wavelengths 400 cm to 0.7 cm).  Computers are used to correlate the data from each dish into a single map; the VLA observatory itself is called an “interferometer”.  Occasionally the VLA is brought online to link with other radio telescopes around the country to form an even larger (5,351 miles long) baseline called the VLBA (Very Long Baseline Array).  These other antennas are located in Brewster, WA, Kitt Peak, AZ, Los Alamos, N.M., Owens Valley, CA, Fort Davis, TX, North Liberty, IA, Hancock, N.H., Mauna Kea, HI, and St. Croix, U.S. Virgin Islands.  On occasions when radio telescopes in Arecibo, Puerto Rico, Green Bank, WV, and Effelsberg, Germany join in, the whole affair is called the High-Sensitivity Array.
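An interferometer’s resolving power improves as its baseline grows, roughly as wavelength divided by baseline.  A back-of-the-envelope sketch; the 43 GHz observing frequency below is an assumed example, not a quoted VLA figure:

```python
import math

C = 299_792_458  # speed of light, m/s

def resolution_arcsec(freq_hz: float, baseline_m: float) -> float:
    """Rough diffraction-limited resolution, lambda/baseline, in arcseconds."""
    wavelength = C / freq_hz
    return math.degrees(wavelength / baseline_m) * 3600

# VLA at its widest spread (22.3 mile maximum baseline) observing at 43 GHz:
print(round(resolution_arcsec(43e9, 22.3 * 1609.34), 2))  # ~0.04 arcsec
```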


Phased array radar antennas like the flat panel above actually house many small, evenly spaced aerials.  The phase of the signal to each individual aerial is electronically controlled, so that the collective beam from all the little aerials can be amplified and focused in a specific direction almost instantly.   Quicker and more versatile than mechanically rotating antennas because they require no movement, phased arrays are also more reliable and require little maintenance.   Limited phased array radars have been around for 60 years but recent improvements and affordability in electronics have made them more commonplace.   Most new military radars being built today are phased array systems.
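The steering works by feeding each element a progressively larger phase delay.  A sketch of the standard per-element phase relation (Δφ = 2π·d·sinθ/λ); the 10 GHz panel and half-wavelength spacing are hypothetical numbers for illustration:

```python
import math

C = 299_792_458  # speed of light, m/s

def element_phase_deg(freq_hz: float, spacing_m: float, steer_deg: float) -> float:
    """Progressive phase shift (degrees) between adjacent elements needed
    to steer the main beam steer_deg away from broadside."""
    wavelength = C / freq_hz
    phase_rad = 2 * math.pi * spacing_m * math.sin(math.radians(steer_deg)) / wavelength
    return math.degrees(phase_rad)

# A hypothetical 10 GHz panel with half-wavelength element spacing,
# steering the beam 30 degrees off broadside:
spacing = (C / 10e9) / 2
print(round(element_phase_deg(10e9, spacing, 30.0), 1))  # 90.0 degrees per element
```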

* RADAR is an acronym coined during WWII by the U.S. Navy, from “Radio Detection And Ranging”.  Before that, however, the British were calling the same thing RDF (Range and Direction Finding).  The most common bands used for radar are microwave bands (at the upper end of the radio spectrum between 1 GHz and 100 GHz – the L, S, C, X, Ku, K and Ka bands).  Radars used for very long-range surveillance however might use longer VHF frequencies starting at 50 MHz or UHF frequencies between 300 and 1,000 MHz (1 GHz).


Omitting the simple aerial, some commonly encountered antenna shapes are shown above.  The most basic antenna type is perhaps the “quarter wave vertical” (where the length of the aerial is ¼ of the targeted wavelength).   The simplest and most commonly encountered antenna, however, is probably the “dipole”.   A dipole antenna is essentially just two elevated wires pointing in opposite directions.   A dipole is fairly omni-directional unless its axis is parallel to the target emission.  A monopole antenna is formed when one half of a dipole is replaced with a ground plane perpendicular (at a right angle) to the remaining half.   A whip antenna correctly installed on a car, for example, uses reflected radiation from the automobile’s body (the ground plane) to mimic a dipole.  In this instance the monopole will have a greater directive gain and a lower input resistance.
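The quarter-wave relationship is easy to work out with the basic wavelength formula; the CB-radio example frequency and the ~0.95 shortening factor below are illustrative assumptions:

```python
C = 299_792_458  # speed of light, m/s

def quarter_wave_m(freq_hz: float, velocity_factor: float = 0.95) -> float:
    """Physical length of a quarter-wave element.  Real conductors resonate
    slightly shorter than the free-space length, hence the ~0.95 factor."""
    return velocity_factor * (C / freq_hz) / 4

# CB channel 19 (27.185 MHz) -- compare with the classic 102-inch whip:
print(round(quarter_wave_m(27.185e6) / 0.0254))  # ~103 inches
```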

Grounding provides a reference point from which changes in waveform can be detected.  A radio tower constructed to transmit at AM frequencies, for example, must be grounded or be compensated for its lack of ground, and its height or element length is determined by the wavelength.  Certain soils allow good grounding to earth but others do not.  In the absence of a good ground, an antenna can simulate one by adding drooping radials (additional elements hanging at 45°).  A typical Marconi antenna is a perpendicular ¼ wave aerial with a proper ground (perhaps the soil is moist, marshy, full of iron ore or otherwise conductive).  In this case the ground acts to provide more signal, adding the missing quarter to mimic a full half-wavelength antenna.   Often two or more quarter wave antenna towers will be seen in the same vicinity.  Usually a group of similar towers like this creates a directional array that transmits greater power in a certain direction.  Since AM broadcast (U.S.) wavelengths range from roughly 1,800 ft at the bottom of the band down to under 600 ft at the top, it would be prohibitively expensive to erect a full-length or even half-length vertical tower to hold up the element.  For economic reasons some large transmitting antennas are therefore laid out and polarized in the horizontal plane.

The folded dipole is a variation of the simple dipole.  Folded dipoles are about the same overall length as a standard dipole but provide greater bandwidth, have higher impedance (around 300 Ω, versus roughly 73 Ω for a simple half-wave dipole) and can often provide a stronger signal.

Loop antennas are generally used to conserve space.  The old TV set-top “rabbit ears” often incorporated a loop in addition to the two telescoping, adjustable dipole elements.  Loops respond to the magnetic field of a radio wave rather than the electric field.  A loop induces very small currents on each side of the loop, and the difference between the two must usually be amplified before any useful signal is fed to the receiver.   Loop antennas are very inefficient.  One useful property of the loop, however, is that it is very directional: it picks up signals when positioned along one axis, but not another.  Most direction-finding radios incorporate a loop antenna.   A loop by itself can determine the axis of a signal’s radiation but not forward from backward.   Direction-finding radios were, and are, used in aircraft and in boats or ships at sea for navigation.  Modern civilian aircraft usually have an ADF (Automatic Direction Finder) box attached to a loop and sensing antenna combination.  In earlier days the loop was manual (turned by hand) rather than automatic.  The non-directional sensing aerial on a small aircraft might be a simple wire running from the tail forward to the cabin.   The ADF’s electronics compare the two antennas (directional and omni-directional) to determine the signal’s phase (+/-) and therefore forward from backward.

Loopstick antennas (using ferrite rods) found in many small AM radios are actually examples of loop antennas.  Today “DX-ers” and radio hams might construct a shielded loop antenna, wrapping hundreds of feet of wire onto a spool.  Such an antenna has the advantage of containing a half-wave or even a full-wave element in a small space, but it is directional and introduces a new set of technical complications.

The Yagi-Uda antenna was invented by two Japanese scientists in the late 1920s.  Early airborne radar sets in WWII night fighters used Yagi antennas and were employed by almost everyone except the Japanese.  Yagi antennas have several parallel elements: a driven (active) element plus unconnected parasitic elements called directors and reflectors.  The parasitic elements help to improve gain and directivity.  The illustration shows a horizontally polarized, dual-band antenna once popular for analogue TV reception.  The whole thing is a combination of three separate Yagi antennas.  The longer elements are for VHF reception.  The shorter, closely spaced elements on the left half of the antenna were for UHF reception.  The shortest elements on the straight tail are directors and reflectors that act to improve the UHF gain and directivity.  The next longest elements (mounted on the vertical “V”) are UHF half-wave dipoles.  The longest elements on the right would be half-wave dipoles, arranged in a “phased array” to pick up multiple channels.  Wavelengths of the FM and VHF TV bands are somewhere between 11’ and 9’ long.  The longest single element in this example would be about 5.5 ft.

* Beware of salesmen selling snake oil.  There is no such thing as a digital TV antenna.  An antenna does not care how the wave is modulated; it does not distinguish between analogue and digital signals.  

* Although the 2009 digital transition cleared parts of the UHF TV band in the U.S., someone else will now transmit in those frequencies (probably AT&T or Verizon).  The front half of these old antennas is still useful for FM and HDTV reception if a local broadcaster is still transmitting on his legacy bandwidth.  The FCC is eager to grab this bandwidth and sell it to cell phone companies.

Horn shaped antennas are commonly used at UHF and microwave frequencies.   Parabolic antennas (where the dish itself is just a reflector) often use a horn as the ‘feeder’.   Advantages of horn antennas include simplicity, broad bandwidth, fair directivity and efficient standing wave ratios.  A few large horn antennas were built in the 1960’s to communicate with early satellites or for use as radio telescopes.

Small antennae


Radio-Frequency Identification (RFID) tags are growing alarmingly in popularity and in sophistication.  This unregulated and potentially invasive technology broadcasts identification and tracking information by using radio waves.  RFID tags generally come in three types these days:  active, passive and battery-assisted passive.  New technology has enabled the miniaturization of these devices to a point where individual ants can host their own personal transmitter.  Many pets and livestock are either internally or externally tagged with RFID chips.  At least one version of a subdermal microchip implant (an RFID transponder encased in silicate glass) about the size of a grain of rice (11mm x 1mm) was manufactured for use in humans until the year 2010.

A passive RFID tag requires an external electromagnetic stimulus before it can modulate its radio signal.   An active tag carries its own little battery and therefore transmits its signal autonomously.  A biologist might harness an animal like a sea turtle or wolf with this type of tag; it would broadcast for only a limited time but over a greater distance.   A battery-assisted passive (BAP, or semi-active) RFID tag sits dormant until stimulated, and its battery helps boost the range of the tag’s radio signal.

Even a simple, cheap passive RFID tag can hold up to 2 Kb of memory.  These contraptions use a simple LC tank circuit (a resonating inductor and capacitor).  Their antennas are designed to resonate within a certain radio spectrum.  Usually an RFID transponder resonates anywhere between 1.75 MHz and 9.5 MHz, with 8.2 MHz being the most popular frequency.   RFID chips usually work within traditional ISM (Industrial, Scientific and Medical) frequencies set aside for non-communications purposes.    ISM occupies reserved niches in the LF, HF, UHF and microwave frequencies that RFID tags can and do exploit, often without the need for a license.  The chip’s antenna picks up electromagnetic radiation from a reader or detector and converts it to electrical energy, which powers the microchip, which then reflects or broadcasts any information held in memory back over the same antenna.
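The resonant frequency of that LC tank follows directly from the inductance and capacitance.  A sketch with illustrative component values (not manufacturer specifications):

```python
import math

def resonant_freq_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an LC tank circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# A hypothetical 4.7 uH coil with an 80 pF capacitor lands near the
# popular 8.2 MHz EAS/RFID frequency:
print(round(resonant_freq_hz(4.7e-6, 80e-12) / 1e6, 2))  # 8.21 MHz
```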

* Passive tags, when used for electronic article surveillance, are usually deactivated by frying the capacitor with an overload of voltage induced by a strong electromagnet at the checkout counter.  A few seconds inside a microwave oven will also destroy most RFID chips.  Many retail items are “source tagged” at the point of manufacture, with the RFID device hidden within the packaging.  Since not every vendor employs the same type of EAS system (or perhaps any at all), alarms can go off when customers carry or wear these still-activated tags into other stores.  Some stores may deliberately not deactivate these tags; the motive of building a customer shopping database has been suggested.


Big & rare

Up until 2010 when a certain skyscraper in Dubai was completed, the tallest manmade structure ever built was a half-wave radio mast.   Standing at 646.38 m (2,120.6 ft) above the ground and perched upon 2 meters of electrical insulator, this tower broadcast longwave radio (@ 227 kHz and later 225 kHz) to all of Europe, North Africa and even to parts of North America.   It was used by Warsaw Radio-Television (Centrum Radiowo-Telewizyjne) from 1974 until it collapsed in 1991.

The notorious ‘Woodpecker’ radio signal interfered with worldwide commercial and amateur communications and international broadcasting stations for about 13 years.  Transmitting with about 10 megawatts of power from an antenna about 50 stories high and a third of a mile long (150m tall x 500m wide), the original Duga-3 antenna was nicknamed “Woodpecker” for the repetitive tapping interference it made.   It was using protected frequencies set aside for civilian use.   Operating from 1976 to 1989, the Woodpecker now resides within the 30-kilometer-diameter exclusion zone surrounding the Chernobyl power plant.  The Chernobyl disaster occurred in April 1986 but apparently the Woodpecker continued to operate for another three years.

There has been varied speculation about the purpose of the Duga-3 broadcast, including intentional broadcast interference, mind control experiments and weather manipulation.   These speculations are not without precedent.   The most plausible explanation of the Woodpecker signal, however, is that it was simply a Soviet over-the-horizon (OTH) radar intended to detect ICBMs at long range by bouncing its signal off the ionosphere.  Apparently the Woodpecker was arrayed with other OTH systems like Duga-2 (also in Ukraine) and a second Duga-3 built in eastern Siberia, pointed toward the Pacific.

A couple of videos filmed at this antenna should provide an appreciation for its scope and scale.

Climbing up the Russian Woodpecker DUGA 3 Chernobyl-2 OTH radar

Base jumpers sneaking into the ‘Zone of Alienation’ to jump from the antenna.


* During the ‘Cold War’ the term “international broadcasting” described broadcasts aimed at foreign audiences only.   For 60 years now, RFE/RL (Radio Free Europe and Radio Liberty) have been spreading anti-communist propaganda and psychological warfare behind the ‘Iron Curtain’ using shortwave, medium wave and FM frequencies.  It would stand to reason that the Soviets might have wished to retaliate or block such popular broadcasts.   Although mind control by radio signal seems very far-fetched, the Soviets are accused of having focused microwave radiation toward the U.S. embassy in Moscow for many years.    Perhaps the Soviets were attempting to slowly cook the Americans.  A more feasible explanation is that the microwave energy was being used to stimulate passive covert “bugs” hidden within the embassy.  In 1952 such a covert listening device, now known as a passive cavity resonator, was discovered inside the U.S. Ambassador’s Moscow residence.  This infamous creation, known as “The Thing”, was designed by the Russian engineer and physicist Lev Sergeyevich Termen and performed its espionage unnoticed for 6 or 7 years.

* Weather manipulation using radio is theoretically feasible and supporting information will be included shortly.

Extremely low frequency (ELF) is an electromagnetic radiation range with frequencies from 3 to 30 Hz and wavelengths between 100,000 and 10,000 kilometers (62,137 to 6,213 miles).   Since ELF frequencies can penetrate significant distances into the earth and seawater, they have been used by the U.S., Soviet/Russian and Indian navies to communicate with submarines at sea.   The British and French apparently also constructed and experimented with ELF antennas.   Because of the extreme wavelengths, sending antennas need to be very large, and the few examples that do exist are buried in the ground.  ELF transmissions were, or are, limited to a very slow data rate (just a few characters per minute) and are usually one-way transmissions, due to the impracticality of a submarine trailing an aerial long enough to send a reply.   The U.S. Navy transmitted ELF signals between 1985 and 2004 from one antenna located in the fields of Wisconsin and another located in Michigan.   Due to environmental impact concerns involving everything from farmers worried over their livestock’s behavior to disoriented whales beaching themselves en masse, the U.S. Navy abandoned its ELF effort.  They use something better now anyway.
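The scale problem is easy to appreciate from the basic wavelength formula (λ = c/f): even at the top of the ELF band, a quarter-wave element would span a continent.

```python
C = 299_792_458  # speed of light, m/s

def wavelength_km(freq_hz: float) -> float:
    """Free-space wavelength in kilometres."""
    return C / freq_hz / 1000

# At 30 Hz, the very top of the ELF band:
print(round(wavelength_km(30)))      # 9993 km full wavelength
print(round(wavelength_km(30) / 4))  # 2498 km even for a quarter-wave element
```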

* Miners and spelunkers can use technology called through-the-earth communications which utilizes the (higher than ELF) ultra-low frequency (ULF) range between 300–3,000 Hz.  

Plasma is conductive, ionized air or gas.  Using arrays of antennas attached to powerful radio transmitters, ionospheric heaters are used to study and modify plasma turbulence and to affect the ionosphere.   Several of these ionosphere research facilities already exist (in Norway, Russia, Alaska, Japan and Puerto Rico) and are operated by organizations like SPEAR (Space Plasma Exploration by Active Radar), EISCAT (European Incoherent Scatter Scientific Association) and HAARP (High-frequency Active Auroral Research Program).   By heating or exciting an area of the ionosphere, air can be made to rise or to act as a reflector from which other radio transmissions can be bounced.  Theoretically then, ionospheric research could, should or already does allow for enhanced radio communications, surveillance, long distance communications with submarines, weather modification and perhaps eventually even the transport of natural gas from the Arctic without the use of pipelines.  The feasibility of altering the course of the jet stream or of steering the course of a hurricane seems very real.  Readers wishing to learn more about this subject can find some information on the Internet.   They could start by following these two links:

Ionospheric Heaters Around the Globe – HAARP isn’t Lonely

Weather Warfare



Nomenclature in the world of knots is inconsistent in any language.  Within English some would stipulate that the tangles of cordage we commonly call knots should actually refer to only those things that are neither bends nor hitches.   Ideally a bend should join two ropes or lines together, whereas a hitch should attach a line to a post, ring, rail or something.  In general however, the term knot is used to encompass all three.


Some fundamental knot component terms include “working or tag end”, “standing line”, bight and loop.  In a bight the end and the standing line are parallel but in a loop the working end crosses over the standing part.  Other knot terminology might include: braids, bindings, coils, dog, elbow, friction hitch, lashing, lanyard, locking tuck, messenger, nip, noose, round turn, plait, seizing, sling, splice, stopper, trick or whipping.  A knot that has a draw loop is said to be a slipped knot, which is not the same thing as a proper slip knot.  When tying shoelaces for example two draw loops or bights finish the knot and provide easy untying.


The simplest knot of all is the “Overhand knot”.  Every knot, once tied in a line of rope or cordage, reduces the static tensile strength or average breaking strength of that line when tension is applied.  The proportion of a knotted cord’s breaking strength relative to its unknotted strength describes a given knot’s “efficiency“.  Efficiency is about the only common, measurable, descriptive term shared between knots, bends and hitches.  Most knots have an efficiency between 40% and 80%.  The overhand knot (ABoK #514) has an efficiency rating of 50%, which is poor: when stressed it reduces the strength of a line by half.
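Put as arithmetic, efficiency is just the knotted-to-unknotted breaking-strength ratio; the 1,000 kg rope rating below is a made-up illustration:

```python
def knot_efficiency_pct(knotted_break: float, unknotted_break: float) -> float:
    """Knot efficiency: knotted breaking strength as a percent of unknotted."""
    return 100 * knotted_break / unknotted_break

# A hypothetical rope that breaks at 1,000 kg unknotted but at only
# 500 kg with an overhand knot tied in it:
print(knot_efficiency_pct(500, 1000))  # 50.0
```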

Several knots we are familiar with are ancient.  Long ago prehistoric fishermen were using knots to make gill, casting and trawling nets. In addition to practical knots, the ancient Tibetans, Chinese and Celts contemplated some very intricate and elaborate decorative knots.

There is by no means an authoritative categorization or listing of all knots.  Growing in acceptance, the closest thing to an authoritative list of working knots might be Clifford W. Ashley’s illustrated encyclopedia of knots.   First published in 1944, The Ashley Book of Knots lists and numbers more than 3,800 basic knots, but this does not even come close to enumerating all the variants and ornamentals in existence.  There is a lively online forum on almost every subject related to knots, hosted by the International Guild of Knot Tyers.  There is also a quick and handy online knot index which features images of some of the more common working knots.


* A tangential detour: Knot Theory

Lest the reader assume that knots are an overly simplistic or entirely trivial subject they should realize that the future advancement of computing may rely upon an underlying study of knots.  The speed of the fastest computers is approaching a limit due to the finite speed of the electron itself.  Any increased computing speed in the future may depend upon quantum field theory and statistical mechanics; mathematics that sprouted from a topology known as “knot theory” or the mathematical study of knots.  Knot theory is often applied in geometry, physics and chemistry. Topology is concerned with those properties that don’t change when an object is continuously stretched, twisted or deformed.  Topology involves set theory, geometry, dimension, space and transformation.  Topology studies spatial objects (objects that occupy space), the space-time of general relativity, knots, fractals and manifolds.  A mathematical knot is one where the ends are joined together to prevent it from becoming undone.  Inspired by real world knots, the founders of knot theory were concerned with knot description and complexity.  They created tables of knots and links (knots of several components entangled together).  Over 6,000,000,000 knots have been tabulated to date and obviously concise tabulation would be a task for a machine and not a human.





A surprising number of people are unfamiliar with knots or cannot tie a decent one, though such a skill can occasionally prove quite handy.  A repertoire of only a dozen or so well-chosen knots will stand the survivalist or Boy Scout in good stead with his contemporaries.  An effective working knot should have practical applications, should be simple to tie and easy to remember, and in most instances should be easy to untie.  My subjective list of the six most important and effective working knots includes the slipped slipknot, bowline, figure-8 (or Figure of Eight loop), clove hitch, Prusik knot and the trucker’s hitch.   The clove hitch and Prusik knots are fundamental in that several useful variations have been built upon them.


The simple slipknot tightens as the hauling end is pulled and can become very tight and difficult to untie.  By “slipping” the knot with a bight or draw loop however, even the tightened knot will fall apart after a stout yank of the tag end.  This simple knot is appropriate in many applications including tying a hammock to a tree or fastening a horse halter to a post or rail so that it can be unfastened quickly in an emergency.


Many knots including the venerable bowline can be “slipped” in such a fashion.  For those people who encounter a mental block when trying to remember how to tie a bowline, there is an easily remembered right-hand–twist method to use.


There are many instances when a loop in the middle of a line is called for.  As an example, for safety a mountain climber might tie himself to a middleman’s knot in the center of a climbing rope.  While a simple overhand loop might suffice in this application – it could become difficult to untie after being stressed.  The addition of another twist to the overhand loop results in the so-called Figure of Eight loop which is probably more efficient and much easier to untie.  Some might consider the Figure of Eight loop (or Flemish loop) preferable to comparable mountaineering knots like the Alpine Butterfly, merely because it is simpler and easier to remember.


The granddaddy of all “ascending knots” or “friction hitches” is the venerable Prusik knot, which was first created during WWI and named for its inventor.  The Prusik can be doubled (with 6 coils rather than 4) to produce more traction.  The younger Klemheist, also shown in the illustration below, is likewise popular with modern-day climbers.


Few good (simple) ascending knots for mountaineering can be tied with nylon webbing.  The Heddon and double Heddon knots shown next are exceptions that seem appropriate.


The Trucker’s hitch is an important and utilitarian cinching knot that is actually a compound construction of two other knots.  Disregarding friction, the Trucker’s hitch can tightly strap down loads on trucks, trailers, boats and pack saddles because it applies a 2:1 mechanical advantage.  The standing line employs a ring, carabiner or middleman’s loop while the cinch is tightened with the tag end.  After the cinch is drawn tight the pressure is held by pinching the bight with one hand, before finishing with a simple slipped overhand knot.
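The pulley arithmetic behind that 2:1 claim can be sketched as follows; the pull force and the friction-loss figure are illustrative assumptions, since a real rope-on-rope loop loses a good deal to friction:

```python
def cinch_tension(pull: float, advantage: float = 2.0, efficiency: float = 1.0) -> float:
    """Tension produced in the standing line by a trucker's hitch.
    advantage=2 is the theoretical 2:1 ratio; friction at the loop
    (efficiency < 1) eats into it."""
    return pull * advantage * efficiency

print(cinch_tension(25.0))                  # 50.0 -- frictionless ideal
print(cinch_tension(25.0, efficiency=0.7))  # 35.0 -- a more realistic guess
```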


The final knot (of the six most crucial selected here) is the excellent, general-purpose ‘clove hitch’.  It is mentioned last because many admirable variations have been conceived from it, and illustrations of a few of those will follow.


Excellent for sacks and trash bags, the ‘constrictor knot’ differs only slightly from the clove hitch but holds more firmly.  It can be hard to untie unless intentionally slipped with a draw loop.



When wrapped around a tent stake the “taut line hitch” below is useful for tensioning a tent guy line.  To the right of that is a useful clove hitch variant that has no recognized common name or ABoK number.  Tentatively referred to as the wireline hitch here, the grip of this variant is superior to the taut line version.



A few more knots deserving honorable mention

Strong and efficient, the ‘Palomar knot’ is useful for attaching large hooks, lures or sinkers to a fishing line.


The “Surgeon’s loop” is another simple and effective knot for attaching small lures or flies to a fine monofilament fishing line.  Knots like the surgeon’s loop and Palomar are cut away rather than untied after they serve their purpose.


The “Ossel hitch” is an ancient knot; no one knows how old. It is or was a simple, secure and effective knot used to suspend gill nets from a larger line.  Strangely the ossel hitch is not recognized in Ashley’s encyclopedia.  This may be because “ossel” is a Scottish word and was not that familiar when Ashley illustrated his book.  There is a similar but different knot in the encyclopedia known as the “Netline Knot” (ABoK #273) that hails from Cornwall on the southern coast of England.


This simple Anchor Bend variant below is easily remembered and is much more secure than the parent knot.


Finally, this old page construction below introduces a couple of utilitarian gripping hitches.




This is a blog post and not an encyclopedia, therefore most knots cannot be shown.  Returning to the off-topic tangent of knot mathematics, we come to a group of abstract ideas known as graph theory, which foreshadowed or laid the foundation for topology.  The father of graph theory was a Swiss mathematician and physicist named Leonhard Euler.   Euler discussed a notable historical problem in mathematics called “The Seven Bridges of Konigsberg”.  The unsolvable problem was to walk through the city, crossing each bridge once and only once.  What is called Euler’s solution became the first theorem of planar graph theory.
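Euler’s theorem boils down to counting: a connected graph can be walked edge by edge, using every edge exactly once, only if zero or two vertices have an odd number of edges.  A sketch of that parity test applied to the seven bridges (the land-mass labels below are arbitrary):

```python
from collections import Counter

def has_euler_path(edges) -> bool:
    """True if a connected undirected multigraph admits a walk that uses
    every edge exactly once (i.e. zero or two odd-degree vertices)."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    odd = sum(1 for d in degree.values() if d % 2)
    return odd in (0, 2)

# The Seven Bridges of Konigsberg: north bank, south bank, two islands.
bridges = [("N", "A"), ("N", "A"), ("N", "B"),
           ("S", "A"), ("S", "A"), ("S", "B"), ("A", "B")]
print(has_euler_path(bridges))  # False -- all four land masses have odd degree
```

The same parity argument disposes of the five-rooms-and-sixteen-doors puzzle mentioned later: model each room (and the outside) as a vertex and each door as an edge, count the odd-degree vertices, and the walk is impossible.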


* Back in 1735 the seven bridges of Konigsberg were real, and that city was part of the Prussian Empire and bordered Poland on the Baltic. Konigsberg, Prussia became Kaliningrad, Russia (54°42’12” N, 20°30’56”E) shortly after WWII. After the breakup of the Soviet Union, Kaliningrad and its surrounding province became physically separated from the rest of Russia. After the war and the ravages of time, only two of the original bridges from Euler’s time survive. Five bridges now connect the city and the islands formed by the Pregel River.

A similar conundrum that Euler might have considered had he the chance is the hypothetical house with five rooms and sixteen doors. The object is for a person to walk through each door once, but one time only.


Finally we come to the perplexing Mobius strip and Trefoil knot. The naughty Mobius strip is something of a paradox. The single edge of a Mobius strip is topologically equivalent to the circle and mathematically it is non-orientable.


A physical Mobius strip can be constructed from a belt or strip of paper.  One simply grabs the two ends and gives one end a half twist before taping the two together in a loop.  The resulting surface has only one side and one edge.  Imagine a miniature gravity-defying car driving around the surface of the strip.  If the car began on the top side of the surface, then its path after one revolution of the loop would place it on the bottom side.  Consider a bug dragging a paintbrush while walking along the right edge of the strip: it must make two revolutions of the loop before arriving back where it started.  We perceive two edges to the strip but realize there is only one.

M.C. Escher incorporated the Mobius strip in some of his graphical art.  In the real world, recording tapes and typewriter ribbons have been spliced in a continuous-loop, Mobius-strip fashion to double playing time or ink capacity.  Large conveyor belts have also been wrapped the same way, to increase belt life by doubling the wear surface.  The Mobius strip has several curious properties.  A continuous line drawn down the middle of the loop will be twice as long as the loop itself.  Cutting the paper loop along this centerline produces not two strips but one long loop with two twists and, finally, two edges.  Cutting this longer strip again as before produces two intertwined strips, each with two full twists.


In topology the "unknot" is a circle and the "trefoil knot" is the simplest true knot. Named after the plant that produces the three-leaf clover, the trefoil can be tied by joining together the two loose ends of a common overhand knot, which results in a knotted loop.  Although it doesn't look very convincing when done with paper, a trefoil knot can also be constructed by giving a band of paper three half-twists before taping the ends together and then dividing it lengthwise.


Solar energy at home

Most of the energy we earthbound humans consume comes, directly or indirectly, from the sun; exceptions include atomic fission and some types of chemical reactions.  The fuel oil, coal and natural gas that civilizations burn exist because of the Sun's earlier contribution to the formation of those hydrocarbons.  Wind currents are caused by the sun warming the air: as thermals rise they are displaced by denser, colder air.  Likewise the sun's energy is ultimately responsible for delivering snowmelt and rainwater to higher elevations, creating the potential energy needed to power watermills and hydroelectric generators.  On a small personal scale, more individuals are learning to exploit the sun's energy to heat their homes, generate their own power or cook their food.  The two main methods of acquiring power from the sun are photovoltaic (PV) cells and thermal energy collectors.

Roughly half of the energy in sunlight is absorbed or reflected before it even hits the surface of the earth.  The glazing or protective substrate in a solar collector further diminishes the amount of energy obtained.  Even the best solar panels can be considered inefficient.  The amount of energy collectible by a given solar panel is subject to many variables.  Whether talking about heat or electricity, we generally measure energy in units of Watt-hours (energy = power x time).  Under the best and brightest conditions, sunlight delivers about 1,000 Watts per sq. meter at the surface, but under realistic or averaged conditions the expectation might be only half that.  During the daylight hours of a normal summer day at 40 degrees latitude, a solar collector would do well to average 600 Watts per sq. meter.  In wintertime at the same location, the same collector might gather an average of only 300 Watts per sq. meter.  Averaged over the whole earth and a whole mean solar day (24 hours), the collectible solar energy is only about 164 Watts per square meter.
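The "energy = power x time" relation can be sketched in a few lines of Python. The W/m² averages below come from the paragraph above; the 2 m² collector area and the daylight-hour counts are illustrative assumptions:

```python
# Energy gathered by a collector: energy (Wh) = power (W) x time (h).
def daily_energy_wh(avg_watts_per_m2, area_m2, hours):
    return avg_watts_per_m2 * area_m2 * hours

# A hypothetical 2 sq-meter collector, averaging 600 W/m^2 over
# 8 summer daylight hours vs. 300 W/m^2 over 5 winter hours:
summer = daily_energy_wh(600, 2.0, 8)   # 9600 Wh = 9.6 kWh
winter = daily_energy_wh(300, 2.0, 5)   # 3000 Wh = 3.0 kWh
print(summer, winter)
```

The seasonal gap (here more than three to one) is why sizing a system for winter loads matters more than its summer performance.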


Overview of PV

In a photovoltaic solar cell, an electrical charge is generated when photons excite electrons in a semiconductor.  There are many types of solar cells, and new developments in the technology will hopefully lead to more affordable photovoltaic panels.  Note that the warmer a photovoltaic panel gets, the less power it produces: temperature doesn't change the amount of solar energy the panel receives, but it does affect how much electrical power you get out of it.
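The temperature effect can be approximated with a simple linear derating model. The -0.4%/°C coefficient and the 25 °C rating point below are typical crystalline-silicon datasheet figures, not numbers from this post:

```python
def derated_power(rated_watts, cell_temp_c, coeff_per_c=-0.004, stc_temp_c=25.0):
    """Panels are rated at 25 C (standard test conditions); crystalline
    silicon output falls roughly 0.4-0.5% per degree C above that.
    The -0.4%/C figure here is an assumed typical value."""
    return rated_watts * (1 + coeff_per_c * (cell_temp_c - stc_temp_c))

# A 200 W panel with cells baking at 55 C on a hot roof:
print(round(derated_power(200, 55), 1))  # 176.0 W, about a 12% loss
```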

The most common photovoltaic solar cells are made by chemically 'doping' a very thin wafer of otherwise pure monocrystalline (single-crystal) silicon.  In a delicate and complicated fabrication process, wafers of silicon are cut or sliced as thinly as possible (before they crack), to a thickness of about 200 micrometers, roughly the width of a typical moustache hair.  Since each individual solar cell produces only about 0.5 V, several cells must be wired in series to produce a useful photovoltaic array.  Mostly produced in China, commercial photovoltaic solar panels are very expensive, averaging $2 to $3 for every watt of capacity.  An average U.S. residence consumes something like 30.6 kWh per day, 920 kWh per month or 11,040 kWh per year.  In a country like the U.S., where grid power is comparatively cheap (averaging 10 cents per kWh in 2011), it would take a very long time for photovoltaic panels producing equivalent energy to pay for themselves.  In the meantime, an individual with a "do it yourself" mentality can more directly utilize solar energy by fabricating his own contraptions to collect heat.
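To see why payback takes "a very long time", here is a rough back-of-the-envelope sketch. The 0.5 V per cell, $2.50/W, 11,040 kWh/yr and $0.10/kWh figures come from the paragraph; the 7,500 W system size is a hypothetical assumption chosen to roughly cover that annual consumption:

```python
CELL_VOLTS = 0.5            # per the post: ~0.5 V per silicon cell

def cells_needed(target_volts):
    """How many cells in series to reach a target panel voltage."""
    return int(target_volts / CELL_VOLTS)

def payback_years(system_watts, dollars_per_watt, kwh_per_year, dollars_per_kwh):
    """Years for avoided grid purchases to repay the panel cost."""
    cost = system_watts * dollars_per_watt
    annual_value = kwh_per_year * dollars_per_kwh
    return cost / annual_value

# A nominal "18 V" battery-charging panel needs 36 cells in series:
print(cells_needed(18))  # 36
# Hypothetical 7,500 W array offsetting 11,040 kWh/yr at $2.50/W and $0.10/kWh:
print(round(payback_years(7500, 2.50, 11040, 0.10), 1))  # ~17 years
```

Seventeen-odd years, ignoring financing, inverters and degradation, is the "very long time" the paragraph alludes to.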


Solar Ovens

Although it would not be considered a quick process, it is easy to cook food with direct sunlight.  Slow cooking oftentimes creates superior dishes with the best blend of flavors.  Some heat-trap type solar ovens can easily produce temperatures over 250 deg F, sometimes up to 350 deg F.  No matter what type of oven is used (electric, gas, solar, smoke pit or Dutch), a good cook knows that slow cooking with modest heat over a long period will make an otherwise tough piece of meat more tender.


Essentially there are only two types of solar oven: those that entrap heat and those that reflect it.  To form a simple 'heat trap', a cardboard or wooden box can be insulated, spray painted black inside and then lidded with glass or clear plastic.   It helps when the cooking vessel itself is dark, to better absorb solar heat; it also helps when pots are thin and shallow with tight-fitting lids.  Even glass mason jars make useful solar cooking utensils: they can be spray painted black and their lids unscrewed a bit to allow vapor pressure to escape.   It might seem that parabolic or concave reflecting cookers would be complicated to construct, but some examples have been made by simply surfacing the inside of umbrellas or parasols with aluminum foil.  Mirrored Mylar or similar BoPET films are also useful materials in this type of application.  Doubtless many examples or 'instructables' detailing the construction of reflective solar ovens exist elsewhere on the Internet.  Some specially constructed reflective ovens claim to be able to reach temperatures of nearly 600 deg F.

The importance of cooking some foods, especially meats, is to kill bacteria.  Bacteria multiply fastest between about 41 deg F and 140 deg F, the food-safety 'danger zone'.  The internal temperature of meats needs to reach a range between 140 deg F and 165 deg F to be considered safe.  Seafood needs to be cooked to 145 deg F or hotter.  To rid poultry of salmonella, it must reach 165 deg F on the inside; egg dishes should reach the same temperature.  Trichinosis is halted by cooking pork to about 160 deg F.   Ground beef should reach 155 deg F for safety.


Solar stills

Back in the 1960's a pair of PhDs working in the soil hydrology laboratory for the USDA invented a solar evaporation still that could pull useful drinking water out of the ground.  Even in the arid desert around Tucson, AZ where they worked, they realized that the soil entrapped useful moisture.  Such a solar still is made by digging a pit in the ground, placing a collection pot in the bottom and covering the hole with a sheet of plastic; a small stone set in the center of the sheet forms a low point, so that condensation runs down and drips into the pot.  Additional moisture can even be gathered by placing green vegetation under the tarp.

It seems that the first evaporative solar stills were invented back in the 1870's to create clean drinking water for a mining community, as explained in an earlier post in this same blog named "The Nitrate Wars".   This same distillation principle, where moisture is evaporated and the condensate then collected, is employed in affordable, plastic-vinyl inflatable stills that equip small boats and survival craft at sea.  Where stranded fishermen and sailors once faced death by dehydration, they now have the opportunity to create the drinking water they need from seawater.  Muddy or brackish, germ-infested groundwater can be reclaimed in the same way.


There are several possible techniques to employ and efficiency factors to consider when fabricating an evaporative solar still.  Obviously good direct sunlight is essential to efficient functioning.  The 'basin type' solar still is the most common type encountered and somewhat resembles a heat-trap solar oven.  In a 'tilted wick' solar still, moisture soaks into a coarse fabric like burlap and climbs the cloth before it eventually evaporates.  In higher latitudes 'multiple tray' tilted stills can be used, where the feed water cascades down a stairway of trays or shelves, allowing closer proximity to the glass and enabling steeper tilt angles for the panel to capture optimum sunlight.



Other liquids besides drinking water can be refined in an evaporative solar still.  Ethanol can be, and has been, concentrated from mashes, worts, musts or washes using a solar still.   Since a distiller usually desires more direct control over temperatures, however, he might consider solar stills practical only for so-called "stripping runs".   Some of the earliest perfumes were created from fragrances collected by distillation.   Soaking the wood, bark, roots, flowers, leaves or seeds of some plants in water before distilling the mixture is a common way of obtaining aromatic compounds or essential oils.   Not all plant fragrances should be distilled, but eucalyptus, lavender, orange blossoms, peppermint and roses commonly are.   The lightest fractions or volatiles of petroleum (like gasoline) separate at temperatures available in solar stills, but the heavier ones will not.  Theoretically it should be possible to place crude oil into a solar still to separate out the gasoline and other light fractions.


Solar water & air heating

Most readers will have experienced how water trapped in a garden hose will get hot on a summer day.  Portable camp showers are simple black water bags, suspended at a little elevation and in direct sunlight to warm the water.


Where climatic conditions permit, people may employ gravity-fed or pump-pressurized waterlines and tanks on rooftops, or simply along the ground, to achieve the same solar water heating effect.  Others may construct or install dedicated solar water heating panels to heat swimming pool water or to pre-heat water before it enters their home's gas or electric water heating tank.


The construction of a solar water heater and a solar air heater can be very similar in concept.  Basically, air or water is conducted through pipes or conduits to a panel where the heat exchange takes place.  Copper pipe might be the most desirable material for a solar water panel because of its pressure-holding ability, resistance to corrosion and longevity.  Thin-walled pipes of cheaper metals can adequately transfer heat to air that passes through them.  A growing fad in the construction of homemade air-heating solar panels is to build the collector with empty aluminum beer or soda cans.  The tops and bottoms of the cans are punched or drilled out and the cans are glued together to form continuous airtight pipes.  The box that holds everything is well insulated (sides and bottom), and every interior surface exposed to sunlight is spray painted a dark, sunlight-absorbing color, preferably using a high quality, high-temp, UV-protected paint.  A transparent glazing (of glass, plastic, fiberglass, Mylar, acrylic, polycarbonate, etc.) is tightly sealed over the top of the trap.  A double or even triple layer of glazing is preferable to a single one to reduce heat loss.  While beer and soda cans are popular because of their availability and affordability, equally efficient collectors can be made from tin cans (made of the metal called tinplate), rain gutter downspouts, old aluminum irrigation pipes, single-walled stove pipes or even from bug screen like you'd find on a window.  This site, chosen from many that discuss solar heating with air, suggests that bug screen collectors are on par with soda can collectors and are possibly easier to construct.

In the choice of fan or blower used to push or pull air through the system, it is preferable to circulate a large volume of modestly heated air rather than a small quantity of thoroughly heated air.  Ideally a solar panel can raise the temperature of the air passing through it by as much as 50 or 60 degrees F.   In this type of collector an optimum airflow rate of 3 CFM per square foot of absorber has been suggested.  In general, the larger the solar air panel the better; small ones are probably not worth considering.  They should be built with quality paints, glazing and other components where possible, to resist corrosion and decomposition from sunlight and other climatic elements.
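Those airflow and temperature-rise figures let us estimate a collector's heat output, using the standard HVAC rule of thumb that sensible heat in BTU/hr is about 1.08 x CFM x temperature rise in °F. A sketch, where the 32 sq ft panel size is a hypothetical example:

```python
def collector_btu_per_hr(area_sq_ft, cfm_per_sq_ft=3.0, delta_t_f=55.0):
    """Sensible heat of an air stream: BTU/hr ~= 1.08 x CFM x dT(F).
    The 1.08 factor is a standard HVAC constant combining air density
    and specific heat at typical indoor conditions."""
    cfm = area_sq_ft * cfm_per_sq_ft
    return 1.08 * cfm * delta_t_f

# A 4 ft x 8 ft (32 sq ft) can-type panel at 3 CFM/sq ft and a 55 F rise:
print(round(collector_btu_per_hr(32)))  # ~5702 BTU/hr
```

Roughly 5,700 BTU/hr is in the range of a small electric space heater, which is why builders say small panels "are probably not worth considering."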

Pointing solar panels


For optimum efficiency any solar panel should face the sun at a perpendicular angle.  The position of the sun, however, changes constantly throughout the day.  Some institutions or uber-rich individuals might purchase solar trackers, which employ servo or stepper motors to keep photovoltaic panels aligned with the sun.  Such 'trackers' increase overall efficiency by increasing morning and afternoon light collection.  The rest of us, however, have to make do with permanently fixed or periodically adjustable panel mounts.  Normally, fixed panels in the northern hemisphere are aimed toward due (not magnetic) south.  Some owners of grid-tied solar photovoltaic panels, however, are deciding to aim their panels toward the west.


The effectiveness or efficiency of a given solar panel is definitely affected by its proper orientation to the sun, but as the sun moves around a lot, solar panels that do not automatically track its movement must seek a positional compromise.  The sun's apparent altitude in the sky changes throughout the year.  Because of the tilt of the earth's axis, the sun's noon altitude swings 47 degrees (plus or minus 23.5 degrees) between the summer and winter solstices, every 6 months.  Solar panels near the equator can be positioned parallel with the horizon and remain largely efficient by just pointing straight up.  The further a location is from the equator, the more vertical a panel's ideal tilt becomes.  Above the 45th parallel, vertically fixed solar panels mounted to the side of a building can perform admirably in the wintertime.  There is no one perfect tilt angle with which to keep a solar panel perpendicular to the sun's rays throughout the year.  This fact motivates some people with adjustable panel mounts to periodically climb onto their rooftops, wrench in hand, to refine panel tilt.  Others might wish to install a solar panel permanently in the best year-round average position and not worry about adjustments.

Older literature for solar panel installation might quote a rule of thumb where 15 degrees are added to the latitude for wintertime panel tilt, or 15 degrees subtracted from the latitude for summertime tilt.  A more modern set of calculations, mimicked and repeated often around the web, suggests wintertime tilts a bit steeper than that rule (capitalizing on midday rather than whole-day solar gathering) and summertime tilts a bit flatter (favoring whole-day rather than midday collection).

-To calculate the best angle or tilt for winter:

(Lat * 0.89) + 24º = ______   (The latitude is multiplied by .89 and added to 24 degrees)

-The best angle for spring and fall:
(Lat * 0.92) – 2.3º = ______

-The best angle for summer:
(Lat * 0.92) – 24.3º = _____

-The best average tilt for year round service:
(Lat * 0.76) + 3.1º = _____

For the purpose of illustration a latitude of 35 degrees North will be chosen.   Locations close to this latitude include the Strait of Gibraltar, Tunis, Tunisia; Beirut, Lebanon; Tehran, Iran; Kabul, Afghanistan; Seoul, Korea; Tokyo, Japan; and in America, cities along Interstate 40 or the old Route 66 (Raleigh NC, Memphis TN, Fort Smith AR, Oklahoma City OK, Albuquerque NM, Flagstaff AZ and Bakersfield CA).
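Plugging that example latitude into the four formulas above gives concrete numbers. A quick sketch:

```python
def panel_tilts(lat):
    """The rule-of-thumb tilt formulas quoted above, in degrees
    from horizontal for a given latitude (degrees North)."""
    return {
        "winter":      lat * 0.89 + 24.0,
        "spring/fall": lat * 0.92 - 2.3,
        "summer":      lat * 0.92 - 24.3,
        "year-round":  lat * 0.76 + 3.1,
    }

for season, tilt in panel_tilts(35.0).items():
    print(f"{season:>11}: {tilt:.1f} deg")
# winter ~55.2, spring/fall ~29.9, summer ~7.9, year-round ~29.7
```

Note that the year-round compromise (about 29.7 degrees) lands almost exactly on the spring/fall value, which is what you would expect from an annual average.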








Metrification for the masses

*  When they weren't lopping off people's heads during the revolution that began in 1789, reformers in France seized the opportunity to make all kinds of other sweeping changes.  In 1791, for instance, the French Academy of Sciences was instructed to create a new system of measurements and units.  For two centuries now the rest of the world has been browbeaten and cajoled into adopting this sublime system of weights and measures, a process called metrification.  While most nations have capitulated to the apparent intellectual supremacy or empirical advantages of the metric system, there are still some holdouts in the world.  After two centuries these non-metricated miscreants still drive the more rabid reformatory zealots of metrification nuts.  Perhaps there are logical reasons in a few instances, unattached to loyalty or laziness, that compel these non-metric holdouts to hang onto some traditional weights and measures.

*  Feeling particularly erudite, the reformatory French academics chose to base this metric system on natural values that were unchanging and reproducible, and to use numerical units based on powers of ten.  Unchanging natural values were hard to corral back in 1791, so the official definitions of all the basic metric units have undergone several changes since then.  The metre is the most fundamental metric unit, and from it the other units were originally derived.  American dictionaries, spell checkers and textbooks won't even spell the word right; technically a "meter" is just a measuring device.  If you're going to adopt French units you might as well swallow their spelling.   Like the non-metric nautical mile, the metre was originally conceived as a portion of the earth's circumference.


*  While the older nautical mile was defined as a minute (1/60th of a degree) of arc along a meridian of the Earth, the new metre was conceptualized as 1/10,000,000th of the meridional distance from the pole to the equator.  Even before the oblateness of the earth was appreciated, French surveyors in the 1790's determined a very fair approximation of what a metre should be; their result turned out to be only about 0.2 mm short of the ideal meridional definition.  Today most air and sea navigators still prefer non-metric nautical miles over kilometers, because when using charts (nonlinear, 2-dimensional Mercator projections or maps) the nautical mile's tie to minutes of latitude makes life a lot easier.

*  It quickly became self-evident that the intended international reproducibility of an accurate metre using the meridional definition was so impractical that a physical artifact had to be produced. In 1799 a platinum bar called the "mètre des Archives" was made and used as a copy reference.  In 1875 the "Convention du Mètre" or Metre Convention was instituted to oversee the development of the metric system.  Conceived at the same time, the CGPM ("Conférence générale des poids et mesures", or General Conference on Weights and Measures) was established to democratically coordinate international participation by holding meetings every 4-6 years.  Broad acceptance of metrification did not really take hold until after WWII and the push toward European integration. SI, or "Système International d'Unités", is today's official name for the metric system as ordained by the CGPM in 1960.

Confusion and inconstancy

*  There are inconsistencies in the metric system.  Redefinitions of the base units have been frequent.  The SI crowd has begrudgingly adopted non-decimal units like seconds of time because they can produce no better alternative.  The SI intellectuals have regularly discouraged the use of seemingly compatible units and nomenclature simply because they themselves did not originally create or sanction them.  These same intellectuals have also adopted redundant and unnecessary units and nomenclature when simpler alternatives already existed.  Some unpopular and clumsy-sounding SI units are floating around.

*  The currently approved MKS (metre, kilogramme, second) system of units supplanted the older CGS (centimeter, gram, second) system.  It was once simple to think of a gram as the weight of one cubic centimeter of water at the melting point of ice.  Although originally a base unit, the litre (or liter) is no longer even an official SI unit!  The kilogram originally equaled the mass of a litre (1,000 cubic centimeters) of that same cold, pure water.  Obviously these definitions were not good enough, because they no longer apply.  The kilogram is the only metric base unit that hasn't been redefined in terms of unchanging natural phenomena: the authoritative kilogram is an object!  You can't just produce an accurate kilogram in your laboratory in Timbuktu.  In a dark vault somewhere in Paris sits a precious SI-manufactured artifact: today's official kilogram is a cylinder of 90% platinum and 10% iridium alloy.  Where once the metre was defined as one ten-millionth of the distance between the North Pole and the Equator, it was eventually redefined as a multiple of a specific radiation wavelength.  Today the metre is officially defined as the distance traveled by light in a vacuum during 1/299,792,458th of a second.

*  The concept of time, and the replacement of legacy time units with suitable modernized counterparts, have vexed zealous metric reformers for two centuries.  Years, months, weeks, days, hours, minutes and seconds are not decimally related.  We are gifted with impressive-sounding terms like nanoseconds, kiloseconds or milliseconds, but the second of time was adopted by the metric system; it was not an original metric unit.  It was belatedly defined by SI as 1/86,400th of a mean solar day, and later redefined in terms of astronomical (ephemeris) observations.  Meanwhile, clocks regulated first by tuning forks and then by quartz crystals kept ever better time.  Today the SI second is officially defined as "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom".  Who knows how they'll redefine the second next year?

*  Whereas a regular U.S. 'short ton' weighs 2,000 lbs, the British Imperial 'long ton' or 'gross ton' typically used in shipping cargo weighs 2,240 lbs.  A "metric ton" or "tonne" weighs 1,000 kilograms, or 2,204.6 lbs.  When appending the prefix "kilo" to ton, things start to get confusing.  In terms of explosive force a kiloton might mean the equivalent of 1,000 metric tons of TNT.  As a unit of weight or mass, however, a kiloton might mean either 2,000,000 lbs or the same as a kilotonne (2,204,622.6 lbs).  A gigagram would equal a kilotonne, but that term is infrequently used.
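The ton confusion is easy to demonstrate with a few conversions (using 2.2046226218 lbs per kg):

```python
LBS_PER_KG = 2.2046226218

tons_in_lbs = {
    "short ton (US)": 2000.0,
    "long ton (UK)":  2240.0,
    "tonne (metric)": 1000.0 * LBS_PER_KG,   # ~2204.6 lbs
}

# "kiloton" as a mass is ambiguous: 1,000 short tons vs 1,000 tonnes.
kiloton_us = 1000 * tons_in_lbs["short ton (US)"]   # 2,000,000 lbs
kilotonne  = 1000 * tons_in_lbs["tonne (metric)"]   # ~2,204,622.6 lbs
print(round(kilotonne - kiloton_us))  # ~204623 lbs of ambiguity
```

A tenth of a "kiloton" of ambiguity, depending on which ton the writer meant.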

* The seven current hallowed SI base units are the metre, kilogram, second, ampere, candela, mole and kelvin.  The Kelvin scale is an absolute thermometric scale, but its units are not referred to as degrees.   We normally call increments of temperature "degrees" because of a decision made way back in 1724 by the German-born physicist D.G. Fahrenheit.   The Fahrenheit scale divides the range between the freezing and boiling points of water into 180 equal parts, like the degrees in geometry for half a circle.   D.G. Fahrenheit also invented the glass/mercury thermometer.   About two decades later, but still well before the French reforms, the Swedish astronomer A. Celsius borrowed Fahrenheit's idea but divided the same range into only 100 equal parts.  Originally Celsius's scale ran backwards, counter-intuitive to today's usage, but that was reversed after his death in 1744.   From 1744 to 1948 the units of what we now call the Celsius scale were better known as degrees "centigrade".   Eventually the Irish-born British physicist W.T. Kelvin came along with further suggestions for improvement.   The Kelvin scale begins at absolute zero; there is nothing colder.   To make the Kelvin (K) scale mesh with the decimalized Celsius scale, the triple point of water (where the gas, liquid and solid phases of water coexist in thermodynamic equilibrium) had to be defined as exactly 273.16 K.   In other words, for one of its base units the base-ten-loving SI / metric system relies on a value derived from a very inconvenient fraction (1/273.16, whose decimal value is an ungainly 0.003661).
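The relationships among the three scales reduce to simple arithmetic: 180 Fahrenheit steps span the same range as 100 Celsius steps, and the Kelvin scale is just Celsius shifted up by 273.15. A quick sketch:

```python
def f_to_c(f): return (f - 32.0) * 100.0 / 180.0   # 180 F steps = 100 C steps
def c_to_f(c): return c * 1.8 + 32.0
def c_to_k(c): return c + 273.15                   # 0 C = 273.15 K exactly

print(f_to_c(212))                # 100.0 (boiling point of water)
print(round(c_to_k(0.01), 2))     # 273.16 (the triple point of water)
print(c_to_f(-40))                # -40.0 (the two scales cross at -40)
```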

Discouraged, clumsy, slang or unneeded

*  Named for a Swedish physicist, the tiny increment of length called the angstrom is exactly equivalent to 0.1 nanometre, or 0.000 000 000 1 metres.  Mention of the non-Imperial, non-metric but internationally recognized angstrom is nevertheless officially discouraged by the SI's International Committee for Weights and Measures.  The small calorie was itself a pre-SI metric unit of energy, defined in the 1820's as the energy needed to raise 1 gram of water by 1º C; the dietary calorie (kilogram, large or food calorie) is 1,000 times larger.  The small calorie is obsolete, replaced by the official SI "joule".  Megabars, kilobars, bars, decibars, centibars and millibars of atmospheric pressure are not SI units; it takes 100,000 SI-legitimate pascals to equal one bar.  One bar is roughly equivalent to one standard atmosphere of pressure at sea level (14.69 psi or 101,325 pascals).  Meteorologists and weather reporters usually prefer to describe changes in air pressure in millibars rather than in the exactly equivalent hectopascals; it just sounds better.  In oceanography, during a descent from the surface, depth in metres and increased water pressure in decibars correspond nicely.

*  In a weak attempt to sound sophisticated, a scientific journalist might employ the term "kiloannum" to impress his audience rather than use the simpler "millennium" or "a thousand years".  Students might use the jargon "fermi" rather than the more proper but awkward SI term femtometre to describe infinitesimal nuclear distances; attometres, zeptometres and yoctometres are smaller yet.  In astronomy, where great distances are expressed, one seldom encounters the SI terms megametre, gigametre, terametre, petametre, exametre, zettametre or yottametre.  The common vernacular instead is the non-metric light year, parsec and astronomical unit.  The "astronomical unit" (roughly the mean distance between earth and sun) was formally standardized in the 1970's by the IAU (International Astronomical Union, also hosted by France), partly to patch up shortcomings in regular SI units when incorporating general relativity theory.  SI also brings us gawky-sounding terms like "gray" and "sievert".  These terms were added to the dictionary not because they were necessary, but because they could be branded by SI authorities, whereas "rad" and "rem" could not.  A gray is simply 100 times bigger than a rad, and both units express energy radiated or absorbed.  A sievert is simply 100 times bigger than a rem, and both units attempt to adjust radioactive dosages by accounting for type of tissue and type of radiation.

A Short Imperial unit background 

*  Maligned and criticized for still using old-fashioned Imperial weights and measurements when the rest of the world does not, the American public has shown resistance to metrification.  Primarily a British colony in the beginning, America inherited British imperial units, which were in turn heavily influenced by historic French and even ancient Roman measurements and weights.  The avoirdupois system of weights that Americans favor was actually developed by the French.  The Troy weight system, still used in many locations around the world for quantifying precious commodities like gold, platinum, silver, gemstones and gunpowder, is also French (believed to be named for the market town of Troyes in France).  Closely related to Troy weight, the apothecaries' system of weights favored by physicians, apothecaries and early scientists has roots reaching all over central Europe and the Mediterranean; it was still being used by American physicians and pharmacists into the 1970's.  After America separated from the British Empire, the Americans kept the legacy units pretty much intact while the British did not.  Parliament, by meddlesome act or decree and mostly for the purpose of increased taxation, continued to make small changes to certain units of mass and volume.  These changes caused much confusion between American and British (pre-metric) imperial units, confusion which still exists today.


*  Without digressing too far from the subject of metrification, it should be explained that without the discrepancy between wine and beer casks, and the British adoption (1824) and eventual retraction of the "stone" unit, the impetus behind a one-world metric system would never have been so great.  The legislated stone unit demanded a redefinition of several standard weights.  Today's Imperial gallons, bushels and barrels are so screwed up because yesteryear's hogsheads (large casks filled with wine, beer, liquor, whale oil, tobacco, sugar or molasses) were of different sizes.  A hogshead of wine has traditionally held more volume than a hogshead of beer.  In its defense, Parliament did try to standardize hogshead volume back in 1423, but this had little effect.  Coopers at different locations made casks as they saw fit, and eventually there became an accepted, even official, difference in hogshead volumes depending on contents.  A multiplicity of different gallon, bushel and barrel definitions followed suit.  The UK Imperial gallon springs from the ale gallon, but the U.S. liquid gallon is based upon the 1707 Queen Anne wine gallon.  Even today this curious distinction between wine and beer continues, as the American BATF and Treasury Department require different labeling on the two beverages: wine and stronger spirits are labeled only in liters or milliliters, while beer containers are labeled only in gallons, quarts, pints or ounces.

* The bushel used to be a measure of volume for grain, agricultural produce or other dry commodities.  Bushels are now most often used as units of mass or weight rather than of volume, and each commodity on the mercantile exchange has its own unique bushel.  A bushel of corn weighs 56 lbs. but a bushel of soybeans or wheat weighs 60 lbs.  A bushel of plain barley weighs 48 lbs. but a bushel of malted barley weighs only 34 lbs.  A bushel of oats in the U.S. weighs 32 lbs. but across the border in Canada it weighs 34 lbs.  Okra weighs 26 lbs. per bushel and Kentucky bluegrass seed only 14 lbs.  Many other commodities exist whose specific values fluctuate according to jurisdiction (country to country, state to state).  Pork bellies (the valuable bacon only) are traded by weight (one unit equals 20 tons of frozen, trimmed bellies); the rest of the hog's carcass in a commodities market is expressed as Lean Hog futures.  Refined oil might be shipped in 55-gallon steel drums, but crude oil is measured and traded on the standard 42-U.S.-gallon wooden barrel of yesteryear.  Barrels of other commodities often contain a volume of 31.5 U.S. gallons.
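Because each commodity has its own statutory bushel, conversion is really just a lookup table. A sketch using the weights quoted above (values vary by jurisdiction, so treat them as illustrative):

```python
# Statutory bushel weights in lbs, as quoted in the text.
BUSHEL_LBS = {
    "corn": 56, "soybeans": 60, "wheat": 60,
    "barley": 48, "malted barley": 34,
    "oats (US)": 32, "oats (Canada)": 34,
    "okra": 26, "kentucky bluegrass seed": 14,
}

def bushels_to_lbs(commodity, bushels):
    """Convert a bushel count to pounds for a known commodity."""
    return BUSHEL_LBS[commodity] * bushels

print(bushels_to_lbs("corn", 10))        # 560 lbs
print(bushels_to_lbs("oats (US)", 10))   # 320 lbs; the same oats weigh 340 in Canada
```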


*  Where the Imperial system does not fail, and probably needed no replacement, is in its units of length, distance and area.  Imperial units of length were intuitively developed over the ages.  Metric units of length might be more easily abstracted numerically in calculations for pencil-pushing types, but they are not nearly so instinctive for everyday usage.  Engineers and architects seldom have to build what they design; that labor falls to builders, millwrights, manufacturers, fabricators and others who work with real materials on a daily basis.

*  Consider the Imperial ruler or tape measure and its metric counterpart.  Working with fractions, a fairly accurate Imperial ruler could be reconstructed by almost anyone given an empty room, a pencil, a pair of scissors and a strip or two of unmarked paper exactly one yard in length.  Feet, inches, half-inches, quarter-inches, eighth-inches and perhaps sixteenth-inches could be adequately marked upon a blank, yard-long strip of paper.  In contrast, it would quickly be realized that an adequate depiction of centimeters and millimeters could not be intuitively laid out upon a blank, metre-long strip of paper.  There can be another elegance in fractions: builders and fabricators familiar with feet and inches can often perform the type of mental arithmetic that would send their decimal-loving metric counterparts scurrying for the nearest calculator or pencil.
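The blank-strip-of-paper exercise works because every Imperial ruler mark down to sixteenths is reachable by repeatedly folding an interval in half, an operation anyone can perform with paper, whereas metric marks require dividing by ten, which folding cannot do.  A small sketch of that halving process (the `halving_marks` function is mine, for illustration only):

```python
from fractions import Fraction

def halving_marks(length_in_inches: int, depth: int) -> list:
    """All ruler marks reachable by `depth` rounds of halving
    (i.e. paper-folding) every interval on the strip."""
    marks = {Fraction(0), Fraction(length_in_inches)}
    for _ in range(depth):
        snapshot = sorted(marks)
        for a, b in zip(snapshot, snapshot[1:]):
            marks.add((a + b) / 2)   # fold the interval: add its midpoint
    return sorted(marks)

# Four rounds of folding a one-inch span yield all the sixteenths:
inch = halving_marks(1, 4)
print(len(inch))   # 17 marks: 0, 1/16, 2/16, ..., 16/16
print(inch[1])     # 1/16
```

No comparable sequence of folds lands on 1/10 of the strip, which is why the centimeter marks cannot be constructed the same way.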


*  Americans find many customary units desirable and appropriate.  Non-SI unit terms like liquid ounces, shots, gills, noggins, fifths, teaspoons, cups, pints, quarts, gallons, barrels, board feet, pecks, bushels, BTUs, millibars, carats, cycles per second, pounds, ounces, troy ounces, drams, tons, caliber, mils, standard gauge, rods, chains, inches, feet, yards, furlongs, miles, nautical miles, fathoms, knots, picas, angstroms, light years, parsecs, acres, townships and sections remain in the American vernacular.  The sluggish progress in thorough American metrification has been excused as the result of ignorance, laziness or complacency on the public's part.  That may be.  Remember, though, that American schools have versed students in the metric system for the last 50 years or more.  We can use SI whenever we want to.  We have also experienced strong-arm attempts to have SI foisted upon us, as in the Metric Conversion Act and the Fair Packaging and Labeling Act.


*  Never the perpetrators of a bloody social revolution like Russia's, or France's, where mobs decapitated anyone who thought differently or had money, Americans might simply resist metrification because they resist anything totalitarian by nature.  That is what metrification is: a totalitarian ideal.  It requests the wanton destruction, scourge, eradication and abandonment of every competing form of weights and measures.  So who is the real bigot: the unassuming Japanese or American builder who finally learns how to use a conventional tape measure well and sees no reason to change, or some frustrated high school chemistry teacher who wants a dumbed-down tape measure and for all other alternatives in the world to be immediately destroyed?

*  Japan is an otherwise thoroughly metricated country, yet its carpenters, builders and realtors still favor their shakkanho length measurements, which were acquired from ancient China.  The shaku is the base unit and was originally the length from the thumb to the extended middle finger (about 18 cm or 7 in).  That length grew to approximately 30.3 cm, or 11.93 inches (the kanejaku, or "carpenter's square," shaku).  Floor space in a Japanese house is usually described in terms of a number of single traditional straw tatami mats, or in squares of two tatami mats (tsubo).  The koku, defined as 10 cubic shaku, is still used in the Japanese lumber trade.
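The shakkanho figures quoted above chain together by simple arithmetic.  A sketch, assuming the carpenter's (kanejaku) value of 30.3 cm for the shaku:

```python
# Shakkanho conversions built only from the values cited above.
SHAKU_M = 0.303                 # 1 kanejaku shaku ~= 30.3 cm

shaku_in = SHAKU_M / 0.0254     # in inches: ~11.93
koku_m3 = 10 * SHAKU_M ** 3     # 1 koku = 10 cubic shaku: ~0.278 m^3

print(f"1 shaku ~ {shaku_in:.2f} in")
print(f"1 koku  ~ {koku_m3:.3f} cubic meters")
```

So a koku of timber is a bit more than a quarter of a cubic meter, a handy size for a single log or a family's yearly rice ration, which is what the unit originally measured.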

*  In order to avoid the fines and prosecution that other non-SI-compliant merchants in Europe have been hit with, the British and Irish have seen fit to pass legislation which protects their traditional non-SI whiskey and beer measures (like gills, pints and Imperial gallons).  When it came to alcohol, it seems the rigors of metrification hit a little too close to home.  The UK decimalized its currency back in 1971, and it is one of the few EU members to have retained its own monetary system, which is also the oldest monetary system still in use.  Few things are as frustrating for a foreigner to comprehend as the meaning of old English tower pounds, sterling pounds, gold sovereigns, guineas, quid, fivers, coppers, crowns, shillings, sixpence, halfpennies, farthings and tuppence.  If there were space enough left in this post these could be explained.  Some of the old legacy Imperial units mentioned previously have very interesting backgrounds as well, but explanations will have to wait.  The topic of this post has been the triumphant march of metrification and the liberating, joyful peace of mind and harmony it will bring to the world once its total acceptance is finally complete.

Captured from an e-mail years ago: somewhere an anonymous wit promotes these additional units – lest they become forgotten in the march of time also…

* 1 millionth of a mouthwash = 1 microscope

* Ratio of an igloo’s circumference to its diameter = Eskimo Pi

* 2,000 pounds of Chinese soup = Won ton

* Time between slipping on a peel and smacking the pavement = 1 bananosecond

* Weight an evangelist carries with God = 1 billigram

* Time it takes to sail 220 yards at 1 nautical mile per hour = Knotfurlong

* 16.5 feet in the Twilight Zone = 1 Rod Serling

* Half of a large intestine = 1 semicolon

* 1,000,000 aches = 1 megahurtz

* Basic unit of laryngitis = 1 hoarsepower

* Shortest distance between two jokes = 1 straight line

* 453.6 graham crackers = 1 pound cake

* 1 million-million microphones = 1 megaphone

* 2 million bicycles = 2 megacycles

* 365.25 days = 1 unicycle

* 2000 mockingbirds = 2 kilomockingbirds

* 52 cards = 1 decacards

* 1 kilogram of falling figs = 1 FigNewton

* 1,000 milliliters of wet socks = 1 literhosen

* 1 millionth of a fish = 1 microfiche

* 1 trillion pins = 1 terrapin

* 10 rations = 1 decoration

* 100 rations = 1 C-ration

* 2 monograms = 1 diagram

* 4 nickels = 1 paradigm

* 2.4 statute miles of intravenous surgical tubing at Yale University Hospital = 1 IV League and…

* 100 Senators = Not 1 good decision