Pomace wine

grape103

Making wine from fruit is very easy, usually much easier than making alcohol from grain.  In a previous post about yeast it was proposed that early man discovered this alcoholic beverage by almost unavoidable circumstance.  Although the basic process of winemaking is simple, making a consistent product from batch to batch or from year to year is more difficult and requires some science.  The physiological ripeness of grapes or other fruit, the effect of differing yeast strains and the development of tannins as wine ages can become complex subjects indeed.  This post brushes past the more subtle aspects of winemaking, but still shows the uninitiated novice that making a good wine can be a simple and rewarding task.  What will be referred to here as a “pomace wine” process seems to work well for white wine grapes and other fruit like peaches, plums and apricots.

“Must” is freshly pressed fruit juice which contains particles of skins, pulp, seeds and stems.  These solids in the must are referred to as ‘pomace’.  The length of time that the winemaker allows the pomace to remain in the must can have a large influence on the final character of a wine.  The pigment and tannin content of a wine will be increased if the pomace is allowed to remain throughout primary fermentation.

This alternative pomace process differs from the more common practice of squeezing and separating the juice from the pulp before beginning fermentation.  While grapes are used in this example, the method is probably even more applicable to wines made from most any other type of fruit.  The advantages of this pomace wine method might become self-evident in terms of labor efficiency, more desirable color and flavor in the final product and the conversion of more sugars into alcohol.  After fermentation the wine is normally separated from the pomace by “racking”, or siphoning only the clear wine from one container to another.  The leftover pomace will be rich in ethanol.  Water might be added to this residual pomace to make a second batch of wine, or these wet solids might be distilled to create a “poor man’s pomace brandy” like grappa.  If the distillate is added back to the clarified wine then a “fortified wine” (like Sherry, Port or Madeira) is created.

Grapes are easy

Yeasts thrive in a slightly acidic environment.  For wine the ideal acidity is about 0.6% (titratable acidity), which in most musts corresponds to a pH of around 3.5.  Grapes generally come with close to ideal acidity for purposes of winemaking.  There are thousands of varieties of grapes and most will range between pH 2.80 and pH 3.84.  Fruits in general tend to be more acidic than vegetables.  Less acidic fruits like bananas and coconuts however would need to be amended with a little tartaric or citric acid prior to fermentation.  Acidity also comes into play later during the clarification of a wine.  Cloudiness in a wine is the result of suspended, electrically charged proteins & polyphenols.  To clear haziness in a wine, periodic racking, filtration and ‘fining’ or ‘clarifying agents’ can be employed.  This potentially complicated topic will be approached a little later.

Aside from having a low pH, grapes have a high monosaccharide sugar concentration.  Grapes have an abundance of easily accessible glucose & fructose which allow the ‘sugar loving yeast’ Saccharomyces cerevisiae to quickly flourish and perform its magic.  By contrast a grain wort has complex sugars or starches which require a “cracking” into monosaccharide form, before production of ethanol can commence.

1 Wash

grape104

In the above photograph the grape clusters are dunked in a mild Clorox (bleach / sodium hypochlorite) bath, next in a disinfecting sodium bisulfite solution and finally a rinse of pure water.  This process rids the grape clusters of most insects, arachnids, bacteria and wild yeast.  Finally the grapes were separated from the stems.

2 Process

grape104 (1)

Next the grapes were juiced in a food processor.  Some sources will discourage the thought of processing grapes in a blender, for fear of releasing undesirable tannins from crushed stems and seeds.  In this case however the stems were tediously removed beforehand, and there is actually little probability of cracking individual seeds when the blending is done briefly and cautiously – just enough to liquefy the pulp.  Carefully controlled pressure must be applied in commercial wine presses as well – to avoid crushing the seeds.

grape104 (2)

Some winemakers might pour the must into a bag of cheesecloth to facilitate the easy removal of the pomace later.  Here though, the juiced pulp was simply poured into a sterilized fermentation bucket.  After the fermentation bucket was almost full, ¼ teaspoon of sulfite powder (a source of sulfur dioxide) was mixed into the pulp and the lidded, rag-covered fermentation bucket was left to sit for 24 hours.  This kills remaining bacteria and wild yeast, some of which reside naturally inside the fruit.  It is important not to completely fill the fermentation bucket.  Leave an airspace of 2 or 3 inches at the top to reduce the possibility of an overflow during fermentation.  Also, fermentation buckets like this have 6 U.S. gallons capacity; the excess volume is usually needed to fill a 5 gal glass carboy after a racking or transfer that leaves unclear sediments behind.

3 Oxygenate 

DSCF0122cc

After the 24 hour waiting period the sulfur dioxide will have dissipated, being consumed by killing bacteria, trapping oxygen and reacting with aldehydes.   In the picture above the must has separated into sugar rich juice at the bottom and lighter pomace at the top.

4 Inoculation

Almost any type of yeast can be used but the choice will dictate the flavor profile of the wine.   Here a Canadian yeast known as ‘Lalvin  71B-1122’ was used although there are several other fine brands of commercial wine yeast to choose from.  While a Champagne yeast would produce more alcohol, this strain was picked because of its lower alcohol tolerance (about 14%).  By not consuming all the sugar from the grapes this yeast is expected to create a less dry and softer wine and to preserve or enhance the fruit flavor and add fruity esters.

DSCF0126cc

Normally one could just sprinkle the yeast package over the must and stir it in, where with luck wine will be produced in about a week.  In this case however a yeast starter was created and used.  Creating a so-called ‘yeast starter’ is simply a means of ‘proving the yeast’ and of ensuring a vigorous fermentation.  A couple of cups of juice were scooped out and the yeast added to that.  The starter was kept in a glass quart jar covered with a paper towel, which allows oxygen to pass but protects against the introduction of airborne bacteria and wild yeast.  With sugars to feed on, the number of yeast cells in the starter can be expected to double every 3 hours or so.  Along with the yeast, 3 tsp. nutrient and 2.5 tsp. pectic enzyme were added to the starter solution at the same time in this instance.
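As a rough illustration of that doubling rate, here is a minimal sketch in Python.  It assumes an ideal, uninterrupted 3-hour doubling time, and the starting cell count is only a ballpark guess for a packet of dry yeast, not a measured value.

```python
# Rough illustration of yeast growth in a starter, assuming an ideal,
# uninterrupted 3-hour doubling time (real growth slows as oxygen and
# nutrients are used up).
def yeast_population(initial_cells, hours, doubling_time_hours=3.0):
    """Estimate the cell count after `hours` of unrestricted growth."""
    return initial_cells * 2 ** (hours / doubling_time_hours)

start = 2e11  # very rough cell count for a packet of dry yeast (assumed)
for t in (0, 6, 12, 24):
    print(f"{t:>2} h: ~{yeast_population(start, t):.1e} cells")
```

In practice the growth levels off well before a day has passed, which is one reason a starter is pitched while it is still actively working.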

DSCF0115cc

* Pectic enzyme or pectinase breaks down the complex and stubborn polysaccharides (long chained sugars) found in pulp and skins. Pectic enzymes can also improve fining and filtering operations of high-pectin wines.

* Pectin is the jelly-like matrix which helps cement plant cells together.  It is a structural polysaccharide contained in the primary cell walls of plants.  Fruit ripens and becomes softer as the enzymes pectinase and pectinesterase break pectin down.  Pectin acts as a soluble dietary fiber which traps carbohydrates and binds to cholesterol in the gastrointestinal tract. Pectin separated and concentrated from citrus fruit is used as a gelling agent in jams and jellies.

* Yeast nutrient provides the vitamins, amino acids, nitrogen, potassium and phosphorus that yeast cells need to grow well.  Contents of packages labeled “Yeast Nutrient” may include: dead yeast, folic acid, niacin, diammonium phosphate, calcium pantothenate, magnesium sulphate and thiamine hydrochloride.  Homemade nutrient might be made from ammonium or potassium sulphate and ammonium or potassium phosphate plus a few vitamin B1 pills.  Plain un-sulfured molasses is full of vitamins and minerals.  In laboratories a drop of molasses water is commonly added to cultures in Petri dishes to stimulate yeast growth and reproduction.

While sodium bisulfite powder was used both as a sterilizing agent and as a source of sulfur dioxide for wine in this instance, Campden tablets are perhaps more popular.  Potassium or sodium metabisulfite Campden tablets are also used as an anti-oxidizing agent or to remove chlorine from water.  What Campden tablets can and can’t do

DSCF0132cc

By no means is it necessary for a winemaking novice to purchase or use a hydrometer.  The use of one though offers the winemaker a little more understanding and control over the process of fermentation.  Hydrometers measure the specific gravity of liquids and different versions can be found to measure the amount of cream in milk, sugar in water, alcohol in liquor, water in urine, antifreeze in car coolant or sulfuric acid in a car battery.  Simply put for winemaking purposes here: water containing sugar is denser than pure water and pure water is denser than ethanol.  In the picture above, pure water in the beaker should read 1.000 but the fresh grape juice reads a denser specific gravity of about 1.070.  This reading indicates a potential alcohol by volume (ABV) between 9 and 10% once the sugars are consumed by fermentation.  As fermentation progresses the hydrometer will sink deeper in each sample, eventually reading less than the density of pure water.
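For anyone curious where that 9–10% figure comes from, a common homebrew rule of thumb multiplies the drop in specific gravity by roughly 131.25.  This is an approximation rather than an exact law, and the final gravity used below is assumed for illustration.

```python
# Rule-of-thumb ABV estimate from hydrometer readings.  The 131.25
# factor is a common homebrew approximation, not an exact conversion.
def estimated_abv(original_gravity, final_gravity, factor=131.25):
    return (original_gravity - final_gravity) * factor

og = 1.070   # fresh juice reading like the one described above
fg = 0.995   # assumed finishing gravity after fermentation
print(f"Estimated ABV: {estimated_abv(og, fg):.1f}%")   # roughly 9.8%
```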

5 Fermentation

Yeast cells reproduce in an aerobic (with oxygen) environment but create ethanol in an anaerobic (without oxygen) environment.  In this instance the fermentation bucket was lidded but allowed to breathe for another 24 hours before an S-shaped bubble airlock was fitted to the bung-hole.  Within 5-7 days about 70% or ¾ of the fermentation should be accomplished.  At this point (or when the specific gravity reads between 0.990 and 0.998) the young wine should be transferred to another container, leaving the pomace and sediments behind.  Either fresh water or additional fruit juice (if extra was acquired and refrigerated) should probably be added to the secondary container to fill it.  This step is intended to reduce oxidization by limiting the amount of oxygen in contact with the wine.  Adding water to wine weakens it however, while adding new juice might require the addition of more sulfites (which would stun the yeast).  The wine should be allowed to rest in the secondary for another 4 to 6 weeks or until it becomes clear.  At this point the wine can be bottled.

Advanced topics

Sulfites are added to wine at the time of bottling to keep it from spoiling or turning to vinegar later.  You don’t want to add too much sulfite to your wine however because it has an obvious smell and taste.  Some people have allergic reactions to sulfites but in general, health concerns regarding sulfite levels in wine are unsettled.  The following link discusses how to accurately judge the proper sulfite level.  “Should I add Campden tablets each time I rack my wine and how do I measure the level of sulfite in my wine?”

This link can be ignored by the winemaking beginner but it is a good source of information.  The root url (winemaking.jackkeller.net) leads to a fairly thorough homepage dedicated to winemaking.  Winemaking Additives and Cleansers

White wines will generally clarify sooner than red wines.  Racking is the preferred method for clarifying wine but when haziness in the wine persists, ‘fining’ or ‘clarifying agents’ can be employed.  Sparkolloid, isinglass, egg albumen and gelatin are examples of positively charged finings whereas bentonite and Kieselsol are negatively charged.  This link provides more information about fining agents.

————

In conclusion, making wine with the pomace rather than without it is an alternative method which can offer several advantages.   Firstly this method does not require a grape press or an antique food mill or grinder.  This process also offers options for modifying a wine’s flavor and color profile which would not be available by the press method.   The pomace once separated from the wine can be re-hydrated to make a second wine or the intrepid individual might choose to produce a fortified wine or pomace brandy by utilizing these normally discarded solids.

 

 

Antennas (simple radio #2)

* Note to self:  The time for a new post is long overdue but it is not as though I haven’t had other distractions to keep me occupied.  Last week for example I had to chase the same bear out of camp three separate times during the night.  The next morning it was determined that the bear had confiscated a roll of sausage, a stick of butter, a box of cookies and a bag of marshmallows.

VLF05e

Generally, any antenna that is used to receive RF (radio frequency) energy is capable of adequately transmitting that same RF.  Sprouting from the Italian word for the long central pole supporting a tent, “antenna” entered radio vernacular sometime after 1895 when Marconi (camping in the Alps) supported his radio’s aerial from the pole.  Aerial and antenna are usually synonymous and both are simply transducers, implements which convert one type of energy into another.  The word “aerial” however is sometimes used to refer only to a rigid vertical transducer.

* Antennae is a seldom used plural form of the noun – antenna, and might most frequently be encountered when discussing bugs.  Depending upon the type of insect, antennae might be used to feel, hear, smell, or even to detect light.  Apparently male mosquitoes employ their antennae to hear female mosquitoes from as far as ¼ mile (400m) away.

Radio antennas are thought of as being directional or omni-directional.  A directional antenna will radiate in, or receive from, one direction more than it will in any other.  A vertical rod or tower is nominally omni-directional, supposedly radiating in all directions equally.  No aerial is perfectly isotropic (omni-directional) however.  In the case of a vertical tower there is a blind cone or null lobe straight up and another straight down where radiation is not sent or where reception is absent.  In the same fashion, there is no antenna that is perfectly directional.  A pictorial depiction of a directional antenna’s radiation pattern usually shows particular zones as being elongated lobes.  There are main lobes, back lobes, side lobes and null lobes in a radiation pattern.

Gain is a measure of how strongly an antenna concentrates its radiation in a preferred direction.  It is expressed as the ratio of the antenna’s intensity in that direction relative to that of a hypothetically ideal isotropic antenna fed with the same power.  A low-gain antenna sends or receives signals from several directions while a high-gain antenna is much more focused.  Both types have their advantages.  A high-gain antenna may need to be carefully aimed or pointed towards its target to work.  That achieved, a high-gain antenna has a longer range than a low-gain type.  It’s a matter of “conservation of energy”; less energy is wasted by radiating in useless directions.  Modern household satellite dishes for TV reception are examples of high-gain antennas.  Antennas on cell phones and Wi-Fi equipped computers however are low-gain types, which enables them to receive signals from many directions.
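Gain is usually quoted in dBi, decibels relative to that ideal isotropic radiator.  The little sketch below only shows the conversion from a plain power ratio to dBi; the example ratios are arbitrary, not taken from any particular antenna.

```python
import math

# Convert a linear gain ratio (relative to an ideal isotropic radiator)
# into decibels-isotropic (dBi).
def gain_dbi(linear_gain):
    return 10 * math.log10(linear_gain)

for g in (1, 2, 10, 1000):   # isotropic, modest, moderate, dish-like
    print(f"gain x{g:>4} = {gain_dbi(g):5.1f} dBi")
```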

parabolic3e

The parabolic shaped antennas used for satellite TV and radars are usually associated with microwave frequencies.  The first parabolic antennas were constructed however, over 120 years ago, when Heinrich Hertz used them to prove the existence of electromagnetic waves.  The dish or parabolic shaped element can be made of mesh, wire screen, sheet metal or mirror.  The dish is only a passive device; a reflector that collects signals and bounces them towards the active (cable connected) feed.  Monstrously huge parabolic antennas are used for radio telescopes.  Radio telescopes can be used to determine the composition of molecular clouds in space because when excited, individual molecules rotate at discrete speeds and emit radio energy as they do so.  Carbon monoxide likes to emit at 230 GHz for example.  These telescopes can be used to study all sorts of things: black holes, radio-emitting stars, radio galaxies, quasars, pulsars, gamma-ray bursts, supernovas and so on.  They can be used to track satellites, do atmospheric studies or to receive radio communications from distant traveling spacecraft like Voyager 2.

*  The VLA (Very Large Array) radio astronomy observatory is located in a remote area of N.M., just east of Pie Town, N.M.  The array is made of 27 independent parabolic dishes that stand about 10 stories high (82’ or 25m) and are visible from space as little white dots.  Each independent dish weighs 209 metric tons (2,205 lbs x 209) and is mounted on a robust rail system (doubled – two parallel sets of standard gauge tracks) so that it can be moved.  The rails are configured in a “Y” shape.  To focus on an object or area in space the 27 dishes expand from a minimum of 600m at center to a maximum baseline radius of 22.3 miles.  These antennas can listen to a large chunk of the radio spectrum (from 74 MHz to 50 GHz / wavelengths 400 cm to 0.7 cm).  Computers are used to correlate the data from each dish into a single map; the VLA observatory itself is called an “interferometer”.  Occasionally the VLA is brought online to link with other radio telescopes around the country to form an even larger (5,351 miles long) baseline called the VLBA (Very Long Baseline Array).  These other antennas are located in Brewster, WA, Kitt Peak, AZ, Los Alamos, NM, Owens Valley, CA, Fort Davis, TX, North Liberty, IA, Hancock, NH, Mauna Kea, HI, and St. Croix, U.S. Virgin Islands.  On occasions when radio telescopes in Arecibo, Puerto Rico, Green Bank, WV, and Effelsberg, Germany join in, the whole affair is called the High Sensitivity Array.

array2f

Phased array radar antennas like the flat panel above actually house many small evenly spaced aerials.  The phase of the signal to each individual aerial is logically controlled, resulting in a collective beam from all the little aerials that can be amplified and focused in a specific direction almost instantly.  Quicker and more versatile than mechanically rotating antennas because they require no movement, phased arrays are also more reliable and require little maintenance.  Limited phased array radars have been around for 60 years but recent improvements and affordability in electronics have made them more commonplace.  Most new military radars being built today are phased array systems.

* RADAR is an acronym coined during WWII by the U.S. Navy, from “Radio Detection And Ranging”.  Before that however, the British were calling the same thing RDF (Range and Direction Finding).  The most common bands used for radar are microwave bands (at the upper end of the radio spectrum between 1 GHz and 100 GHz – the L, S, C, X, Ku, K and Ka bands).  Radars used for very long-range surveillance however might use longer VHF frequencies starting at 50 MHz or UHF frequencies between 300 and 1,000 MHz (1 GHz).

6antenna3c

Omitting the simple aerial, some commonly encountered antenna shapes are shown above.  The most basic antenna type perhaps is the “quarter wave vertical” (where the length of the aerial is ¼ of the wavelength targeted).  The simplest and most commonly encountered antenna however is probably the “dipole” antenna.  A dipole antenna is essentially just two elevated wires, pointing in opposite directions.  A dipole is fairly omni-directional unless its axis is parallel to the target emission.  A monopole antenna is formed when one side or one half of a dipole is replaced with a ground plane that is perpendicular or at a right angle to the remaining half.  A whip antenna correctly installed on a car for example, uses reflected radiation from the automobile’s body (the ground plane) to mimic a dipole.  In this instance the monopole will have a greater directive gain and a lower input resistance.

Grounding provides a reference point from which changes in waveform can be detected.  A radio tower that is constructed to transmit at AM frequencies for example must be grounded or be compensated for lack of ground, and its height or length of element is determined by the wavelength.  Certain soils allow good grounding to earth but others do not.  In the absence of a good ground an antenna can simulate one by adding drooping radials (additional elements hanging at 45°).  A typical Marconi antenna is a perpendicular ¼ wave aerial with a proper ground (perhaps the soil is moist, marshy, full of iron ore or otherwise conductive).  In this case the ground acts to provide more signal, adding the missing quarter to mimic a full half wavelength antenna.  Often two or more quarter wave antenna towers will be seen in the same vicinity.  Usually a group of similar towers like this is creating a directional array that transmits greater power in a certain direction.  Since AM broadcast (U.S.) wavelengths range from roughly 1,820 ft. down to about 580 ft. in length, it would be prohibitively expensive to erect a full length or even half length vertical transmitting tower to hold up the element.  For economic reasons some large transmitting antennas therefore are laid out and polarized in the horizontal plane.
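All of these element lengths fall out of the basic relation wavelength = speed of light / frequency.  The quick calculator below prints free-space values for a few example frequencies; real elements are normally trimmed a few percent shorter to account for end effects and velocity factor.

```python
# Free-space wavelength and element-length estimates for a few example
# frequencies (practical antenna elements run a few percent shorter).
C = 299_792_458  # speed of light in m/s

def wavelength_m(freq_hz):
    return C / freq_hz

for label, f in [("AM 540 kHz", 540e3), ("AM 1700 kHz", 1.7e6),
                 ("FM 100 MHz", 100e6)]:
    wl = wavelength_m(f)
    print(f"{label:12s} wavelength {wl:8.1f} m, "
          f"1/4-wave {wl / 4:7.2f} m, 1/2-wave {wl / 2:7.2f} m")
```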

The folded dipole is a variation of the simple dipole.  Folded dipoles are about the same overall length as a standard dipole but provide greater bandwidth, have higher impedance and can often provide a stronger signal.

Loop antennas are generally used to conserve space.  The old TV set top “rabbit ears” often incorporated a loop in addition to the two telescoping, adjustable dipole elements.  Loops respond to the magnetic field of a radio wave, not the electrical.  A loop induces very small currents on each side of the loop and the difference between the two must usually be amplified before any useful signal is fed to the receiver.  Loop antennas are very inefficient.  One useful property of the loop however is that it is very directional; it picks up signals when positioned in one axis, but not another.  Most direction finding radios incorporate a loop antenna.  A loop by itself can determine the axis of a signal’s radiation but not forward from backward.  Direction finding radios were/are used in aircraft and boats or ships at sea to navigate with.  Modern civilian aircraft usually have an ADF (Automatic Direction Finder) box that is attached to a loop and sensing antenna combination.  In earlier days the loop was manual (turned by hand) and not automatic.  The non-directional, sensing aerial on a small aircraft might be a simple wire running from the tail, forward to the cabin.  The ADF’s electronics compare the two antennas (directional and omni-directional) to determine the signal’s phase (+/-) and therefore forward from backward.

Loopstick antennas (using ferrite rods) found in many small AM radios are actually examples of loop antennas.  Today “DX-ers” and radio hams might construct a shielded loop antenna, wrapping hundreds of feet of wire onto a spool.  Such an antenna would have the advantage of containing a half-wave or even a full-wave element in a small space, but it would be directional and introduce a new set of technical complications.

The Yagi-Uda antenna was invented by two Japanese scientists back in the late 1920s.  Early airborne radar sets used in WWII night fighters used Yagi antennas, and they were employed by almost everyone except the Japanese.  Yagi antennas have several parallel elements: a driven element plus parasitic directors and reflectors that are not electrically connected.  These unconnected parasitic elements help to improve gain and directivity.  The illustration shows a horizontally polarized, dual band antenna, once popular for analogue TV reception.  The whole thing is a combination of three separate Yagi antennas.  The longer elements are for VHF reception.  The shorter, closely spaced elements on the left half of the antenna were for UHF reception.  The shortest elements on the straight tail are directors and reflectors that act to improve the UHF gain and directivity.  The next longest elements (mounted on the vertical “V”) are UHF half-wave dipoles.  The longest elements on the right would be half wave dipoles, arranged in a “phased array” to pick up multiple channels.  Wavelengths of the FM and VHF TV bands are somewhere between 11’ and 9’ long.  The longest single element in this example would be about 5.5 ft.

* Beware of salesmen selling snake oil.  There is no such thing as a digital TV antenna.  An antenna does not care how the wave is modulated; it does not distinguish between analogue and digital signals.  

* Although the 2009 digital transition cleared the upper UHF TV channels in the U.S., someone else will now transmit in those UHF bands (probably AT&T or Verizon).  The front half of these old antennas is still useful for FM and HDTV reception if a local broadcaster is still transmitting on its legacy bandwidth.  The FCC is eager to grab this bandwidth and sell it to cell phone companies.

Horn shaped antennas are commonly used at UHF and microwave frequencies.   Parabolic antennas (where the dish itself is just a reflector) often use a horn as the ‘feeder’.   Advantages of horn antennas include simplicity, broad bandwidth, fair directivity and efficient standing wave ratios.  A few large horn antennas were built in the 1960’s to communicate with early satellites or for use as radio telescopes.

Small antennae

rfid4np2

Radio-Frequency Identification (RFID) tags are growing alarmingly in popularity and in sophistication.  This unregulated and potentially invasive technology broadcasts identification and tracking information by using radio waves.  RFID tags generally come in three types these days: active, passive and battery assisted passive.  New technology has enabled the miniaturization of these devices to a point where individual ants can host their own personal transmitter.  Many pets and livestock are either internally or externally tagged with RFID chips.  At least one version of a subdermal microchip implant (an RFID transponder encased in silicate glass) about the size of a grain of rice (11mm x 1mm) was manufactured for use in humans until the year 2010.

A passive RFID tag requires an external electromagnetic stimulus before it can modulate its radio signal.  An active tag carries its own little battery and therefore transmits its signal autonomously.  A biologist might harness some animal like a sea turtle or wolf with this type of tag, and it will broadcast for only a limited time but over a greater distance.  A battery assisted passive (BAP / or semi-active) RFID tag sits dormant until stimulated, and its battery helps boost the range of the tag’s radio signal.

Even a simple, cheap passive RFID tag can hold up to 2 Kb of memory.  These contraptions use a simple LC tank circuit (a resonating inductor and capacitor).  Their antennas are designed to resonate within a certain radio spectrum.  Usually an RFID transponder resonates anywhere between 1.75 MHz and 9.5 MHz – with 8.2 MHz being the most popular frequency.  Usually RFID chips work within traditional ISM (Industrial, Scientific and Medical) frequencies set aside for non-communications purposes.  ISM occupies reserved niches in the LF, HF, UHF and microwave frequencies that RFID tags can and do exploit, often without the need for a license.  The chip’s antenna picks up electromagnetic radiation from a reader or detector and converts it to electrical energy, which powers the microchip, which then reflects or broadcasts any information held in memory back over the same antenna.
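The resonant frequency of a tank circuit like that is given by f = 1 / (2π√(LC)).  The component values in this sketch are hypothetical, chosen only so the arithmetic lands near the 8.2 MHz figure mentioned above.

```python
import math

# Resonant frequency of a simple LC tank: f = 1 / (2*pi*sqrt(L*C)).
def resonant_freq_hz(inductance_h, capacitance_f):
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

L = 1.0e-6    # 1 uH coil (hypothetical value)
C = 377e-12   # 377 pF capacitor (hypothetical value)
print(f"Tank resonates near {resonant_freq_hz(L, C) / 1e6:.2f} MHz")  # ~8.2 MHz
```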

* Passive tags, when used for electronic article surveillance (EAS), are usually deactivated by frying the capacitor with an overload of voltage which is induced from a strong electromagnet at the checkout counter.  Also, a few seconds inside a microwave oven will destroy most RFID chips.  Many retail items are “source tagged” at the point of manufacture, with the RFID device hidden within the packaging.  Since not every vendor employs the same type of EAS system (or perhaps any at all), alarms can go off when customers carry or wear these still-activated tags into other stores.  Some stores may deliberately not deactivate these tags; the motive of building a customer shopping database has been suggested.

rfid_6multi6

Big & rare

Up until 2010 when a certain skyscraper in Dubai was completed, the tallest manmade structure ever built was a half-wave radio mast.   Standing at 646.38 m (2,120.6 ft) above the ground and perched upon 2 meters of electrical insulator, this tower broadcast longwave radio (@ 227 kHz and later 225 kHz) to all of Europe, North Africa and even to parts of North America.   It was used by Warsaw Radio-Television (Centrum Radiowo-Telewizyjne) from 1974 until it collapsed in 1991.

The notorious ‘Woodpecker’ radio signal interfered with worldwide commercial and amateur communications and international broadcasting stations for about 13 years.  Transmitting with about 10 megawatts of power from an antenna that was about 50 stories high and a third of a mile long (150m tall x 500m wide), the original Duga-3 antenna was nicknamed “Woodpecker” for the interfering sound that it made.  It was using protected frequencies set aside for civilian use.  Operating from 1976 to 1989, the Woodpecker now sits within the 30 kilometer exclusion zone surrounding the Chernobyl power plant.  The Chernobyl disaster occurred in April 1986 but apparently the Woodpecker continued to operate for another three years.

There has been varied speculation about the purpose of the Duga-3 broadcast, including intentional broadcast interference, mind control experiments and weather manipulation.  These speculations are not without precedent.  The most plausible explanation of the Woodpecker signal however, is that it was simply a Soviet over-the-horizon radar (OTH) intended to detect ICBM’s at long range by bouncing its signal off the ionosphere.  Apparently the Woodpecker was arrayed with other OTH systems like Duga-2 (also in Ukraine) and a second Duga-3 built in eastern Siberia which pointed toward the Pacific.

Here are a couple of videos filmed at this antenna which should provide an appreciation of its scope and scale.

Climbing up the Russian Woodpecker DUGA 3 Chernobyl-2 OTH radar

https://www.youtube.com/watch?v=YeLjJXvtmxo

Base jumpers sneaking into the ‘Zone of Alienation’ to jump from the antenna.

https://www.youtube.com/watch?v=CODnzRkvS44

 

* During the ‘Cold War’ the term “international broadcasting” described broadcasts pointed at or intended for foreign audiences only.  For 60 years now, RFE/RL (Radio Free Europe (RFE) and Radio Liberty (RL)) have been spreading anti-communist propaganda and psychological warfare behind the ‘iron curtain’ using shortwave, medium wave and FM frequencies.  It would stand to reason that the Soviets might have wished to retaliate or block such popular broadcasts.  Although mind control by radio signal seems very far-fetched, the Soviets are accused of having for many years focused microwave radiation toward the U.S. embassy in Moscow.  Perhaps the Soviets were attempting to slowly cook the Americans.  A more feasible explanation is that the microwave energy was being used to stimulate passive covert “bugs” hidden within the embassy.  In 1952 such a covert listening device, now known as a passive cavity resonator, was discovered inside the U.S. Ambassador’s Moscow residence.  This infamous creation known as “The Thing” was designed by the Russian engineer and physicist Lev Sergeyevich Termen and performed its espionage, unnoticed, for 6 or 7 years.

* Weather manipulation using radio is theoretically feasible and supporting information will be included shortly.

Extremely low frequency (ELF) is an electromagnetic radiation range with frequencies from 3 to 30 Hz and wavelengths between 100,000 and 10,000 kilometers (62,137 miles to 6,213 miles) long.  Since ELF frequencies can penetrate significant distances into the earth and seawater, they have been used by the U.S., Soviet/Russian and Indian navies to communicate with submarines at sea.  The British and French apparently also constructed and experimented with ELF antennas.  Because of the extreme wavelengths, sending antennas need to be very large and the few examples that do exist are buried in the ground.  ELF transmissions were or are limited to a very slow data transmission rate (just a few characters per minute) and are usually just one way transmissions, due to the impracticality of a submarine trailing an aerial behind it long enough to send a reply.  The U.S. Navy transmitted ELF signals between 1985 and 2004 from one antenna located in the fields of Wisconsin and another located in Michigan.  Due to environmental impact concerns involving everything from farmers concerned over their livestock’s behavior to disoriented whales beaching themselves en masse, the U.S. Navy abandoned its ELF effort.  They use something better now anyway.

* Miners and spelunkers can use technology called through-the-earth communications which utilizes the (higher than ELF) ultra-low frequency (ULF) range between 300–3,000 Hz.  

Plasma is conductive, ionized air or gas.  Using arrays of antennas attached to powerful radio transmitters, ionospheric heaters are used to study and modify plasma turbulence and to affect the ionosphere.  Several of these ionosphere research facilities already exist (in Norway, Russia, Alaska, Japan and Puerto Rico) and are operated by organizations like SPEAR (Space Plasma Exploration by Active Radar), EISCAT (European Incoherent Scatter Scientific Association) and HAARP (High-frequency Active Auroral Research Program).  By heating or exciting an area of the ionosphere, air can be made to rise or to act as a reflector from which other radio transmissions can be bounced.  Theoretically then, ionospheric research could, should or already does allow for enhanced radio communications, surveillance, long distance communications with submarines, weather modification and perhaps eventually even the transport of natural gas from the Arctic without the use of pipelines.  Altering the course of the jet stream or steering the course of a hurricane has even been proposed as feasible.  Readers wishing to learn more about this subject can find some information on the Internet.  They could start by following these two links:

Ionospheric Heaters Around the Globe – HAARP isn’t Lonely

Weather Warfare

 

Knots

Nomenclature in the world of knots is inconsistent in any language.  Within English some would stipulate that the tangles of cordage we commonly call knots should actually refer to only those things that are neither bends nor hitches.   Ideally a bend should join two ropes or lines together, whereas a hitch should attach a line to a post, ring, rail or something.  In general however, the term knot is used to encompass all three.

terms2b

Some fundamental knot component terms include “working or tag end”, “standing line”, bight and loop.  In a bight the end and the standing line are parallel but in a loop the working end crosses over the standing part.  Other knot terminology might include: braids, bindings, coils, dog, elbow, friction hitch, lashing, lanyard, locking tuck, messenger, nip, noose, round turn, plait, seizing, sling, splice, stopper, trick or whipping.  A knot that has a draw loop is said to be a slipped knot, which is not the same thing as a proper slip knot.  When tying shoelaces for example two draw loops or bights finish the knot and provide easy untying.

DSCF0729c

The simplest knot of all is the “Overhand knot”.  Once tied in a line of rope or cordage, every knot reduces the static tensile strength or average breaking strength of that line, when tension is applied.  The proportion of knotted cordage’s breaking strength relative to its unknotted strength describes a given knot’s “efficiency“.  Efficiency is about the only common, measurable, descriptive term shared between knots, bends and hitches.  Most knots have an efficiency between 40% and 80%.  The overhand knot (ABoK#514) has an efficiency rating of 50%, which is poor because when stressed it reduces the strength of a line by half.
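A quick worked example of that efficiency figure (the line rating and percentages here are just illustrative numbers):

```python
# Knot efficiency = knotted breaking strength / unknotted breaking strength.
def knotted_strength(line_strength_lbf, efficiency):
    return line_strength_lbf * efficiency

line = 550  # a hypothetical line rated at 550 lbf
print(knotted_strength(line, 0.50))  # overhand knot at ~50%  -> 275.0 lbf
print(knotted_strength(line, 0.75))  # a better knot at ~75%  -> 412.5 lbf
```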

Several knots we are familiar with are ancient.  Long ago prehistoric fishermen were using knots to make gill, casting and trawling nets. In addition to practical knots, the ancient Tibetans, Chinese and Celts contemplated some very intricate and elaborate decorative knots.

There is by no means an authoritative categorization or listing of all knots.  Growing in acceptance, the closest thing to an authoritative list of working knots might be Clifford W. Ashley’s illustrated encyclopedia of knots.  First published in 1944, The Ashley Book of Knots lists and numbers more than 3,800 basic knots, but this does not even come close to enumerating all the variants and ornamentals in existence.  There is a lively online forum on almost every subject related to knots – hosted by the International Guild of Knot Tyers.  Also there is a quick and handy online knot index which features images for some of the more common working knots.

Dowker-notation-exampleb

* A tangential detour: Knot Theory

Lest the reader assume that knots are an overly simplistic or entirely trivial subject, they should realize that the future advancement of computing may rely upon an underlying study of knots.  The speed of the fastest computers is approaching a limit due to the finite speed of the electron itself.  Any increased computing speed in the future may depend upon quantum field theory and statistical mechanics; mathematics that sprouted from a branch of topology known as “knot theory”, the mathematical study of knots.  Knot theory is often applied in geometry, physics and chemistry.  Topology is concerned with those properties that don’t change when an object is continuously stretched, twisted or deformed.  Topology involves set theory, geometry, dimension, space and transformation.  Topology studies spatial objects (objects that occupy space), the space-time of general relativity, knots, fractals and manifolds.  A mathematical knot is one where the ends are joined together to prevent it from becoming undone.  Inspired by real world knots, the founders of knot theory were concerned with knot description and complexity.  They created tables of knots and links (knots of several components entangled together).  Over 6,000,000,000 knots have been tabulated to date and obviously such concise tabulation is a task for a machine and not a human.

littles_knotsb


 

A FEW GOOD KNOTS

A surprising number of people are unfamiliar with or cannot tie a decent knot, when such a skill can occasionally prove to be quite handy.  A repertoire of only a dozen or so well chosen knots will stand the survivalist or Boy Scout in good stead with his contemporaries.  An effective working knot should have practical applications, it should be simple to tie and easy to remember and in most instances it should be easy to untie.  My subjective list of six of the most important and effective working knots includes the slipped slipknot, bowline, figure-8 (or Figure of Eight loop), clove hitch, Prusik knot and the trucker’s hitch.  The clove hitch and Prusik knots are fundamental in that several useful variations have been built upon them.

2slips

The simple slipknot tightens as the hauling end is pulled and can become very tight and difficult to untie.  By “slipping” the knot with a bight or draw loop however, even the tightened knot will fall apart after a stout yank of the tag end.  This simple knot is appropriate in many applications including tying a hammock to a tree or fastening a horse halter to a post or rail so that it can be unfastened quickly in an emergency.

slip7

Many knots including the venerable bowline can be “slipped” in such a fashion.  For those people who encounter a mental block when trying to remember how to tie a bowline, there is an easily remembered right-hand–twist method to use.

bowline_simp3c

There are many instances when a loop in the middle of a line is called for.  As an example, for safety a mountain climber might tie himself to a middleman’s knot in the center of a climbing rope.  While a simple overhand loop might suffice in this application – it could become difficult to untie after being stressed.  The addition of another twist to the overhand loop results in the so-called Figure of Eight loop which is probably more efficient and much easier to untie.  Some might consider the Figure of Eight loop (or Flemish loop) preferable to comparable mountaineering knots like the Alpine Butterfly, merely because it is simpler and easier to remember.

4loopb

The granddaddy of all “ascending knots” or “friction hitches” is the venerable Prusik knot, which was first created during WWI and named for its inventor.  The Prusik can be doubled (with 6 coils rather than 4) to produce more traction.  The younger Klemheist, also shown in the illustration below, is popular with modern day climbers.

prisik_kleim

Few good (simple) ascending knots for mountaineering can be tied with nylon webbing.  The Heddon and double Heddon knots shown next are exceptions that seem appropriate.

Heddon_double

The Trucker’s hitch is an important and utilitarian cinching knot that is actually a compound construction of two other knots.  Disregarding friction, the Trucker’s hitch can tightly strap down loads on trucks, trailers, boats and pack saddles because it applies a 2:1 mechanical advantage.  The standing line employs a ring, carabiner or middleman’s loop while the cinch is tightened with the tag end.  After the cinch is drawn tight the pressure is held by pinching the bight with one hand, before finishing with a simple slipped overhand knot.

truck_hitch

The final knot (of the six most crucial selected here) is the excellent, general purpose ‘clove hitch’.  It is mentioned last because many admirable variations have been conceived from it, and illustrations of a few of those will follow.

Clove731r

Excellent for sacks and trash bags, the ‘constrictor knot’ differs only slightly from the clove hitch, but holds more firmly.  It can be hard to untie unless intentionally slipped with a draw loop.

con2

rolling_mangus

When wrapped around a tent stake the “taut line hitch” below is useful for tensioning a tent guy line.  To the right of that is a useful clove hitch variant that has no recognized common name or ABoK number.  Tentatively referred to as the wireline hitch here, the grip of this variant is superior to the taut line version.

ywo_hitch

 

A few more knots _ deserving honorable mention

Strong and efficient, the ‘Palomar knot’ is useful for attaching large hooks, lures or sinkers to a fishing line.

palomar1fff

The “Surgeon’s loop” is another simple and effective knot for attaching small lures or flies to a tiny mono-filament fishing line.  Knots like the surgeon and Palomar are cut away rather than untied after they serve their purpose.

surgeon1bffg

The “Ossel hitch” is an ancient knot; no one knows how old. It is or was a simple, secure and effective knot used to suspend gill nets from a larger line.  Strangely the ossel hitch is not recognized in Ashley’s encyclopedia.  This may be because “ossel” is a Scottish word and was not that familiar when Ashley illustrated his book.  There is a similar but different knot in the encyclopedia known as the “Netline Knot” (ABoK #273) that hails from Cornwall on the southern coast of England.

osseb6

This simple Anchor Bend variant below is easily remembered and is much more secure than the parent knot.

ancor_bend

Finally, the old page reproduced below introduces a couple of utilitarian gripping hitches.

pipe_hitch6c

 

KNOT CONUNDRUMS

This is a blog post and not an encyclopedia, therefore most knots cannot be shown.  Returning to the off topic tangent of knot mathematics, we come to a group of abstract ideas known as graph theory, which foreshadowed or laid the foundation for topology.  The father of graph theory was a Swiss mathematician and physicist named Leonhard Euler.  Euler discussed a notable historical problem in mathematics called “The Seven Bridges of Konigsberg”.  The unsolvable problem was to walk through the city, crossing each bridge once and only once.  Euler’s proof that no such walk exists became the first theorem of graph theory.

konig_map2

* Back in 1735 the seven bridges of Konigsberg were real and the city was part of Prussia, bordering Poland on the Baltic.  Konigsberg, Prussia became Kaliningrad, Russia (54°42’12” N, 20°30’56”E) shortly after WWII.  After the breakup of the Soviet Union, Kaliningrad and its surrounding province became physically separated from the rest of Russia.  Between that war and the ravages of time, only two of the original bridges from Euler’s time survive.  Five bridges now connect the city and the islands formed by the Pregel River.

A similar conundrum that Euler might have considered had he the chance is the hypothetical house with five rooms and sixteen doors. The object is for a person to walk through each door once, but one time only.

housew2b
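Euler's insight reduces both puzzles to simple counting: a walk that uses every edge (bridge or door) exactly once can exist only if the connected graph has zero or two vertices of odd degree.  The sketch below applies that test to both puzzles; the door layout for the five-room house is assumed from the classic version of the puzzle.

```python
from collections import Counter

# Euler's rule: a walk using every edge exactly once exists only if the
# connected graph has zero or two vertices of odd degree.
def odd_degree_vertices(edges):
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return [v for v, d in degree.items() if d % 2]

# Konigsberg: land masses A, B, C, D joined by seven bridges.
koenigsberg = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]

# Five-room house (rooms 1-5 plus the outside "O"), sixteen doors in all;
# the exact layout is assumed from the classic form of the puzzle.
house = [("1", "2"), ("2", "3"), ("1", "4"), ("2", "4"), ("2", "5"),
         ("3", "5"), ("4", "5"),
         ("1", "O"), ("1", "O"), ("2", "O"), ("3", "O"), ("3", "O"),
         ("4", "O"), ("4", "O"), ("5", "O"), ("5", "O")]

for name, graph in (("Konigsberg bridges", koenigsberg),
                    ("Five-room house", house)):
    odd = odd_degree_vertices(graph)
    verdict = "possible" if len(odd) in (0, 2) else "impossible"
    print(f"{name}: {len(odd)} odd-degree vertices -> walk is {verdict}")
```

Both graphs turn out to have four odd-degree vertices, so neither walk is possible.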

Finally we come to the perplexing Mobius strip and Trefoil knot.  The naughty Mobius strip is something of a paradox.  The single edge of a Mobius strip is topologically equivalent to a circle, and the strip itself is mathematically non-orientable.

mobius2

A physical Mobius strip can be constructed from a belt or strip of paper.  One simply grabs the two ends and gives one end a half twist before taping the two together in a loop.  The resulting surface then has only one side and one edge.  Imagine a miniature gravity defying car driving around the surface of the strip.  If the car began on the top side of the surface then its path after one revolution of the loop would place it on the bottom side of the surface.  Consider a bug dragging a paintbrush while walking along the right edge of the strip and making two revolutions of the loop.  We perceive two edges to the strip but realize there is only one.

M.C. Escher incorporated the Mobius strip in some of his graphical art.  In the real world, recording tapes and typewriter ribbons have been spliced in the continuous-loop, Mobius strip fashion to double playing time or ink capacity.  Large conveyor belts have also been wrapped the same way, to increase belt life by doubling the usable surface area.  The Mobius strip has several curious properties.  A continuous line drawn down the middle of the strip must travel around the loop twice before returning to its starting point.  Cutting this paper loop down the centerline will produce one long loop with two twists (not two strips) and finally two edges.  Cutting this longer strip again as before will produce two strips, each with two full twists, intertwined together.

trifoil4d

In topology the “unknot” is a circle and the “trefoil knot” is the simplest knot.  Named after the plant that produces the three-leaf clover, the trefoil knot can be tied by joining together the two loose ends of a common overhand knot, which results in a knotted loop.  Although it doesn’t look very convincing when done with paper, a trefoil knot can also be constructed by giving a band of paper three half twists before taping the ends and then dividing it lengthwise.

 

Solar energy at home

Most of the energy we earthbound humans consume comes directly from the sun, exceptions being atomic fission and some types of chemical reactions.  The fuel oil, coal and natural gas energy that civilizations use exist because of the Sun’s previous contribution in the formation of those hydrocarbons.  Wind currents are caused by the sun warming the air; as thermals rise they are displaced by denser, colder air.  Likewise the sun’s energy is ultimately responsible for lifting water to higher elevations as snow and rain, which creates the potential energy needed to power watermills and hydroelectric generators.  On a small personal scale, more individuals are learning to exploit the sun’s energy to heat their homes, generate their own power or to cook their food.  The two main methods of acquiring power from the sun are photovoltaic (PV) cells and thermal energy collectors.

Almost 53% of the energy in sunlight is absorbed or reflected before it even hits the surface of the earth.  The glazing or protective substrate in a solar collector can further diminish the amount of energy obtained.  Even the best solar panels can be considered to be inefficient.  The amount of energy collectible by a given solar panel is subject to many variables.  Whether talking about heat or electricity we generally measure that energy in units of Watt-hours (energy = power x time).  Under the best and brightest conditions sunlight delivers roughly 1,000 Watts per sq. meter at the surface, but under realistic or averaged conditions the expectation might only be half that.  During the daylight hours of a normal summer day at 40 degrees latitude, a solar collector would do well to average 600 Watts per sq. meter.  In wintertime for the same location the same collector might gather an average of only 300 Watts per sq. meter.  For any random location around the earth the average collectible solar energy per mean solar day (24 hours) is only about 164 Watts per square meter.
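To turn those averaged figures into something usable, multiply the irradiance by the hours of collection, the panel area and an efficiency factor.  The numbers in this sketch are assumptions chosen to match the averages quoted above, not measurements.

```python
# Ballpark daily energy harvest for a small collector (all inputs assumed).
def daily_energy_kwh(area_m2, avg_irradiance_w_m2, sun_hours, efficiency):
    return area_m2 * avg_irradiance_w_m2 * sun_hours * efficiency / 1000.0

area = 2.0  # square meters of panel
summer = daily_energy_kwh(area, 600, 8, 0.15)  # ~600 W/m2 average, 8 h, 15% eff.
winter = daily_energy_kwh(area, 300, 5, 0.15)  # ~300 W/m2 average, 5 h
print(f"Summer day: {summer:.1f} kWh   Winter day: {winter:.1f} kWh")
```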

s_pannel

Overview of PV

In a photovoltaic solar cell an electrical charge is generated when photons excite the electrons in a semiconductor.  There are many types of solar cells and even some new developments in technology which will hopefully lead to the future manufacture of more affordable photovoltaic solar panels.  The warmer the photovoltaic solar panel gets the less power it can produce.  Essentially the temperature doesn’t affect the amount of solar energy a solar panel receives, but it does affect how much power you will get out of it.

The most common photovoltaic solar cells are made by chemically ‘doping’ a very thin wafer of otherwise pure monocrystalline (single-crystal) silicon.  In a delicate and complicated process of fabrication, wafers of silicon are generally cut or sliced as thinly as possible without cracking, to a thickness of about 200 micrometers or the width of a typical moustache hair.  Since each individual solar cell produces only about 0.5V, several cells must be wired together to produce a useful photovoltaic array.  Mostly produced in China, commercial photovoltaic solar panels are very expensive, averaging $2 – $3 for every watt of capacity.  An average U.S. residence consumes something like 30.6 kWh per day, 920 kWh per month or 11,040 kWh per year.  In a country like the U.S. where grid power is comparatively cheap (averaging 10 cents per kWh in 2011) it would take a very long time for photovoltaic panels producing equivalent energy to pay for themselves.  In the meantime an individual with a “do it yourself” mentality can more directly utilize solar energy by fabricating his own contraptions to collect heat.
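Here is a back-of-the-envelope payback estimate using the prices quoted above; the system size, sun hours and cost per watt are assumed, and incentives, financing and panel degradation are ignored.

```python
# Crude solar payback estimate (all inputs are assumptions for illustration).
panel_cost_per_watt = 2.50    # dollars per watt, midpoint of the $2-$3 range
system_watts = 5000           # a hypothetical 5 kW rooftop array
grid_price_per_kwh = 0.10     # dollars, roughly the 2011 U.S. average
full_sun_hours_per_day = 4.5  # assumed yearly average

yearly_kwh = system_watts / 1000 * full_sun_hours_per_day * 365
system_cost = panel_cost_per_watt * system_watts
yearly_savings = yearly_kwh * grid_price_per_kwh
print(f"System cost ${system_cost:,.0f}, saves ${yearly_savings:,.0f}/yr, "
      f"payback ~{system_cost / yearly_savings:.0f} years")
```

With those assumptions the simple payback works out to roughly 15 years, which is the “very long time” referred to above.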

USgen3

Solar Ovens

Although it would not be considered a quick process, it is easy to cook food with direct sunlight.  Slow cooking oftentimes creates superior dishes with the best blend of flavors.  Some heat trap type solar ovens can easily produce temperatures over 250 deg F; sometimes up to 350 deg F.  No matter what type of oven is used however (electric, gas, solar, smoke pit or Dutch) a good cook knows that slow cooking with a modest heat over a long period, will make an otherwise tough piece of meat more tender.

s_oven1e

Essentially there are only two types of solar oven; those that entrap heat and those that reflect it.  To form a simple ‘heat trap’, a cardboard or wooden box can be insulated, spray painted black inside and then lidded with glass or clear plastic.  It helps when the cooking vessel itself is dark also – to better absorb solar heat.  In addition to being dark, it helps when pots are thin and shallow and have tight fitting lids.  Even glass mason jars make useful solar cooking utensils.  These can be spray painted black and the lids can be unscrewed a bit to allow vapor pressure to escape.  It might seem that parabolic or concave reflecting cookers would be complicated to construct, but some examples have been made by simply surfacing the inside of umbrellas or parasols with aluminum foil.  Mirrored Mylar or similar BoPET films are also useful materials in this type of application.  Doubtless many examples or ‘instructables’ detailing the construction of reflective type solar ovens exist elsewhere on the Internet.  Some specially constructed reflective ovens claim to be able to reach temperatures of nearly 600 degrees F.

The importance of cooking some foods, especially meats, is to kill bacteria.  Bacteria won’t grow below 41 deg F or survive above 140 deg F.  Rough internal temperature targets:

* Meats in general: 140 to 165 deg F to be considered safe
* Seafood: 145 deg F or hotter
* Poultry (to rid it of salmonella) and egg dishes: 165 deg F
* Pork (to halt trichinosis): about 160 deg F
* Ground beef: 155 deg F

desert_still4

Solar stills

Back in the 1960’s a pair of PhD’s working in the soil hydrology laboratory for the USDA invented a solar evaporation still that could suck useful drinking water out of the ground.  Even in the arid desert around Tucson, Az. where they were located, they realized that the soil entrapped useful moisture.  Such a solar still is made by digging a pit in the ground, placing a collection pot in the bottom and covering the hole with a sheet of plastic.  Additional moisture could even be gathered by placing green vegetation under such a tarp.

It seems that the first evaporative solar stills were invented back in the 1870’s to create clean drinking water for a mining community, as explained in an earlier post in this same blog named “The Nitrate Wars”.  This same distillation process, where moisture is evaporated and the condensation collected, is employed in affordable, plastic-vinyl inflatable stills that can equip small boats and survival craft at sea.  Where once stranded fishermen and sailors faced death by dehydration, they now have the opportunity to create the drinking water they need from seawater.  Muddy or brackish, germ infested groundwater can be reclaimed in the same way.

sea_still6

There are several possible techniques to employ and efficiency factors to consider when fabricating an evaporative solar still.  Obviously good direct sunlight is essential to their efficient functioning.  The ‘basin type” solar still is the most common type encountered and somewhat resembles a heat trap solar oven.  In a “tilted wick” solar still, moisture soaks into a coarse fabric like burlap and climbs the cloth before it eventually evaporates.  In higher latitudes ‘multiple tray’ tilted stills can be used, where the feed water cascades down a stairway of trays or shelves, allowing closer proximity to the glass and enabling steeper tilt angles for the panel to capture optimum sunlight.

wstille8

wstill_side

Other liquids besides drinking water can be refined in an evaporative solar still.  Ethanol can and has been concentrated from mashes, worts, musts or washes using a solar still.   Since a distiller usually desires more direct control over temperatures however, he might consider solar stills to be practical only for so-called “stripping runs”.   Some of the earliest perfumes were created from fragrances collected by distillation.   Soaking wood, bark, roots, flowers, leaves or seeds of some plants in water before distilling the mixture, is a common way of obtaining aromatic compounds or essential oils.   Not all plant fragrances should be distilled but eucalyptus, lavender, orange blossoms, peppermint and roses commonly are.   The lightest fractions or volatiles of petroleum (like gasoline) separate at temperatures available in solar stills, but the heavier ones will not.  Theoretically it should be possible to place slip or crude oil into a solar still to separate out the gasoline and higher fractions.

frac_towerg

Solar water & air heating

Most readers will have experienced how water trapped in a garden hose will get hot on a summer day.  Portable camp showers are simple black water bags, suspended at a little elevation and in direct sunlight to warm the water.

solar-camp-showerb

Where climatic conditions permit people may employ gravity fed or pump pressurized waterlines and tanks on rooftops or simply along the ground to achieve the same solar water heating effect.  Others may construct or install dedicated solar heating water panels to heat swimming pool water or to pre-heat water before it enters their home’s gas or electric water heating tank.

screen16h

The construction of a solar water heater and a solar air heater can be very similar in concept.  Basically air or water is conducted through pipes or conduits to a panel where the heat exchange takes place.  Copper pipe might be the most desirable material to use in a solar water panel because of its pressure holding ability, resistance to corrosion and longevity.  Thin walled pipes of cheaper metals can be used to adequately exchange or transfer heat to air that passes through them.

A growing fad in the construction of homemade air-heating solar panels is to build the collector with empty aluminum beer or soda cans.  The tops and bottoms of the cans are punched or drilled out and the cans are glued together to form continuous airtight pipes.  The box that holds everything is well insulated (sides and bottom), and every interior surface exposed to sunlight is spray painted a dark, sunlight absorbing color – preferably using a high quality, high temp, UV protected paint.  A transparent glazing (of glass, plastic, fiberglass, Mylar, acrylic, polycarbonate, etc.) is tightly sealed over the top of the trap.  A double or even triple layer of glazing is preferable to a single one to reduce the escape of thermal heat.  While beer and soda cans are popular because of their availability and affordability, equally efficient collectors could be made from tin cans (made of the metal called tinplate), rain gutter downspouts, old aluminum irrigation pipes, single walled stove pipes or even from bug screen like you’d find on a window.  This site, chosen from many that discuss solar heating with air, suggests that bug screen collectors are on par with soda can collectors and are possibly easier to construct.

In the choice of fan or blower used to push or pull air through the system, it is preferable to circulate a large volume of modestly heated air rather than a small quantity of thoroughly heated air.  Ideally a solar panel can raise the temperature of the air passing through it by as much as 50 or 60 degrees F.   In this type of collector an optimum airflow rate of 3 CFM per square foot of absorber has been suggested.  In general the larger the solar air panel, the better – small ones are probably not worth considering.  They should be built with quality paints, glazing and other components where possible to resist corrosion and decomposition from sunlight and other climatic elements.
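
To put those two rules of thumb together, here is a rough Python sizing sketch.  It assumes the 3 CFM per square foot and 50-60 degree rise quoted above, plus the common HVAC sensible-heat approximation of about 1.08 BTU per hour per CFM per degree F; treat the output as a ballpark estimate, not a guarantee.

```python
# Rough sizing sketch for a homemade solar air collector, using the figures above.
# Sensible heat rule of thumb: BTU/hr ~= 1.08 * CFM * temperature rise (deg F).

def collector_estimate(absorber_sqft, temp_rise_f=55, cfm_per_sqft=3.0):
    cfm = absorber_sqft * cfm_per_sqft        # suggested fan/blower flow rate
    btu_per_hr = 1.08 * cfm * temp_rise_f     # approximate heat delivered
    watts = btu_per_hr / 3.412                # same figure expressed in watts
    return cfm, btu_per_hr, watts

cfm, btu, w = collector_estimate(32)          # e.g. a 4 ft x 8 ft panel
print(f"airflow ~{cfm:.0f} CFM, output ~{btu:.0f} BTU/hr (~{w:.0f} W)")
```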

Pointing solar panels

Direction

For optimum efficiency any solar panel should face the sun at a perpendicular angle.  The position of the sun, however, changes constantly throughout the day.  Some institutions or uber-rich people might purchase solar trackers which employ servo or stepper motors to keep photovoltaic panels aligned with the sun.  Such ‘trackers’ increase overall efficiency by increasing morning and afternoon light collection.  The rest of us have to make do with permanently fixed or periodically adjustable panel mounts.  In the northern hemisphere, fixed panels are normally aimed at due (not magnetic) south, with their bases running perpendicular to that north-south line.  Some owners of grid-tied solar photovoltaic panels, however, are deciding to aim their panels towards the west.

Tilt

The effectiveness or efficiency of a given solar panel is definitely affected by its proper orientation to the sun, but as the sun moves around a lot, solar panels that do not automatically track its movement must seek a positional compromise.  The sun’s apparent altitude in the sky changes throughout the year.  Because of the tilt of the earth’s axis, the sun’s noon altitude swings through about 47 degrees (23.5 degrees above and below its equinox height) between the summer and winter solstices, six months apart.  Solar panels near the equator can be positioned parallel with the horizon and largely remain efficient by just pointing straight up.  The further a location is from the equator the more vertical a panel’s ideal tilt becomes.  Above the 45th parallel, vertically fixed solar panels mounted to the side of a building can perform admirably in the wintertime.  There is no one perfect tilt angle with which to keep a solar panel perpendicular to the sun’s rays throughout the year.  This fact motivates some people with adjustable panel mounts to periodically climb up on their rooftops with wrench in hand to refine panel tilt.  Others might wish to install a solar panel permanently in the best year-round average position and not worry about adjustments.

Older literature for solar panel installation might quote a rule of thumb where 15 degrees are added to the latitude for wintertime panel tilt, or 15 degrees are subtracted from the latitude for summertime panel tilt.  A more modern set of calculations, mimicked or repeated often around the web, suggests wintertime tilts that are a bit steeper than that old rule, to capitalize on midday rather than whole-day solar gathering, and summertime tilts that are a bit flatter, favoring better whole-day rather than midday collection.

-To calculate the best angle or tilt for winter:
(Lat * 0.89) + 24º = ______   (the latitude is multiplied by 0.89 and 24 degrees are added)

-The best angle for spring and fall:
(Lat * 0.92) – 2.3º = ______

-The best angle for summer:
(Lat * 0.92) – 24.3º = _____

-The best average tilt for year round service:
(Lat * 0.76) + 3.1º = _____

For the purpose of illustration a latitude of 35 degrees North will be chosen.   Locations somewhat close to this latitude include: the Strait of Gibraltar, Tunis Tunisia, Beirut Lebanon, Tehran Iran, Kabul Afghanistan, Seoul Korea, Tokyo Japan – and in America, cities along Interstate 40 or the old Route 66 (Raleigh NC, Memphis TN, Fort Smith AR, Oklahoma City OK, Albuquerque NM, Flagstaff AZ and Bakersfield CA).
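
Running the four formulas above for that example latitude gives the tilts illustrated below; a short Python sketch makes the arithmetic explicit.

```python
# A quick check of the tilt formulas above, using the example latitude of 35 degrees North.

def panel_tilts(lat):
    return {
        "winter":      lat * 0.89 + 24.0,
        "spring/fall": lat * 0.92 - 2.3,
        "summer":      lat * 0.92 - 24.3,
        "year round":  lat * 0.76 + 3.1,
    }

for season, tilt in panel_tilts(35).items():
    print(f"{season:12s} {tilt:.1f} degrees from horizontal")
# winter ~55.2, spring/fall ~29.9, summer ~7.9, year round ~29.7
```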

solar_tilt2b

solar_tilt2c

solar_tilt2d

 

A couple of sources for more information:

http://www.solar-facts-and-advice.com/

http://www.nrel.gov/rredc/solar_data.html

 

 

Metrification for the masses

*  When they weren’t lopping off every other person’s head in France during the revolution which began in 1789, reformers in that country seized the opportunity to make all kinds of other acute changes.  In 1791 for instance, the French Academy of Sciences was instructed to create a new system of measurements and units.  For two centuries now the rest of the world has been browbeaten and cajoled into adopting this sublime system of weights and measures, a process called metrification.  While most nations have capitulated to the apparent intellectual supremacy or empirical advantages of the metric system, there are still some holdouts in the world.  After two centuries these non-metricated miscreants still drive the more rabid reformatory zealots of metrification nuts.  Perhaps there are logical reasons in a few instances, not attached to loyalty or laziness, that compel these non-metric holdouts to hang onto some traditional weights and measures.

*  Feeling particularly erudite, the reformatory French academics chose to base this metric system on natural values that were unchanging and reproducible, and to use numerical units based on the powers of ten.  Unchanging natural values were hard to corral back in 1791, so the official definitions of all the basic metric units have undergone several changes since then.  The metre is the most fundamental metric unit and from it the other units were originally derived.  American dictionaries, spell checkers and textbooks won’t even spell the word right.  Technically a “meter” is just a measuring device.  If you’re going to adopt French units you might as well swallow their spelling.   Like the non-metric nautical mile, the metre was originally conceived as being a portion of the earth’s circumference.

Nauticle_mile_def6

*  While the older nautical mile was defined as a minute (1/60th of a degree) of arc along a meridian of the Earth, the new metre was conceptualized as being 1/20,000,000th part of that same pole-to-pole meridional distance.  Even before the oblateness of the earth was fully appreciated, French surveyors in the 1790’s determined a very fair approximation of what a metre should be.  Later surveys showed their metre to be about 0.2 mm short of its intended meridional definition.  Today most air and sea navigators still prefer to use non-metric nautical miles rather than kilometers because when using charts (nonlinear, 2-dimensional, Mercator projections or maps) it makes life a lot easier.

*  It quickly became self-evident that the intended international reproducibility of an accurate metre using the meridional definition was so impractical that a physical artifact had to be produced.  In 1799 a platinum bar called the “mètre des Archives” was made and used as a copy reference.  In 1875 the “Convention du Mètre” or Metre Convention was instituted to oversee the development of the metric system.  Conceived at the same time, the CGPM (“Conférence générale des poids et mesures” or the General Conference on Weights and Measures) was established to democratically coordinate international participation by holding meetings every 4-6 years.  Broad acceptance of metrification did not really begin to take hold until after WWII and the European integration that followed.  SI or “Système International d’Unités” is today’s official name for the metric system as ordained by the CGPM in 1960.

Confusion and inconstancy

*  There are inconsistencies in the metric system.  The redefinitions of base units have been frequent.  The SI crowd has begrudgingly adopted non-decimal legacy units of time like the second because they can produce no better alternative.  The SI intellectuals have regularly discouraged the use of seemingly compatible units and nomenclature simply because they themselves did not originally create or sanction them.  These same intellectuals have also adopted redundant and unnecessary units and nomenclature when simpler alternatives already existed.  Some unpopular and clumsy sounding SI units are floating around.

*  The currently approved MKS (metre, kilogramme, second) system of units supplanted the older CGS (centimeter, gram, second) system.  It was once simple to think of a gram in terms of the weight of one cubic centimeter of water at the melting point of ice.  Although originally a base unit, the litre (or liter) is no longer even an official SI unit!  The kilogram originally equaled the mass of a litre (1,000 cubic centimeters) of that same cold, pure water.  Obviously these definitions were not good enough, because they no longer apply.  The kilogram is the only metric base unit that hasn’t been redefined in terms of unchanging natural phenomena.  The authoritative kilogram is an object!  You can’t just produce an accurate kilogram in your laboratory located in Timbuktu.  In a dark vault somewhere in Paris sits a precious SI manufactured artifact.  Today’s official kilogram is a cylinder of 90% platinum and 10% iridium alloy.  Where once the metre was defined as one ten-millionth of the distance between the North Pole and the Equator, it was eventually redefined as a multiple of a specific radiation wavelength.  Today’s official redefinition of the metre is as a fractional part of the distance traveled by light in a vacuum.

*  The concept of time and the replacement of legacy time units with suitable modernized counterparts have vexed zealous metric reformers for two centuries.  Years, months, weeks, days, hours, minutes and seconds are not decimally related.  We are gifted with impressive sounding terms like nanoseconds, kiloseconds or milliseconds, but the ‘second of time interval’ was adopted by the metric system; it was not an original metric unit.  It was long defined as 1/86,400th of a mean solar day.  The second was later redefined in terms of astronomical observations (the ephemeris second), even as clocks built around tuning forks and then quartz crystals kept ever better time.  Today the SI second is officially defined as “the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom”.  Who knows how they’ll redefine the second next year?

*  Whereas a regular non-metric U.S. ton (or short ton) weighs 2,000 lbs, the British Imperial ‘long ton’ or ‘gross ton’ typically used in shipping cargo weighs 2,240 lbs.  A “metric ton” or “tonne” weighs 1,000 kilograms or 2,204.6 lbs.  When appending the prefix “kilo” to ton, things start to get confusing.  In terms of explosive force a kiloton might mean the equivalent of 1,000 metric tons of TNT.  As a unit of weight or mass however a kiloton might mean either 2,000,000 lbs or the same as a kilotonne (2,204,622.6 lbs).  A gigagram would equal a kilotonne but that term is infrequently used.
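
A few lines of Python make the ambiguity plain; the pound figures are simply the ones quoted above.

```python
# The different "tons" mentioned above, expressed in pounds.
LBS_PER_TON = {
    "short ton (US)":          2000.0,
    "long / gross ton (UK)":   2240.0,
    "metric tonne (1,000 kg)": 2204.6226,
}

for name, lbs in LBS_PER_TON.items():
    print(f"{name:24s} 1 ton = {lbs:>9,.1f} lbs   1,000 tons = {1000 * lbs:>12,.1f} lbs")
```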

* The seven current hallowed SI base units are the metre, kilogram, second, ampere, candela, mole and kelvin.  The Kelvin scale is an absolute thermometric scale but its units are not referred to as degrees.   We normally call increments of temperature “degrees” thanks in large part to the scale introduced in 1724 by a German physicist named D.G. Fahrenheit.   The Fahrenheit scale divides the range between the freezing and boiling points of water into 180 equal parts – like the degrees in geometry for half a circle.   D.G. Fahrenheit also invented the glass/mercury thermometer.   About two decades later, but still well before the French reforms, a Swedish astronomer named A. Celsius borrowed Fahrenheit’s idea but divided the range into only 100 equal parts.  Originally Celsius’s scale ran backwards, counterintuitive to today’s usage, but that situation was reversed after his death in 1744.   From 1744 to 1948 the units of what we now call the Celsius scale were better known as degrees of “centigrade“.   Eventually an Irish-born British physicist, William Thomson (Lord Kelvin), came along with further suggestions for improvement.   The Kelvin scale begins at absolute zero – there is nothing colder.   To make the Kelvin (K) scale fit in with the decimalized Celsius scale, the triple point of water (where gas, liquid, and solid phases of water coexist in thermodynamic equilibrium) had to be defined as exactly 273.16 K.   In other words the base-ten loving SI / metric system uses, for one of its base units, a value anchored to the ungainly fraction 1 / 273.16 (about 0.003661).
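
For anyone who wants to juggle the three scales, the conversions are simple arithmetic.  The little Python sketch below uses the 180-part Fahrenheit span, the 100-part Celsius span and the standard 273.15 offset between Celsius and kelvin (which puts the triple point at 273.16 K, as described above).

```python
# Simple conversions between the temperature scales discussed above.

def f_to_c(deg_f):
    # Fahrenheit spans 180 degrees between freezing and boiling; Celsius spans 100.
    return (deg_f - 32.0) * 100.0 / 180.0

def c_to_k(deg_c):
    # 0 deg C sits at 273.15 K; the triple point (0.01 deg C) is exactly 273.16 K.
    return deg_c + 273.15

print(f_to_c(212))      # 100.0  (boiling point of water)
print(c_to_k(0))        # 273.15
print(c_to_k(0.01))     # 273.16 (the defined triple point of water)
```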

Discouraged, clumsy, slang or unneeded

*  Created by a Swedish astrophysicist, the tiny increment of length called an angstrom is exactly equivalent to 0.1 nanometre, or one ten-billionth of a metre.  Mention however of the non-Imperial and non-metric but internationally recognized angstrom is officially discouraged by the SI’s International Committee for Weights and Measures.  The small calorie itself was a pre-SI metric unit of energy defined in the 1820’s as the energy needed to raise 1 gram of water by 1º C.  The dietary calorie (kilogram, large or food calorie) is 1,000 times larger.  The small calorie is obsolete, replaced in preference by the official SI “joule”.  Megabars, kilobars, bars, decibars, centibars and millibars of atmospheric pressure – are not SI units.  It takes 100,000 SI-legitimate pascals to equal one bar.  One bar is roughly equivalent to one standard atmospheric pressure at sea level (14.69 psi or 101,325 pascals).  Meteorologists and weather reporters usually prefer to describe changes in air pressure in terms of millibars rather than in the exactly equivalent hectopascals; it just sounds better.  In oceanography, during a descent from the surface, the drop in metres and the increase in water pressure in decibars correspond nicely.

*  In a weak attempt to sound sophisticated a scientific journalist might employ the term “kiloannum” to impress his audience rather than use the simpler terms “millennium” or “a thousand years”.  Students might use the jargon “fermi” rather than the more proper but awkward SI term femtometre to describe infinitesimal nuclear distances.  Attometres, zeptometres and yoctometres are smaller yet.  In astronomy where great distances are expressed one might seldom encounter the SI terms megametre, gigametre, terametre, petametre, exametre, zettametre or yottametre.  The most common vernacular one finds instead are the non-metric light year, parsec and astronomical unit.  The “astronomical unit” (which is roughly the mean distance between earth and sun) was given its formal modern definition in the 1970’s by the IAU (International Astronomical Union – also hosted by France) to patch up shortcomings in regular SI units caused when incorporating general relativity theory.  SI brings us gawky sounding terms like “gray” and “sievert”.  These terms were added to the dictionary not because they were necessary, but because they could be branded by SI authorities whereas “rad” and “rem” could not.  A gray is simply 100 times bigger than a rad and both units express energy radiated or absorbed.  A sievert is simply 100 times bigger than a rem and both units attempt to adjust radioactive dosages by accounting for type of tissue and type of radiation.

A Short Imperial unit background 

*  Maligned and criticized for still using imperialistic, old-fashioned weights and measurements when the rest of the world does not, the American public has shown resistance to metrification.  Primarily a British colony in the beginning, America inherited British imperial units, which were in turn heavily influenced by historic French and even ancient Roman measurements and weights.  The avoirdupois system of weights that Americans favor was actually developed by the French.  The Troy weight system of units of mass, still used in many locations around the world for quantifying precious commodities like gold, platinum, silver, gemstones and gunpowder – is also French (believed to be named for the French market town of Troyes).  Closely related to Troy weight, the apothecaries’ system of weights favored by physicians, apothecaries and early scientists has roots reaching all over central Europe and the Mediterranean.  The apothecaries’ system of weights was still being used by American physicians and pharmacists into the 1970’s.  After America separated from the British Empire the Americans kept the legacy units pretty much intact while the British did not.  Parliament, by meddlesome act or decree and mostly for the purpose of increased taxation, continued to make small changes to certain units of mass and volume.  These changes caused much confusion between American and British (pre-metric) imperial units, which still exists today.

thogg

*  Without digressing too far from the subject of metrification: it should be explained that without the discrepancy between wine and beer casks, and the British adoption (1824) and eventual retraction of the “stone” unit, the impetus behind a one-world metric system would never have been so great.  The legislated stone unit demanded a redefinition of several standard weights.  Today’s Imperial gallons, bushels and barrels are so screwed up because yesteryear’s hogsheads (large casks filled with wine, beer, liquor, whale oil, tobacco, sugar or molasses) were of different sizes.  A hogshead of wine has traditionally held more volume than a hogshead of beer.  In its defense Parliament did try to standardize hogshead volume back in 1423 but this had little effect.  Coopers at different locations made casks as they saw fit and eventually there came to be an accepted and even official difference in hogshead volumes depending on contents.  A multiplicity of different gallon, bushel and barrel definitions followed suit.  The UK Imperial gallon springs from the ale gallon but the U.S. liquid gallon is based upon the 1707 Queen Anne wine gallon.  Even today this curious distinction between wine and beer continues, as the American BATF and Treasury Department require different labeling on the two beverages.  Wine and stronger spirits are labeled only in liters or milliliters while beer containers are labeled only in gallons, quarts, pints or ounces.

* The bushel used to be a measure of volume for grain, agricultural produce or other dry commodities.  Bushels are now most often used as units of mass or weight rather than of volume.  It should be realized that the bushel of each commodity in the mercantile exchange market is unique and different.  A bushel of corn weighs 56 lbs. but a bushel of soybeans or wheat weighs 60 lbs.  A bushel of plain barley weighs 48 lbs. but a bushel of malted barley weighs only 34 lbs.  A bushel of oats in the U.S. weighs 32 lbs. but across the border in Canada, it weighs 34 lbs.  Okra weighs 26 lbs. per bushel and Kentucky bluegrass seed only 14 lbs.  Many other commodities exist whose specific values fluctuate according to the jurisdiction (country to country; state to state).  Pork bellies (the valuable bacon only) are traded by weight (one unit equals 20 tons of frozen, trimmed bellies).  The rest of the hog’s carcass in a commodities market is expressed as Lean Hog futures.  Refined oil might be shipped in 55 gallon steel drums (a design that became ubiquitous during WWII), but crude oil is measured and traded by the standard 42 U.S. gallon barrel – a legacy of the common wooden barrels of yesteryear.  Barrels of other commodities often contain a volume of 31.5 U.S. gallons.
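
Because there is no single conversion factor, traders simply memorize (or look up) each bushel weight.  A trivial Python lookup built from the figures in this paragraph illustrates the point.

```python
# Pounds per bushel, as quoted above (US figures unless noted).
BUSHEL_LBS = {
    "corn": 56, "soybeans": 60, "wheat": 60,
    "barley": 48, "malted barley": 34,
    "oats (US)": 32, "oats (Canada)": 34,
    "okra": 26, "Kentucky bluegrass seed": 14,
}

def bushels_to_lbs(commodity, bushels):
    return BUSHEL_LBS[commodity] * bushels

print(bushels_to_lbs("corn", 100))            # 5600 lbs
print(bushels_to_lbs("malted barley", 100))   # 3400 lbs
```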

twelve6c

*  Where the Imperial system does not fail and probably needed no replacement is in its units of length, distance and area.  Imperial units of length were intuitively developed over the ages.  Metric units of length might be more easily abstracted numerically in calculations for pencil pushing types, but these are not nearly so instinctive for everyday usage.  Engineers and architects seldom have to build what they design; that labor falls to builders, millwrights, manufacturers, fabricators and others who work with real materials on a daily basis.

*  Consider the Imperial ruler or tape measure and its metric counterpart.  Working with fractions, a fairly accurate Imperial ruler could be reconstructed by almost anyone given an empty room, a pencil, a pair of scissors and a strip or two of unmarked paper exactly one yard in length.  Feet, inches, half-inches, quarter-inches, eighth-inches and perhaps sixteenth-inches could be adequately marked upon a blank yard-long strip of paper by repeatedly folding and halving.  In contrast it would quickly be realized that an adequate depiction of centimeters and millimeters could not be intuitively laid out upon a blank, metre-long strip of paper.  There can be another elegance in fractions.  Builders and fabricators familiar with feet and inches can often perform the type of mental arithmetic that would send their decimal-loving metric counterparts scurrying for the nearest calculator or pencil.

league4cjpg

*  Americans find many customary units desirable and appropriate.  Non-SI unit terms like liquid ounces, shots, gills, noggins, fifths, teaspoons, cups, pints, quarts, gallons, barrels, board feet, pecks, bushels, BTUs, millibars, carats, cycles per second, pounds, ounces, troy ounces, drams, tons, caliber, mils, standard gauge, rods, chains, inches, feet, yards, furlongs, miles, nautical miles, fathoms, knots, picas, angstroms, light years, parsecs, acres, townships and sections remain in the American vernacular.  The sluggish progress in thorough American metrification has been excused as the result of ignorance, laziness or complacency by the public.  That may be.  Remember though that American schools have versed students in the metric system for the last 50 years or more.  We can use SI whenever we want to.  We’ve experienced strong-arm attempts to have SI foisted upon us, as in the Metric Conversion Act and the Fair Packaging and Labeling Act.

met3

*  Never the perpetrators of a bloody social revolution like the Russian one, or France’s, where mobs decapitated anyone who thought differently or had money, Americans might simply resist metrification because they resist anything totalitarian by nature.  That’s what metrification is: a totalitarian ideal.  It demands the wanton destruction, scourging, eradication and abandonment of any other competing form of weights and measures.  So who’s the real bigot: the unassuming Japanese or American builder who finally learns how to use a conventional tape measure well and sees no reason to change, or some frustrated high school chemistry teacher who wants a dumbed-down tape measure and for all other alternatives in the world to be immediately destroyed?

*  In Japan, an otherwise thoroughly metricated country, carpenters, builders and realtors still favor their shakkanho length measurements, which were acquired from ancient China.  The shaku is the base unit and was originally the length from the thumb to the extended middle finger (about 18 cm or 7 in).  That length grew to approximately 30.3 cm, or 11.93 inches (the kanejaku or “carpenter’s square” shaku).  Floor space in a Japanese house is usually described in terms of a number of single traditional straw tatami mats or a square of two tatami mats (the tsubo).  The koku, defined as 10 cubic shaku, is still used in the Japanese lumber trade.

*  In order to prevent the fines and prosecution that other non-SI compliant merchants in Europe have been hit with, the British and Irish have seen fit to pass legislation which protects their traditional non-SI whiskey and beer rations (like gills, pints and Imperial gallons).  When it came to alcohol, it seems as if the rigors of metrification hit a little too close to home.   The UK decimalized its currency back in 1971 but has kept its own pound sterling – the oldest currency still in use.   Few things are as frustrating for a foreigner to comprehend as the meaning of old English tower pounds, sterling pounds, gold sovereigns, guineas, quid, fivers, coppers, crowns, shillings, sixpence, halfpennies, farthings and tuppence.  If there were space enough left in this post these could be explained.  Some of the old legacy Imperial units mentioned previously have very interesting backgrounds as well, but those explanations will have to wait.  The topic of this post has been the triumphant march of metrification and the liberating, joyful peace of mind and harmony it will bring to the world once its total acceptance is finally complete. —————————–

Captured from an e-mail years ago: somewhere an anonymous wit promotes these additional units – lest they become forgotten in the march of time also…

* 1 millionth of a mouthwash = 1 microscope

* Ratio of an igloo’s circumference to its diameter = Eskimo Pi

* 2,000 pounds of Chinese soup = Won ton

* Time between slipping on a peel and smacking the pavement = 1 bananosecond

* Weight an evangelist carries with God = 1 billigram

* Time it takes to sail 220 yards at 1 nautical mile per hour = Knotfurlong

* 16.5 feet in the Twilight Zone = 1 Rod Serling

* Half of a large intestine = 1 semicolon

* 1,000,000 aches = 1 megahurtz

* Basic unit of laryngitis = 1 hoarsepower

* Shortest distance between two jokes = 1 straight line

* 453.6 graham crackers = 1 pound cake

* 1 million-million microphones = 1 megaphone

* 2 million bicycles = 2 megacycles

* 365.25 days = 1 unicycle

* 2000 mockingbirds = 2 kilomockingbirds

* 52 cards = 1 decacards

* 1 kilogram of falling figs = 1 FigNewton

* 1,000 milliliters of wet socks = 1 literhosen

* 1 millionth of a fish = 1 microfiche

* 1 trillion pins = 1 terrapin

* 10 rations = 1 decoration

* 100 rations = 1 C-ration

* 2 monograms = 1 diagram

* 4 nickels = 1 paradigm

* 2.4 statute miles of intravenous surgical tubing at Yale University Hospital = 1 IV League and…

* 100 Senators = Not 1 good decision

Yeast & Fermentation

This post endeavors to briefly illuminate a particularly minuscule organism that since the dawn of mankind has exerted considerable influence over the human condition.  Found in the dirt, air and water, some yeasts also reside naturally inside all vegetation, animals and humans.  All fungi are parasitic or saprophytic and cannot manufacture their own food.  Since yeasts are fungi, and all fungi are heterotrophs that live on preformed organic matter, some yeasts have been using mankind for far longer than he has been using them.   To state that mankind has domesticated yeast for thousands of years is probably an erroneous statement.  Whether he knew it or not, however, mankind has been exploiting these individually invisible microorganisms for his own benefit for perhaps ten millennia or more.  The historic relationship between brewing and baking is more intertwined than most readers may appreciate.  Today yeasts are also used to produce food additives, vitamins, pharmaceuticals, biofuels, lubricants and detergents.  The more one learns, the more his appreciation grows for these seemingly simple little life forms.  It doesn’t take a degree in organic chemistry or molecular biology to put these little critters to productive work.

Yeasts are more evolutionarily advanced microorganisms than prokaryotic organisms like bacteria (viruses, which are not even cells, are a separate case altogether).  Prokaryotes don’t have a nucleus.   Higher life forms like onions, grasshoppers, humans and yeasts are eukaryotes, which means their cells store genetic information within a nucleus.  Simpler and more basic than human cells and easier to work with, bread yeast (Saccharomyces cerevisiae) was the first eukaryotic organism to have its genome fully sequenced.  A genome is the hereditary information stored in an organism – the entire DNA/RNA sequence for each chromosome.

The S. cerevisiae yeast genome possesses something like 12 million base pairs and 6,000 genes compared to a more complex human genome with 3 billion base pairs and 20,000-25,000 protein-coding genes.  Although sequencing has become easier in recent times, 18 years ago the thorough examination of Saccharomyces cerevisiae’s (beer yeast) genome was no simple task.  That project inspected millions of chromosomal DNA arrangements, involved the efforts of over 100 laboratories and was finally completed in 1996 after seven years of hard work.

* The 6th eukaryotic genome sequenced was also a yeast (Schizosaccharomyces pombe – in 2002) and it contained 13.8 million base pairs. 

The mention of this first accomplished genome sequencing is significant because it caused an upheaval in the accepted classification of yeast species.  There are probably a great number of yet undiscovered yeast species in the wild, but presently only a small percentage (between 600 and 1,500 species depending upon your source of information) are cataloged.  Saccharomyces cerevisiae is one of the more important fungi in the history of the world, yet its classification is very much in a malleable state of flux.  You may read about the many types of bread yeast, or the hundreds of “varieties” of beer yeast or the hundreds of “strains” of wine yeast – but for the most part these share the same DNA and therefore must be considered the same species.   With beer and especially with wines the choice of yeast (strain or variety, and species where applicable) can profoundly influence the outcome of the beverage’s flavor profile.

Bad fungus

“Almost all yeasts are potential pathogens” but none of the Saccharomyces species or their close relations have been associated with pathogenicity toward humans.   “Candida and Aspergillus species are the most common causes of invasive fungal infection in debilitated individuals”, with six species (Candida albicans, C. glabrata, C. krusei, C. parapsilosis, C. tropicalis and Cryptococcus neoformans) accounting for about 90% of those infections.

Other multi-cellular (non-yeast) fungi affect humanity in various ways: Trichophyton rubrum and / or Epidermophyton floccosum bring us athlete’s foot, ringworm, jock itch and nail infections.  A member of the genus Penicillium (with over 300 species) brings us a life saving antibiotic which kills certain types of bacteria in the body.   Claviceps purpurea or “rye ergot fungus” – if not immediately lethal or debilitating, brought us a mind altering alkaloid similar to LSD.  One of the more important negative influences fungi exercise upon us is in their capacity to destroy food crops.

Domestication?

A defining characteristic of domestication is artificial selection by humans.  Domestication means altering the behaviors, size and genetics of animals and plants.  These things were not done to yeast in antiquity.   Isolation of certain beneficial yeast strains was only beginning some 200 years ago, in breweries.  Only relatively recently (by 1938) was one scientist able to cross two separate strains of yeast and come up with a new one.  Although by the 1970’s scientists were beginning to mutate and hybridize yeast, it may be with the more recent attempts to engineer yeast to convert xylose (a wood sugar) into cellulosic ethanol that some additional yeast species can confidently be described as domesticated.  Even then “engineering” is a strong word.  Yeast mutate all the time without human help.  Scientists didn’t create a new fungus but started with examples that already decomposed dead trees or other cellulose containing plant material.  By refining the selection process for yeasts with numerous cellulase enzymes, scientists hope to produce economical automotive fuel from sawdust and other normally wasted biomass.  The quest for an ideal yeast and bacterial biomass consuming combination is still ongoing.  This particular process defines artificial selection, not gene modification.

Right now, this very moment anyone can capture wild yeast from vegetable matter or from the very air to make bread or to ferment beer or wine.  In antiquity the women folk who cooked and then later bakers, brewers and tavern keepers likely kept a portion of a previous dough or barm yeast culture as a ‘starter’ simply to hasten the development of the next batch.  While this process might support claims of artificial yeast selection throughout history, one might also be reminded that sanitation during those bygone days was questionable and that exposure to wild yeast and bacteria was probably persistent.  It has always been easy to just whip up a new yeast culture from scratch, as will be explained shortly and as revealed in several recipes from a 120 year old cookbook.

bub33

Bread, Beer & Wine

The discovery or invention of wine, beer and bread was unavoidable and early man deserves no special intellectual credit for the achievement because omnipresent yeasts and bacteria did all the work.  Consider the cavewomen who picked a bountiful harvest of wild grapes and then carted these back home in animal skins or clay-lined baskets to be consumed later.  In a few days’ time wild yeast and bacteria would begin breaking down the fructose and glucose from juice released from crushed grapes at the bottom of any impermeable container.  The oldest available archeological evidence of a fermented beverage comes from 9,000 year old mead (honey wine) tailings found in northern China.  Here probably someone had originally, unknowingly enabled the enzymes from yeast to work by adding water to get all the sticky honey out of a container.  Likewise the inescapable discovery of bread and beer is no mystery.  Raw fresh grain is a soft and easily chewable foodstuff.  Dried grain is next to impossible to chew, so ancient man was soon mashing it between two rocks to make the powder called flour.  Dry flour is not very tasty, so the next obvious experiment would be to add water and later perhaps to cook the gruel in a fire – eventually inventing bread.  Obviously the first breads were probably flat breads.  The proper leavening of bread actually requires several hours of rest for fermentation to create carbon dioxide bubbles which get trapped in gluten to make bread rise.  Had someone boiled a wet soup from the flour instead and then abandoned it because it wasn’t very good, it would have eventually turned into a beer in a few days.  Perhaps the first beer or ale resulted simply from someone’s bread falling into a pot of water.  Regardless, our encounter with fermentation and the invention of both bread and alcoholic beverages was inevitable.

Briefly, Saccharomyces cerevisiae (or sugar fungus) is typical of many yeast species but is a particularly successful species because it can live in many different environments.  Few of the other 64,000 or so members in the Ascomycota fungal phylum can reproduce both sexually and asexually while also being able to break down their food through both aerobic respiration and anaerobic fermentation – all at the same time.

budding yeast

Under favorable conditions most, but not all, yeasts reproduce asexually by budding, where a daughter cell pinches off from the mother cell.  On average a particular yeast cell can divide between 12 and 15 times.  In a well controlled ferment, aerobic (with oxygen) respiration allows “sugar fungus” yeast cells to reproduce or double about every 90 minutes.  During respiration carbohydrates donate electrons, allowing cell growth and the production of CO2 and water (H2O).   During anaerobic fermentation carbohydrates are partially oxidized while ethanol and CO2 are produced.  One yeast cell can ferment approximately its own weight in glucose per hour.  Favorable ferment conditions in this context imply moisture, mineral nutrition, a neutral or slightly acidic pH environment and a narrow temperature range of 50° F to 99° F.  Most yeast cells are killed at temperatures above 122°F.

* (No yeast yet known is completely anaerobic nor is fermentation necessarily restricted to an anaerobic environment).
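
To see how quickly that 90-minute doubling adds up, here is a toy Python projection.  The starting population of 25 billion cells is an assumption (roughly one gram of compressed yeast, per the figures given a little further below), and real cultures slow down long before the arithmetic suggests.

```python
# A toy projection of the ~90 minute aerobic doubling time mentioned above.
# Real cultures slow down as sugar, oxygen and pH change, so this only
# illustrates the doubling arithmetic.

def yeast_population(starting_cells, hours, doubling_time_hours=1.5):
    return starting_cells * 2 ** (hours / doubling_time_hours)

start = 25e9   # assumed: roughly the cell count in one gram of compressed yeast
for hours in (3, 6, 12):
    print(f"after {hours:2d} h: ~{yeast_population(start, hours):.1e} cells")
# 3 h = 2 doublings (4x), 6 h = 4 doublings (16x), 12 h = 8 doublings (256x)
```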

Under harsh or unfavorable conditions yeasts like S. cerevisiae can become dormant and reproduce sexually by producing spores.  Spores can survive for hundreds of years, perhaps indefinitely, and like many other infinitesimal items can remain airborne for years before coming back into contact with the surface of the earth.  Anyone questioning this assertion should have a look at Lyall Watson’s book, titled “Heaven’s Breath: A Natural History of the Wind”.

DSCF0053dax

A typical yeast cell measures about 3–4 µm (microns or millionths of a meter) in diameter.   Dry packaged yeast as imaged above can survive a long time when refrigerated.  The 3 large bakers yeast packages pictured at the bottom are labeled as containing 21 grams of yeast each.  The 3 brewers yeast packages on top are labeled 5 grams.   Compressed yeast, which contains fewer yeast cells per gram because less water has been removed, is estimated to contain between 20 and 30 billion living organisms per gram.  The physical volume of that gram would be about the size of a pencil eraser.
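
As a rough sanity check of that 20-30 billion figure, one can work backwards from the cell size given above.  The sketch below assumes a spherical cell about 4 µm across with a density a little above that of water – both assumptions – and the answer lands in the right neighborhood.

```python
# Working backwards from cell size to cells per gram (a rough sanity check).
import math

def cells_per_gram(diameter_um, density_g_per_cm3=1.1):
    radius_cm = (diameter_um / 2.0) * 1e-4                 # 1 micron = 1e-4 cm
    volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3
    return 1.0 / (volume_cm3 * density_g_per_cm3)

print(f"{cells_per_gram(4):.1e} cells per gram")   # ~2.7e10 - tens of billions
```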

Bacteria

In general, bacteria are to be avoided during normal food and beverage production, but as usual there are exceptions.   Many of the approximately 125 species of Lactobacillus bacteria are closely associated with food spoilage.  Without the assistance of beneficial bacteria (several of which are Lactobacillus members), however, we would have no vinegar, chocolate, cider, cheese, kimchi, pickles, sauerkraut, sourdough bread or yogurt.  Bacteria can drive fermentation by themselves.  More preferably, certain beneficial bacteria can assist yeasts in the fermentation reaction for breads, beers or wines and are sometimes deliberately used to do so.

Enzymes

In baking or brewing it is the enzymes that yeasts or bacteria possess or produce which catalyze chemical reactions and drive fermentation.  A mixture of enzymes might be needed to successfully break down complex, longer chained carbohydrates before either bread leavening or ethanol production is achieved.  In alcoholic fermented beverages, enzymes might be acquired from sources beyond yeast and bacteria, such as from human saliva, where for a thousand years descendants of the Incas have chewed maize and spit into common vats to produce the wine called “Chicha”.  The rice wine “Sake” is made with the help of enzymes from a (non yeast) fungus mold named Aspergillus oryzae.  The enzymes used to create the Mongolian horse milk wine known as “Ayrag” or “Kumis” came from the lining of a bag sewn from a cow’s stomach.   There are far too many types of enzymes to list here but the names of some important ones often end in the suffix “ase” (as in: lactase, saccharase, maltase, alpha amylase or diastase, zymase, invertase and alpha-galactosidase).

Sugar or starch

To briefly outline and oversimplify a topic that deserves more attention: there are many names for, and many types of, starches and sugars and the enzymes needed to break them down.  There are simple sugars, complex sugars and very complex sugars, or conversely one could say there are monosaccharides, disaccharides, oligosaccharides and polysaccharides.  Glucose (or dextrose), fructose (or levulose), galactose, and ribose are monosaccharides and examples of the simplest sugar molecules.  Two monosaccharides are found combined in a disaccharide – as in sucrose, lactose or maltose.  Table sugar is almost pure sucrose.  An enzyme like invertase (also called sucrase or saccharase) is needed to split sucrose into two mono or simple sugar molecules (glucose and fructose) before fermentation into ethanol and CO2 can commence.  Oligosaccharides generally contain anywhere between 3 and 9 monosaccharides.  Polysaccharides are even longer, linear or branched polymeric carbohydrates and may sometimes contain thousands of monosaccharides.  Starch and cellulose are examples of polysaccharides.

Sugarcane was originally indigenous to Southeast Asia and was slowly spread by man to surrounding regions.  In ancient times sugar was exported and traded like a valuable spice or medicine – not as a food commodity.  There was some spread of sugarcane cultivation in the medieval Muslim world but otherwise cultivation did not blossom until the 16th century when colonials reaped their first sugar harvest in the New World (Brazil and the West Indies or Caribbean Basin).  Sugar from sugar beets was never realized until a German chemist noticed that the beet roots contained sucrose.  The first refined beet sugar commodity appeared around 1802.

DSCF0069c3

Baking

“Leaven” is the ancient equivalent term for yeast and it caused bread to rise.  Leaven was mentioned in the Bible when Moses led the Israelites out of Egypt, and where they all left in a hurry without waiting for their bread to rise.  Flat, unleavened, unremarkable bread is served during Passover, which is treated less as a feast of celebration than as a remembrance of deliverance, simplicity, haste, and powerlessness.  “Yeast” is a younger word with roots from Indo-European and Old English words meaning surface froth, bubble, foam and boil.  In times past and probably for many centuries, housewives and cooks usually made both bread and beer on a frequent basis, from a leaven-yeast starter that they maintained in the kitchen.  In both Medieval Europe and colonial North America many households also maintained a constant supply of “small beer” on hand for servants and children or for general consumption.  Small beer had a low alcohol content but some taste, and since the wort had been boiled during brewing it was usually much safer to drink than the local water.  Two centuries ago some children drank small beer with breakfast just like today’s children might drink orange juice.

Almost all bread before the 1840s was probably a form of sourdough bread.  Without the help of either bacteria or refined sucrose, S. cerevisiae yeast alone cannot properly break down the starches (polysaccharides or carbohydrates) in flour, work its fermentation or cause bread to rise.  In the early 1800s, for the first time, bakers collectively began making sweet breads (as opposed to sour) by using bottled yeast skimmed off and collected from ale (beer) vats.  This renaissance in baking quickly spread outwards from Vienna, Austria.  In general, bakers started buying top-fermenting beer yeast from brewers.  Initially the yeasts were collected by skimming barm or krausen off the top of a beer vat and putting it into bottles.  In about this same time frame another renaissance or revolution was occurring in the beer world.   German brewers were learning to make lagers, which employed different (bottom dwelling) yeast and much cooler and longer fermentation periods.  At the time lagers were a taste sensation and considered a great improvement over the heavier ales.  With many brewers ‘changing horses in mid stream’ to use different yeast and processes in order to jump on the lager bandwagon, bakers in Vienna and elsewhere were left without convenient sources of sweet yeast.  To fill that void ‘press yeast’ was developed.  The forerunner of modern baker’s yeast, press yeast was first skimmed from the top of a dedicated grain mash and washed and drained carefully before being squeezed in a hydraulic press.  Modern baker’s yeast has pretty much been selected for optimum carbon dioxide production.  Such yeast would still make good ale.  Bread dough makes alcohol while fermenting but that escapes when it is baked.

* The grains corn and rice have no gluten.  To make breads with these grains rise, flour with gluten must be added. 

* “Quick breads” like biscuits, pancakes, bannock, scones, sopaipillas and cornbread are made with “self-rising flour” or regular flour with the help of a baking powder.  Self-rising flour merely contains its own baking powder.  Baking powder is a mixture of soda, acid salts and starch (which helps keep the other two ingredients inactive).  Baking powder is basically a little bomb, a little chemical reaction for making gas bubbles, waiting only to be triggered by the addition of liquid.

beer_bread4

Sourdough bread

Sourdough is a vague term.  There are many ways to create a sourdough starter.  While the name implies a sour taste due to contribution of bacteria and / or wild yeast, some sourdoughs taste little different than normal commercial sweet bread.  Some sourdough starter recipes actually call for baker’s yeast to be used while others might begin with pineapple juice, potatoes or even yeast captured in an opened can of beer left on the kitchen counter top for about a week.  A characteristic practice of sourdough bread making is that a portion of the ‘sponge’ is to be retained after each dough batch and is stored in a cool place to be used as the next starter.  ‘Sour mash’ whiskey has the same connotation – part of the original yeast and enzyme culture is retained and used in the next batch – maintaining consistency of product.   In brewing “re-pitching” the yeast is similar to using a sourdough starter; a portion of the live yeast from the bottom or top of a wine must or grain mash is saved to be reused again.

In the 1840s, as the first Bavarian lager technology was reaching America, gold miners were about to congregate in the California Gold Rush.  San Francisco is a modern bastion of sourdough bread patronage, with some restaurants or bakeries claiming to have maintained the same starters since the Gold Rush days.  One species of lactic acid bacteria found in some sourdough is actually named after the city: Lactobacillus sanfranciscensis.  These starters might also include species of yeast (like Saccharomyces exiguus or Candida milleri) that can leaven bread by working on sugars other than simple sucrose.

Homemade yeast

While fresh compressed yeast was becoming common in the urban food markets of Europe and America by the 1870’s, many individuals (especially those in remoter areas) simply made their own yeast.  The “White House Cook Book” (copyrighted 1887) was an authoritative publication used by ambitious housewives across the country.  The book gives several recipes for starting a yeast culture, including the use of milk or salt and even drying the yeast into cakes for later use.  One of the book’s recipes for yeast is simply titled “Unrivaled Yeast” and it resembles the following (the actual recipe is on p.242):

- boil 2 oz. of hops in 4 qts. of water for 30 minutes, strain and let cool

- mix this water in a large bowl with 1 qt. flour, ½ cup salt and ½ cup brown sugar – let stand for 3 days

- mix this with 6 boiled and mashed potatoes – let stand for another day, stirring frequently

- ready to use or to be stored in bottles for future use (good if kept cool for about 2 months)

Obviously the yeasts native to the potatoes were killed by boiling, so yeasts from the atmosphere, and perhaps from the flour as well, were the ones captured.  Sanitation and sterilization of utensils was and still is important to limit the procreation of undesirable bacteria.   Hops (flowers of the Humulus lupulus plant) are frequently mentioned in these older recipes because hops, which were also used as herbal medicine, act as an antiseptic / antibiotic preservative by inhibiting bacterial growth but not beneficial yeast growth.

* The Reinheitsgebot or Bavarian Purity Law of 1487 – specified the use of only water, barley and hops – for the brewing of beer.   The contribution of yeast was not appreciated but the antibacterial benefits and virtuous bitter flavor components of hops were.  Evidence suggests that hops were being used in Bavarian beer as early as 736 in an abbey outside Munich.  The Reinheitsgebot also had the effect of discouraging competing imported Belgian beers which preferred to use gruit and of preserving the wheat harvest for those needing to bake bread for food. 

There are many, many other interesting facts to discuss about yeast, enzymes and bacteria in regards to fermentation but this post has to draw a conclusion or come to an ending somewhere.  No more time will be taken to examine yeast killing sulfides in wine, the alcohol tolerance of different yeasts, turbo yeast or how Champagne is created by secondary fermentation.  Somehow it seems that yeasts have used us just as much as we have used them.  We have changed their nature little – if at all.  For the small percentage of yeast species we have identified, we are on the verge of understanding the true nature of just a few.

Water Turbines

2wheelsc

The Egyptians were using mechanical energy to lift water with a wheel in the 3rd century BC.   Four hundred years later, in the 1st century AD, Greek, Roman and Chinese civilizations were using waterwheels to convert the power of flowing water into useful mechanical energy.   The word “turbine” was coined from a Latin word for “whirling” or “vortex”.  The main difference between a water wheel and a water turbine is usually the swirl component of the water as it passes energy to a spinning rotor.  Although the Romans might have been using a simple form of turbine in the 3rd century AD, the first proper industrial turbines began to appear about 200 years ago.  Turbines can be smaller in diameter for the same power produced, spin faster and can handle greater heads (water pressure) than waterwheels.  Windmills and wind turbines are generally differentiated by the reasoning that windmills turn wind-power into mechanical energy whereas ‘wind turbines’ convert wind-power into electricity.  This post attempts to reveal to those individuals with an exploitable water source that modest advancements in ‘micro’ hydro technology have made it feasible to potentially create useful power from low water heads or from very modest water sources.

wheel_blank4h2

Above, the horizontal undershot waterwheel requires the least engineering and landscaping labor to install; the width of the runner can be tailored to match the flow rate and only a small water ‘head’ is required.  The ‘breastshot’, ‘overshot’ and ‘backshot’ styled waterwheels get progressively more efficient.

Water head can be thought of as the weight of water in a static column.  Since water doesn’t compress, the pressure at the bottom of a column (measured as psi or pounds per square inch) is directly related to the height of the water above it.  As a stream drops in elevation, its head is a measurement of that drop.  Water weighs 62.427 lbs per cubic foot.  There are 1,728 cubic inches in a cubic foot and 144 square inches on its base, so a cube of water 12” high, 12” wide and 12” deep presses on that base with (62.427 / 144) or about 0.433 lbs. per square inch.   Any column of water 1 ft. high, regardless of width, still has a water head of 1 ft. and a psi of 0.433 lbs/in².   Water drop is simply multiplied by the constant 0.433 to determine the potential psi.
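
In code, the head-to-pressure rule looks like this (a minimal sketch using only the figures just given):

```python
# Static head to pressure, using the figures above.
LBS_PER_CUBIC_FOOT = 62.427
PSI_PER_FOOT = LBS_PER_CUBIC_FOOT / 144.0     # 144 square inches per square foot -> ~0.433

def head_to_psi(head_feet):
    return head_feet * PSI_PER_FOOT

print(f"{head_to_psi(1):.3f} psi")     # ~0.433 psi per foot of head
print(f"{head_to_psi(100):.1f} psi")   # ~43.4 psi for a 100 ft drop
```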

boydt2

Boyden turbine

A Frenchman named Fourneyron invented the first industrial turbine in 1827.  The idea was brought to America and improved upon in the form of the Kilburn turbine in 1842.  By 1844 a conical draft tube addition resulted in the Boyden turbine.  There were dozens of Boyden turbines in operation in northeast America by the time radical abolitionist John Brown raided Harper’s Ferry in 1859.   Located at the confluence of the Shenandoah and Potomac rivers, Harper’s Ferry was a national armory and a beehive of activity where gunsmiths made small arms.   In 1859 at least 2 Kilburn and 5 Boyden turbines were driving the jack-shafts and belts needed to power the lathes, sawmills and other equipment necessary to keep 400 employees busy at the armory.

Fourneyron’s turbine and the subsequent Kilburn and Boyden types are known as outward flow reaction turbines (which are reminiscent of cinder, sand or fertilizer spreaders – but with water spraying out at the bottom).  They were followed by increasingly efficient turbines including the Leffel double turbine, John B. McCormick’s mixed-flow turbine, and the New American and Special New American turbines.

damt4b

A different type of turbine called an inward flow (or radial flow) reaction turbine was developed by James B. Francis in 1849.  In the snail shaped Francis turbine, water is sucked into a spiraling funnel that decreases in diameter.  Used at the beginning of the 20th century mainly to drive jack-shafts and belts for machinery in textile mills, Francis type turbines soon became the type favored for hydroelectric plants and are the type most frequently used for that purpose today.  This <link to an image>, apparently taken in Budapest before 1886, shows what looks to be a Francis turbine being installed with a vertical axis rather than a horizontal one.

kap4u

A “runner” is that part of a turbine with blades or vanes that spins.   As with any other turbine, the scale of dimensions can be adjusted up or down to suit individual needs.   Although small Francis turbines are produced, the ones used in large hydroelectric power stations are impressively huge – some producing more than a million horsepower each (1,341 hp = 1 megawatt).   The largest and most powerful Francis type turbines in the world are in the Grand Coulee Dam (Washington USA).  The runners of the turbines there have diameters of 9.7 meters and are attached to generators producing as much as 820 MW each.   China’s “Three Gorges Dam” is capable of the world’s largest electrical output however, with 32 main generators producing an average 700 MW each for a total 22,500 MW optimum output.   Located between Brazil and Paraguay, the world’s second largest dam (in terms of generating capacity) is the Itaipu dam, with 20 Francis turbines powering 700 MW generators.   In 2012 and 2013 Itaipu’s annual electrical output actually surpassed that of Three Gorges due to the amount of rainfall and available water.

kap2cc

Another type of reaction turbine, developed by an Austrian (Viktor Kaplan) in 1913, looks like a boat propeller.  Some windmills are called Kaplan turbines.  The blades or vanes on a Kaplan designed hydro turbine are adjustable, allowing the turbine to be efficient at different workloads or with varying water pressures.  Although complicated and expensive to manufacture, the Kaplan design is showing up more frequently around the world, especially in projects with low-head, high flow watersheds.  They can be found working in the vertical or the horizontal planes.  Large Kaplan turbines have been working continuously for more than 60 years at the Bonneville dam.  The Bonneville dam is on the Columbia River between Washington and Oregon, several hundred miles downstream from the Grand Coulee dam.  Both dams were started at the same time during the depression and were initiated by Roosevelt’s (FDR’s) “New Deal”.  Small inexpensive Kaplan turbines (without adjustable vanes) can be made to work in streams with as little as 2 feet of head.

Tyso2

The so-called “Tyson” turbine looks like it could qualify as a Kaplan turbine, but this modern example of micro hydroelectric technology encases its own generator in a waterproof housing.  The unit is submerged into a stream and usually suspended from a small tethered raft.  The stream can be shallow but obviously a high flow rate will encourage the best electrical generation.

Yet another type of water turbine is loosely referred to as a “crossflow turbine.”  In the early 1900s two individuals on opposite sides of the world independently contrived essentially the same turbine design.  A Hungarian professor named Banki and an Australian engineer named Mitchell invented turbines that combine aspects of both a reaction (or constant-pressure) turbine and an impulse (or free jet) turbine.  The runner of a Banki-Mitchell (or Ossberger) crossflow turbine is cylindrical and resembles the barrel fan that one might find in a forced air furnace or evaporative swamp cooler.  The design uses a broad rectangular water jet that travels through the turbine only once but passes each runner blade twice.  The moving water has two velocity stages and very little back pressure.

banki4c

Most suited to locations with low head but high flow, low-speed crossflow turbines like these have a flat efficiency curve (the annual output is fairly constant and not as affected by a fluctuating water supply as some other designs are).  Large commercial crossflow turbines are manufactured that can handle 600 ft. of head and produce 2,500 hp.  Small homemade Banki-Mitchell units have been constructed that are capable of producing about 400 watts using a car alternator with 5.5 CFS (cubic feet/sec) of water from a stream with a head of only 33 inches.  These units can make considerable noise, so to keep vibrations minimized these turbines should be well balanced and spun at moderate revolutions per minute.
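Those homemade figures can be sanity-checked with the basic hydro power relation P = ρ·g·Q·H·η (water density × gravity × flow × head × overall efficiency).  A minimal Python sketch, in which the roughly 31% wire-to-water efficiency is an assumption chosen to line up with the ~400 watt claim above:

    RHO = 1000.0   # density of water, kg/m^3
    G = 9.81       # gravitational acceleration, m/s^2

    def hydro_power_watts(flow_cfs, head_inches, efficiency):
        """Electrical output estimate: P = rho * g * Q * H * efficiency."""
        flow_m3s = flow_cfs * 0.0283168   # cubic feet per second -> m^3/s
        head_m = head_inches * 0.0254     # inches -> meters
        return RHO * G * flow_m3s * head_m * efficiency

    # Numbers from the homemade Banki-Mitchell example above
    print(round(hydro_power_watts(5.5, 33, 1.0)))    # ~1281 W theoretically in the water
    print(round(hydro_power_watts(5.5, 33, 0.31)))   # ~397 W at the alternator

The same arithmetic also explains why a 2 foot head only becomes worthwhile when the flow is generous.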

noz_spoon4y

Two rising celebrities in the world of mini or micro hydroelectric technology are both impulse turbines.  The Pelton wheel or runner usually works in the vertical plane, and the somewhat similar Turgo in the horizontal.  Water pressure is concentrated into a jet that impacts the spoon-shaped cups of the Pelton or the curved vanes of the Turgo.  These systems capitalize on high-head, low-flow water sources.  Turgo runners are sometimes quite small (like 3 or 4″ in diameter) and are designed to run at high speeds.  A small uphill water source and enough penstock (piping) to reach it are the main requirements for making one of these small impact turbines useful.  Under the right circumstances a small Pelton or Turgo wheel of just a few inches in diameter is capable of producing perhaps 500 watts.  In the absence of running streams, snow pack or plentiful rainfall, an individual living in a mountainous area might still be able to collect up-slope groundwater from perforated pipes buried in boggy areas, springs or the drainage ditches alongside roads.  A long run of water hose, polyethylene or polyvinyl chloride (PVC) pipe could conduct the water down slope, gaining another pound per square inch of pressure for every 2.31 feet of drop.  Water catchment from barn and house roofs could be redirected to holding cisterns and used by these little turbines when appropriate to augment other alternative off-GRID power systems.
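The 2.31-feet-per-psi rule of thumb folds easily into a quick planning calculation.  The sketch below uses purely hypothetical numbers – a 100 foot drop, about 50 gallons per minute, and an assumed 50% overall efficiency – just to illustrate how a few-hundred-watt Pelton or Turgo installation might be sized:

    RHO, G = 1000.0, 9.81   # water density (kg/m^3), gravity (m/s^2)

    def static_psi(head_feet):
        """Static pressure available at the nozzle: ~1 psi per 2.31 ft of drop."""
        return head_feet / 2.31

    def impulse_watts(head_feet, flow_gpm, efficiency=0.5):
        """Rough electrical output: P = rho * g * Q * H * efficiency."""
        head_m = head_feet * 0.3048
        flow_m3s = flow_gpm * 3.785e-3 / 60.0   # US gallons per minute -> m^3/s
        return RHO * G * flow_m3s * head_m * efficiency

    # Hypothetical site: 100 ft of collected drop feeding 50 gpm through the penstock
    print(round(static_psi(100), 1))       # ~43.3 psi at the nozzle
    print(round(impulse_watts(100, 50)))   # ~472 W, in the ballpark mentioned above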

flickr-762561606-hd

Built 1901 – used to power the mining town of Victor, CO. Courtesy of Gomez.

The Pelton wheel was patented in 1880, but Lester Allan Pelton actually got the idea from using and examining similar Knight water wheels in the placer mining gold fields of 1870s California.  Employing water often diverted by sluices to a holding pond before being collected into a penstock and dropped further, miners washed entire hillsides away with jets of high pressure water.  The tip end of this water cannon was a nozzle called a “monitor”, and there was no ‘off button’.  Most of these hydraulic mining monitors spewed water around the clock, so it was probably just a matter of time before some enterprising miner attempted to convert that wasted energy into useful mechanical energy by spinning a wagon wheel with pots and pans attached to its rim.  While ‘Knight wheels’ (the first impact water turbines) were originally constructed to power saws, lathes, planers and other shop tools, some were actually used in the first hydroelectric plants built in California, Oregon and Utah.  Lester Pelton’s innovation was to extract energy more efficiently from a water jet by splitting the cup and deflecting the splash out of the way.

sine2c

Between the 1870s and the 1890s, innovations in both hydroelectric turbines and alternating current were occurring at a breakneck pace.  The first hydroelectric power schemes began to appear from 1878 onward and for several years created only DC current.  In the three years between 1886 and 1889, the number of hydroelectric power stations in the U.S. and Canada alone more than quadrupled, from 45 to over 200.  AC development milestones during this period include step-up and step-down transformers; single-phase and polyphase (three-phase) AC; and great improvements in the distance of power transmission.  <This site> provides an interesting history and timeline on the maturation of AC power.

The Ames hydroelectric power plant in Colorado claims to “be the world’s first generating station to produce and transmit alternating current”.  Perhaps that claim should be amended to specify only “AC for industrial use”.  Originally the Ames plant attached a 6 foot tall Pelton wheel to a Westinghouse generator.  The largest generator built up to that time, it made 3,000 volts of single-phase AC at 133 Hz.  The Pelton wheel was driven by water from a penstock with a head of 320 feet.  The power was transmitted 2.6 miles to an identical alternator/motor driving a stamp mill at the Gold King Mine.  The mine owners chose this newfangled electricity over steam powered machinery because of the prohibitive cost of shipping coal by railway.  In 1905 the Ames power plant was rebuilt with a new building, two Pelton wheels with separate penstocks from two water sources, and a General Electric generator of slightly less output capacity.  After 123 years this facility’s impact turbines are still producing electricity.

The success of the Ames power plant, along with a well done 1893 World’s Fair exhibit by Tesla and Westinghouse, helped determine a victor in the famous “War of the Currents” and, more immediately, who would win the contract for the prestigious Adams power station soon to be constructed at Niagara Falls.

four4

The main characters in the ‘War of the Currents’ were (from left to right above) the DC proponents Thomas Edison and J.P. Morgan and their AC rivals Nikola Tesla and George Westinghouse.  Pride, patents, reputations and big money were at risk in this somewhat ridiculous conflict.  At its peak the quarrel was exemplified by Edison going about the country staging demonstrations wherein he electrocuted old or sick farm & circus animals with ‘dangerous’ AC current.  It is rumored that the electric chair used for executions was itself created due to a secret bribe from Edison.  In response Tesla staged some carefully controlled demonstrations where he shocked himself with AC to prove its safety.  In truth both DC and AC currents are potentially deadly at higher voltages, but AC may be slightly more dangerous because its alternating fluctuation is more likely to induce ventricular fibrillation (where the heart loses coordination and rhythm).

* For those who may not know: Edison was a prominent inventor who formed 14 companies and held 1,093 patents under ‘his’ name, although his formal education consisted of only 3 months of schooling.  The largest publicly traded company in the world (General Electric) was formed by a merger with one of Edison’s companies.  J.P. Morgan was one of the most powerful banker/financier/robber barons in the world in the 1890s.  He reorganized several railroads, created the U.S. Steel Corporation, and bailed the government and U.S. economy out of two near financial crashes – once in 1895 and again in 1907.  He was also self-conscious about his big nose and did not like to have his picture taken.  Recognized as a brilliant electrical and mechanical engineer, Tesla never actually graduated from his university.  Immigrating to the U.S. in 1884, Tesla even worked for Edison before the two had a falling out.  Westinghouse attended college for 3 months before receiving the first of his more than 100 patents and dropping out.  He went on to found 60 companies.

Although AC has been the favored method of current transmission for the last century, DC power never fully capitulated in the War of the Currents.  Considering its storage benefits, DC may someday stage a spectacular comeback.  In cities like Chicago and San Francisco an old DC grid may still run in parallel with its AC complement.  Most consumer electronics convert AC into DC anyway.  DC offers some advantages over AC, including battery storage, which provides load leveling and backup power in the event of a generator failure.  There is no convenient way to store excess AC power on the GRID, so it is shuffled around for as long as possible.

tra4b

Alternating current originally offered advantages over direct current in its ease of transmission.  High voltage / low current travels more efficiently in a wire than low voltage / high current does.  The introduction of the transformer (which works with AC but not DC) allowed AC to be “stepped up” to a higher voltage, transmitted and then stepped back down to usable power at the destination.  DC current (under the Edison scheme) had to be generated very close to its final destination or else rely on expensive and ungainly methods to achieve transmission over longer distances.  Voltage drop (the reduction of voltage due to resistance in the conducting wire) affects both kinds of current, and some power is always lost as heat during transmission.  AC, however, suffers an additional resistance loss that does not affect DC.  “Skin effect” is the tendency of AC to conduct itself predominately along the outside surface of a conductor rather than through the conductor’s core – the whole wire is not being used, just the skin.  This skin effect resistance increases with the frequency of the current.  This phenomenon, along with new technology for manipulating DC voltages, has recently encouraged several companies to construct new High Voltage Direct Current (HVDC) power lines for long distance transmission.  The Itaipu Dam mentioned earlier, for example, transmits HVDC over 600 kV lines to the São Paulo and Rio de Janeiro regions – some 800 km away.
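The advantage of stepping up the voltage can be made concrete with the I²R line-loss formula.  The sketch below uses hypothetical numbers (100 kW delivered over a line with 5 ohms of total resistance) to show how a tenfold increase in transmission voltage cuts the resistive loss by a factor of one hundred:

    def line_loss_watts(power_w, volts, line_resistance_ohms):
        """Resistive transmission loss: I = P / V, loss = I^2 * R."""
        current = power_w / volts
        return current ** 2 * line_resistance_ohms

    # Hypothetical example: 100 kW sent down a line with 5 ohms of total resistance
    P, R = 100_000, 5.0
    for volts in (3_000, 30_000):
        loss = line_loss_watts(P, volts, R)
        print(f"{volts:>6} V: {loss:9.1f} W lost ({100 * loss / P:.3f}% of the power sent)")

    # 3,000 V -> ~5,555.6 W lost (5.556%);  30,000 V -> ~55.6 W lost (0.056%)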

The huge dams built in the U.S. were not created to provide electrical power to customers but to control and redirect water for the purpose of agriculture.  Even today the bulk of the power created by those dams is used to pump water back uphill so that it can be broadly distributed for irrigation.  In 2008 the U.S. Energy Information Administration (EIA) estimated that only 6% of the nation’s power was generated hydroelectrically, and that amount has changed little in the last 5 years.  The EIA does predict future growth for photovoltaic and wind generated power.  Canada, with a much smaller population, supplies itself with a greater percentage of hydroelectric power than the U.S. and also has more kinetic energy available in terms of exploitable water resources.

wmill3c

- Wind turbines, water turbines, Archimedes screws and centrifugal pumps run in reverse can all be mounted to the same types of alternators or generators.  Small or miniature turbines can be affixed to a wide range of DC motors from tools, toys, treadmills, electric scooters, old printers, stepper motors and servos.  Commonplace AC induction motors from laundry machines, blowers, furnaces, ceiling fans, tools and other sources can be converted into brushless, low-rpm alternators by rewiring them or installing permanent magnets in the armature.  Usually, but not always, in a modest off-grid power scheme the AC current from an alternator or magneto needs to be rectified into DC so that the energy can be stored in a deep-cycle ‘battery sink’.  Automotive alternators contain their own rectification, but these are less than ideal for turbines for a couple of reasons.  Charge controllers and inverters are also pertinent subjects in the discussion of alternate energy.  These topics may be addressed in some future post.  For now a final image (of a rectifying ‘full wave bridge’) and some miscellaneous video links are offered.

full_wave_bridge3b
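As a rough illustration of what the bridge in the image above does, the sketch below runs one cycle of a low-voltage AC waveform (a hypothetical 14 V-RMS alternator output) through an idealized four-diode bridge, subtracting two ~0.7 V diode drops, and reports the resulting pulsed-DC peak.  A real charging circuit would add a smoothing capacitor and a charge controller:

    import math

    V_RMS = 14.0            # hypothetical small alternator output, volts RMS
    V_PEAK = V_RMS * math.sqrt(2)
    DIODE_DROP = 0.7        # volts lost per silicon diode; two conduct at any instant

    def bridge_output(v_in):
        """Idealized full-wave bridge: folds negative half-cycles up, minus two diode drops."""
        return max(abs(v_in) - 2 * DIODE_DROP, 0.0)

    # Sample roughly one 60 Hz cycle in 1 ms steps
    samples = [bridge_output(V_PEAK * math.sin(2 * math.pi * 60 * t / 1000))
               for t in range(17)]
    print(f"peak of the pulsed DC output: ~{max(samples):.1f} V")   # ~18.4 V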

CIVIL 202 – Pelton wheel project –  3 minute video – school science project

Micro Hydro Electric Power- off grid energy alternatives – 7.5 minute video – something of an advertisement

Home Made Pelton Wheel – rather long 12 minute video

Turning Green In Oxford – 9 minute video / power by Archimedes screw

Algonquin Eco-Lodge – 8 minute video – generating by reversing water flow through centrifugal pump