Thursday, April 18, 2019

NASA to Open Lunar Samples Untouched Since Collection

Frankly, this article shocked me: according to a news article from Machine Design, NASA and the Lawrence Livermore National Laboratory will soon be opening lunar samples that have not been touched since Apollo 17 at the very end of 1972.  The samples were "vacuum packed" while on the moon and have remained sealed ever since.
Nine “special samples” were collected during the Apollo 15, 16, and 17 missions and stored in containers with indium knife-edge seals to maintain a lunar-like vacuum. Apollo mission planners devised these special sample containers to meticulously preserve fragile and transitory sample characteristics (e.g., solar wind volatiles and volatile coatings). Three of these samples have remained sealed in their original Apollo containers until today.

Cosmochemists at Lawrence Livermore National Laboratory will get a chance to analyze these Apollo 17 relics to study the geologic history of the site where the rocks were collected, a geologic cold trap where water may have been able to freeze. This marks the first time one of the sealed samples will be studied in detail since the end of the Apollo program.
The surface of the moon is covered by a fine, powdery dust called regolith, created by meteoritic bombardment of the Moon’s surface over the past 4.5 billion years.  Volatile elements, such as those from coronal mass ejections from the sun, can get trapped in the powdery regolith, and the techniques for finding, identifying, and determining quantities of these volatiles have improved in the intervening 50 years. 

One of the neatest aspects of this new analysis is that the guy who collected some of these samples, geologist Harrison Schmitt, 83, will be part of the LLNL team doing the studies.

(Harrison Schmitt on the lunar surface during Apollo 17 - his only space flight.  It's only when you look at the device in the right foreground that you realize you're looking at a color photograph, the scene is so monochromatic - NASA photo)
A new NASA program, the Apollo Next Generation Sample Analysis, has selected nine teams to extend the science legacy of the Apollo missions by studying pieces of the Moon that have been carefully stored and remained untouched for nearly 50 years. LLNL is part of the University of New Mexico team of scientists that will look at the vacuum-sealed samples to study both the volatile element record and the geologic history of the Apollo 17 site.

The teams were selected by NASA’s Planetary Science Div. and will be funded by the space agency’s Lunar Discovery and Exploration Program. The goal is to get the most data possible from these samples in preparation for future lunar missions anticipated in the 2020s and beyond.

LLNL will conduct the measurement of noble gases, as well as analyze major and trace elements and chronology on large clasts. Specifically, they will determine how noble gases were modified by meteorite impacts on the regolith, define the source of hydrogen in hydrogen-bearing minerals in the regolith, and investigate the origin of meteorites that hit the Moon through its history. The group will also determine the ages of samples in the regolith using a variety of dating techniques to better understand the timing of crust formation on the Moon.
One would have to consider these specimens to be essentially priceless and irreplaceable, at least for now.  Naturally nobody is just going to pop open the vacuum container and say, "now what?"  That's going to be decided long in advance.  Before that happens, the teams will meet at NASA's Johnson Space Center in Houston for planning sessions to determine the best way to open the samples without contaminating them or destroying opportunities to learn something from them.

Wednesday, April 17, 2019

Question for Fellow Google Bloggers

Is anyone else seeing a big increase in Spam comments?  A couple of weeks ago, I woke up to something like 10 comments to 10 different posts - like the last 10 posts I put up.  They all contained links to some commercial site.  Deleting Spam is "all in a day's work" as they say.

This is different.

It may be the way I have things set up that makes them stand out to me, but lately I've been getting about a half dozen to a dozen Spam comments every day.  They're always on old posts, and (this is the strange part) they will comment repeatedly on the same post over a period of days.  I've had posts from 2010 or 2014 that just get commented on over and over again.  It's like they latch onto that post and keep trying to get through.

The way I have the blog configured, comments to posts over 14 days old go into moderation.  I do this for a couple of reasons: first, to know that comments to old posts have been made.  The blog displays about the last 30 posts, about a month's worth, and I don't watch comments on posts more than a few days old.  When they show up in my Gmail inbox, I can go read and react if necessary.  The second main reason is that older posts tend to be found by whatever mechanisms the spammers use, and it's best to delete those comments rather than clog up the reading for people who come across the post later.  Occasionally, though, I get comments on old posts that are valid and related to the post.  I approve those and let them post.  My twin posts on getting ripped off by Ian Sinclair Design for their credit card knives are a good example.  Between the two posts, they got 128 comments over more than two years.   

These latest Spam comments have all sounded like they're either auto-generated or left by non-English speakers.  They alternate between random words, flattering comments on how wonderful the blog looks, or questions that seem designed to get me to respond.  It might be an AI system learning or just a simple SpamBot. 

My assumption is that if I post them, that will tell the spammers that they can post their ads or other things they're posting.  Or get their credit for whatever they're doing.

Anyone else seeing this?

(Spambot image source)

Tuesday, April 16, 2019

SpaceX's Falcon Heavy Wasn't A Perfect Day After All

News was released yesterday that the center booster of the Falcon Heavy was lost from the drone ship due to rough sea conditions.
"Over the weekend, due to rough sea conditions, SpaceX's recovery team was unable to secure the center core booster for its return trip to Port Canaveral," SpaceX representatives said in an emailed statement. "As conditions worsened with 8- to 10-foot swells, the booster began to shift and ultimately was unable to remain upright. While we had hoped to bring the booster back intact, the safety of our team always takes precedence. We do not expect future missions to be impacted."
This is the time of year when it's very common for the area around the Cape to get strong winds off the ocean, and those winds bring rough seas.  While it's a shame to see them lose the booster, the reality is that the recovery system has to be designed for these seas, balanced all the while against the cost of a ship that much bigger and more stable than the ones they're using.

It's widely reported that the ships are autonomous, and the crews relocate onto another ship that stands well back from the drone ship while the landing attempt is made.  The first step of recovery is for a crew to return to the ship and secure the booster to the deck by welding hold-down brackets to the landing feet of the Falcon 9, according to the Wikipedia entry.  The statement from SpaceX makes it sound as if they viewed it as too dangerous to deploy the welders onto the drone ship - or to leave them there if they were already aboard.

When you look at the feet of the booster, remember these things are a lot bigger than you might think.

The same view with some workers near the legs adds perspective.  Those hold downs aren't standard U-bolts you're going to find at Ace Hardware.  

And the Atlantic off the Florida east coast gains another stretch of artificial reef a bit over 225 feet long and 12 feet across.

Monday, April 15, 2019

Monday Odds and Ends - Peak Florida

Two stories that are short and don't belong together.

Peak Florida

You're probably thinking of more Florida man/woman stories.  This time it's a Florida Reptile.

You may know we have a problem with Burmese Pythons taking over the ecosystem in the Florida Everglades.  I know I've done a few stories on these (cool photo or useful map of python range).  Our local paper carried a story that shows even a bad python may have a silver lining (?).  It seems the pythons are indirectly killing off rattlesnakes by carrying a parasite that is decimating the pygmy rattlesnake population.
Now, Burmese pythons are killing — although indirectly — one of their own ilk, the pygmy rattler.

A new study, led by researchers at Stetson University, shows that parasitic worms spread by invasive Burmese pythons are killing native Florida pygmy snakes.

The researchers found the invasive worms in Central Florida, more than 100 miles away from where the Burmese pythons reside in the southern portion of the state. But that doesn't mean the pythons are there. The parasite is getting that far north by other means, hitching rides in reptiles and other host critters that Florida snakes eat, with risk of spreading far beyond the Sunshine State.
Yes, I know that rattlesnakes can be important predators in the ecosystem, but I don't mind a few less pygmy rattlers.  While I know people who have encountered very large eastern diamondback rattlers, pygmy rattlers are the ones more likely to brush up against people in suburbs and more rural areas.  

OTOH, this being Florida, Australia of the northern hemisphere, the pygmy rattlesnakes are probably keeping something even worse under control.

Back on the first Sunday of March, I posted about Mrs. Graybeard's painful trip over the hose while setting up to wash the cars.  For the rest of March and into last week, she was confined to a walker with instructions to not put any weight on that foot, along with lots of other restrictions.  We did x-rays every week to ensure the part of her thigh bone that broke off didn't displace but started to attach to the rest of her femur.  

Last Monday, week 5, the doctor cleared her to start putting more weight on that leg, and to get around on a cane instead of the walker.  We have both the cane and the walker from earlier "adventures" so it was an easy transition.  She regains a bit more motion and ability daily, getting back to normal a little at a time. 

Today we did what we probably would have done a month ago and went to our local multiplex to catch Captain Marvel.  Unless you pay no attention to the Marvel Cinematic Universe (MCU), you'll know that calling for Captain Marvel was Nick Fury's last act at the end of Avengers: Infinity War, and there are scenes in the trailers showing her working with the remaining Avengers for Endgame.  It was originally rumored she was the only Avenger strong enough to take on Thanos.

Let me start out by saying this is a good movie that belongs in the MCU; it's not stuck on like they didn't know where else to put the character.  Both Mrs. Graybeard and I were dreading that it was going to be too full of "Grrrl Powerrr" stuff, and it wasn't.  Captain Marvel is played by Brie Larson, and back in 2017, when I wrote about Kong: Skull Island, I referred to her part as "the designated pretty girl."  I didn't think much of her because she was really a background character playing a stereotype role in a comic book movie.  In this movie, she shows quite a bit more acting range and is actually quite good. 

The movie is Captain Marvel's backstory to help set up Avengers: Endgame.  I don't really want to do paragraphs explaining the plot - go see it.  In using this movie to tell Captain Marvel's story, they devote the full two hours to it, which makes much more sense than a half hour tacked onto Avengers: Endgame.  This way, I'll bet Marvel makes both movies more watchable and gets more complete storytelling than, say, the first half of Wonder Woman doing her backstory.

Thor: Ragnarok is my favorite of the MCU movies, and I can't rate this one higher than that, but this was a good, fun movie.  Better than Black Panther, not as funny as Ant-Man or Ragnarok, and better than I expected.  A good solid 4 and some change out of 5.  The movie starts with a modified opening that replaces the familiar flipping comic book pages with a tribute to Stan Lee.  Stan appears in this one in a cameo, and I understand he appears in Endgame, too.  The usual Marvel scenes during and after the credits are worth waiting for. 

Oh, and watch for the cat, Goose. 

Sunday, April 14, 2019

Radio Sunday #3 - The Birth of the Modern Receiver

The Superheterodyne – Part 1

Now we're back where we started, before the look at earlier architectures for receivers. The most common receiver design is the superheterodyne, developed by Edwin Armstrong in 1918 – during World War I – and just after the Tuned RF design. Undoubtedly Armstrong and the TRF designers were working on the same problems, and Armstrong found a way to overcome many of the problems of the TRF design along with other competing ideas. The superheterodyne principle of operation is followed in virtually every modern radio, whether that design is implemented in hardware or software.

So what does superheterodyne mean? It is essentially a buzzword; an advertising line. The best explanation I've read is that super was a big advertising word, hetero came from the word for “different,” referring to different frequencies, and dyne from the word for power. Heterodyne has come to mean combining two signals by multiplication, which produces the sum and difference of the two signals' frequencies. This process is widely used, but it's usually called mixing, so the components that do this on purpose are called mixers. Multiplication? Like light, if you put two radio signals through a medium like the air or (for radio) a wire, they stay at their separate frequencies and don't affect each other. Only if you combine them in a circuit that affects their amplitudes nonlinearly will you get the sum and difference frequencies.
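If you want to see this for yourself, a few lines of Python (using NumPy, with made-up audio-range tones standing in for the RF and LO; the math is identical at radio frequencies) show that the product of two sine waves contains exactly the sum and difference frequencies:

```python
import numpy as np

# One second of samples at 1 kHz; tones chosen for illustration
fs = 1000.0
t = np.arange(0, 1.0, 1/fs)
f_rf, f_lo = 100.0, 30.0          # stand-ins for "RF" and "LO", in Hz

# The mixer multiplies the two signals
product = np.sin(2*np.pi*f_rf*t) * np.sin(2*np.pi*f_lo*t)

# Look at the spectrum of the product
spectrum = np.abs(np.fft.rfft(product))
freqs = np.fft.rfftfreq(len(t), 1/fs)

# The two strongest components are the difference and the sum
peaks = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
assert peaks == [70.0, 130.0]     # 100 - 30 and 100 + 30 Hz
```

The trig identity behind it is sin(A)sin(B) = 1/2[cos(A-B) - cos(A+B)]: multiply two sines and only the difference and sum frequencies come out.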

Let me present a block diagram of a superheterodyne (superhet) receiver that looks more conventional to hams and others who have studied some electronics than the one I posted in the Receiver Hunting story.

The big X in the mixer is to signify that it's multiplying two signals times each other; due to a property of math, when we multiply two sine waves, we get the sum and difference of the two frequencies.  Real mixer circuits put out four frequencies: the RF, LO, and the sum and difference frequencies.  From a circuit performance standpoint, the best mixers are called double balanced mixers; others exist but thousands or millions of double balanced mixers are sold every year.  The balance refers to electrical currents being balanced in the mixer to help with various performance measures I'll get to later. 

Without the demodulator and audio amplifier, this is a frequency converter.  Back around 1981, I made a shortwave converter for my car.  The local oscillator was two crystals in an oscillator, switched so one was on at a time.  They were set to 9.000 and 14.000 MHz.  The filter took the difference of the input minus the crystal.  That meant that with the 9.000 MHz crystal, 10.000 MHz WWV (time and frequency standard) came out at 1.000 MHz, so I could listen to it on my pickup truck's AM radio.  Likewise, with the 14.000 MHz crystal, 15.000 MHz WWV came out at 1.000.  I could listen to the “31 meter” and “19 meter” shortwave bands; 9.55 to 9.95 MHz and 15.1 to 15.4 MHz. 

This approach is universal for frequency converters.  Some years later, I designed a two meter transverter for my ham radio station (transverters work for transmitting and receiving).  This took low level transmitter signals and mixed them with a crystal oscillator, taking the sum, not the difference.  For the oscillator, I used a 116.000 MHz crystal.  When I set the radio to transmit at 28.000 MHz, the transverter output 144.000.  On the receive side, the system went the other way; it took in 144.000 MHz, subtracted off 116.000 and put 28.000 into the radio. 
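The bookkeeping for these converters is nothing but sums and differences.  A tiny sketch using the frequencies above (the helper names are mine, just for illustration):

```python
# Illustrative helpers (my names, not from any radio) for the
# sum-and-difference arithmetic of converters and transverters.

def downconvert(rf_mhz, lo_mhz):
    """Difference mixing: the output the AM radio hears."""
    return rf_mhz - lo_mhz

def upconvert(if_mhz, lo_mhz):
    """Sum mixing: the transverter's transmit side."""
    return if_mhz + lo_mhz

# Shortwave converter with its 9.000 and 14.000 MHz crystals
assert downconvert(10.000, 9.000) == 1.000     # 10 MHz WWV on the AM dial
assert downconvert(15.000, 14.000) == 1.000    # 15 MHz WWV, same spot

# Two meter transverter with its 116.000 MHz crystal
assert upconvert(28.000, 116.000) == 144.000   # transmit: take the sum
assert downconvert(144.000, 116.000) == 28.000 # receive: take the difference
```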

There's a problem lurking here.  All superheterodyne receivers will receive an undesired mixing product just as strongly as the desired one.  It's called the Image frequency.  In the case of my shortwave converter, I wanted to receive LO + IF.  My receiver would put out just as strong a response at LO – IF.  That would be at 8.000 or 13.000 MHz, depending on which crystal I turned on.  That's why the block diagram has that RF filter at the input – to reduce the strength of anything on the image frequency.
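The image sits on the other side of the LO at the same offset as the desired signal, which works out to 2 x LO - RF for a difference-IF mixer.  Checking the converter's numbers (function name is mine, for illustration):

```python
def image_frequency(rf_mhz, lo_mhz):
    """For a difference-IF mixer wanting RF = LO + IF, the equally
    strong image is at LO - IF, which equals 2*LO - RF."""
    return 2*lo_mhz - rf_mhz

# The shortwave converter's images (IF = 1.000 MHz):
assert image_frequency(10.000, 9.000) == 8.000    # with the 9 MHz crystal
assert image_frequency(15.000, 14.000) == 13.000  # with the 14 MHz crystal
```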

(This image is based on the common IF of 455 kHz, mentioned below)

Mixers can be made in many different ways; Armstrong used vacuum tubes, and as transistors and FETs appeared, those were adopted into service.  The following picture shows what's called a Double Balanced Mixer: a four diode bridge with the Local Oscillator applied on the left, the Radio Frequency applied on the right and the Intermediate Frequency taken out of the tap at the lower right.  These also work “the other way”.  If you apply a low frequency to the IF, perhaps your modulated audio for a transmitter, the RF port becomes an output, not an input.

So why do we do this?  Why do we build an oscillator into the radio and add the mixer stage?  The architecture buys you some important things:
  • It allows you to spread out the gain between RF, IF and audio.  For ordinary use, you might need gain of over a billion.  That much gain would surely oscillate if the radio were all on one frequency.  If you spread the gain out intelligently, the chance of a problem drops dramatically.  
  • You only tune two circuits: the RF filter and the Local Oscillator.  In practical radios, your RF filter may be switched from a bank of similar circuits as you change bands, and the LO will have components switched so it can tune several bands.  Most of the fussiness of tuning wide frequency ranges in the TRF approach goes away.  
  • You have most of the receiver working on one frequency.  Amplifiers and filters change performance as you tune across your frequencies of interest.  Typically, the higher you tune, the lower the gain goes.  Now you have one stage to be concerned about.  Ordinarily, the IF filter is where the ultimate channel selectivity is obtained; with this approach it stays the same whichever RF band you tune. 
This architecture is universal.  It can mix a low frequency signal up to a higher IF or mix a higher signal down to a lower IF (appropriately called upconversion or downconversion).  Cheap AM radios (remember them?) settled on an IF just below the bottom of the AM broadcast band, 455 kHz, long ago.  Multiband radios with shortwave coverage sometimes upconvert to another IF, usually because there's a filter they want to use first, then downconvert to 455 kHz, in an architecture called dual conversion.  Microwave receivers downconvert, sometimes a dual conversion, to get to a frequency where the signals are processed. 

The block diagram above is single conversion.  High performance receivers in the vacuum tube era went to double and sometimes triple conversion (I haven't personally seen quad conversion, but they might be out there).  Double conversion is still very common.  A very common technique today is to convert the entire receive spectrum, from 0.5 to 30 MHz up to around 70 MHz, filter and amplify, then downconvert to a lower frequency, sometimes in the low kHz.  For example, an Icom 7600, their last superheterodyne HF radio, upconverts to a “roofing filter” (no signal wider than that gets into the rest of the receiver) at 64.455 MHz then downconverts that to 36 kHz where the signal is digitized and all the signal processing is done digitally.
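To make the dual conversion arithmetic concrete, here's a sketch using the Icom 7600 numbers from the paragraph above.  I'm assuming the first LO runs above the signal (high-side injection), which is a common choice but an assumption on my part, not something from Icom's documentation:

```python
IF1_MHZ = 64.455  # roofing filter frequency (Icom 7600 example)
IF2_MHZ = 0.036   # 36 kHz, where the signal is digitized

def first_lo(tuned_mhz):
    # Upconversion with the LO above the signal: IF1 = LO1 - RF
    return tuned_mhz + IF1_MHZ

def second_lo():
    # A fixed second LO brings IF1 down to IF2
    return IF1_MHZ - IF2_MHZ

# Tune to 14.200 MHz in the 20 meter ham band:
assert round(first_lo(14.200), 3) == 78.655  # first LO frequency
assert round(second_lo(), 3) == 64.419       # fixed second LO frequency
```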

There's an application of the superheterodyne principle that has become widely adopted among hams and is now a frequently used architecture in commercial wireless systems.  Hams call it direct conversion, and the novelty of this architecture is that the IF is 0 Hertz – DC.  The architecture could be the same as previously shown, but in most ham uses, the mixer is the detector. 

In modern radios, the audio filter will usually amplify as well, so it's not a big change.  

So how does this work?  A common example might help: suppose you want to make an amateur receiver with minimal power drain, small enough to fit in a tiny package for backpacking or camping.  You want 40 meters (7.0 to 7.35 MHz).  The local oscillator tunes that range (usually called a VFO, for Variable Frequency Oscillator).  We know the mixer will give us the sum and difference frequencies, and we want to tune in someone transmitting Morse Code (CW) on 7.025 MHz.  If we tune the LO to exactly 7.025, the receiver would hear nothing; the difference frequency is zero.  Even a multi-thousand dollar receiver works on this principle: it offsets the display from where you're tuned, so that when the display reads 7.025 the LO is really offset from that, and the tone you hear in the speaker comes from that offset.  Those receivers have a separate product detector, which is a mixer, and a Beat Frequency Oscillator, BFO.

Instead, we tune the VFO slightly off the exact frequency by tuning it to make a Morse code tone we like.  We've made our demodulator into a product detector and our LO is their BFO.  The problem is that you also hear the image frequency if someone is transmitting on it - the signal that's the same offset as the desired on the other side of the LO.  If your LO is at 7.024, you'll hear someone on 7.023; if your LO is at 7.026, you'll hear someone at 7.027.

In effect, you double the number of potential interfering signals.  This is called the single signal problem with direct conversion, because the receiver can't respond to just the single desired signal.  There are ways to reduce it, but it's always there.
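The whole story in numbers: in direct conversion, the audio pitch you hear is just the difference between the signal and the LO, so the desired signal and its audio image produce identical tones.  A sketch with the frequencies from the example above:

```python
def audio_tone_khz(signal_mhz, lo_mhz):
    """Direct conversion: the pitch you hear is |signal - LO|."""
    return abs(signal_mhz - lo_mhz) * 1000.0

# LO at 7.024 MHz: the desired signal at 7.025 gives a 1 kHz tone...
assert round(audio_tone_khz(7.025, 7.024), 6) == 1.0
# ...and so does the audio image at 7.023, indistinguishable by ear.
assert round(audio_tone_khz(7.023, 7.024), 6) == 1.0
```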

The main drawback of the superheterodyne architecture is having more parts in more circuits: more cost and more complexity.  As a rule, whenever we introduce new circuit blocks and complexity, we tend to fix some problems and introduce others.  In the vacuum tube era, receivers used crystal oscillators and VFOs as their LOs to change bands.  Starting primarily in the 1970s, with digital integrated circuits, those began to be replaced by frequency synthesizers (phase-locked loops, or PLLs).  When those were introduced, problems the existing circuits didn't have started to show up, and radios took a giant leap backwards until that was understood.  Similarly, when transistors replaced vacuum tubes, problems with strong signal handling surfaced that the high standing voltages or currents of vacuum tubes had masked, and it took years to understand that, too.

A less obvious problem is that mixers, being the only deliberately nonlinear component in the radio, can introduce problems with hearing signals that aren't really there.  The image is not considered one of these.  The details are probably more of interest to designers than people just trying to learn about how receivers work but makers of mixers sometimes provide charts of all the undesired products their mixers will receive.  Double balanced mixers are better in this regard (better suppression of undesired) than single balanced, or unbalanced mixers, but the "spur table" has to be designed for.   Here's where I get to wave my hands and say, "that's beyond the scope of this article". 

Saturday, April 13, 2019

A Hundred Years of Food Prices Compared

From the Foundation for Economic Education, author Marian Tupy got motivated by a quote from Twit of the Year, Alexandria Occasionally Coherent, to compare food prices after a century of largely free market forces working in American agriculture.   It was the famous quote in which she babbled:
Capitalism is an ideology of capital—the most important thing is the concentration of capital and to seek and maximize profit... we’re reckoning with the consequences of putting profit above everything else in society. And what that means is people can’t afford to live.

“Capitalism is irredeemable,” she concluded.
I'm not sure if this is the talk where she said unemployment was low because everyone had two jobs.

I'm not sure how obvious this is, but it's tricky to compare prices from a hundred years apart.  We know, for example, that inflation has decimated the dollar, but that's hard to separate from other price trends.  Appliances and electronics have gotten cheaper, for example, but everything where the market has been broken by the government has gotten more expensive: things like education and health care, and then products that are highly regulated, such as automobiles, homes and aircraft.  A 2019 car might cost far more than a 1969 car, but it has been regulated extensively in terms of gas mileage, crash safety, and a host of other things.

His bottom line conclusion is that food in America has become almost eight times cheaper relative to unskilled labor over the last 100 years.

Here's what he did.  First, he obtained a report called:
Retail Prices, 1913 to December 1919: Bulletin of the United States Bureau of Labor Statistics, No. 270, which was published in 1921. On pages 176-183, we encounter nominal prices of 42 food items—ranging from a pound of sirloin steak to a dozen oranges—as registered in the city of Detroit in 1919. Those can be seen in the second column of the attached graphic.

The next step was to derive the hourly wage for unskilled labor in 1919, using a 1774-to-2016 wage scale re-indexed to 1919.  This gave a pay rate of 25 cents/hr for unskilled labor in 1919.  Finally, 2019 prices for items as comparable as could be determined were obtained from a retailer chosen because it was believed to be a place many unskilled laborers shop.  For reference, the 2019 pay rate for unskilled workers was calculated to be $12.70 per hour.

Their conclusions:
  1. The time price (i.e. nominal price divided by nominal hourly wage) of our basket of commodities fell from 47 hours of work to ten, or 21.2 percent of its former value (see the Totals line in column five).
  2. The unweighted average time price fell by 79 percent (see the Totals line in column six).
  3. Put differently, for the same amount of work that allowed an unskilled laborer to purchase one basket of the 42 commodities in 1919, he or she could buy 7.6 baskets in 2019 (see the Totals line in column seven).
  4. The compounded rate of “affordability” of our basket of commodities rose at 2.05 percent per year (see the Totals line in column eight).
  5. Put differently, an unskilled laborer saw his or her purchasing power double every 34 years (see the Totals line in column nine).
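Items 3 through 5 hang together arithmetically; you can check them with a couple of lines (a sketch using the article's 2.05 percent annual rate):

```python
import math

rate = 0.0205   # annual growth in "affordability" from item 4
years = 100     # 1919 to 2019

# Item 3: the century's growth factor, about 7.6 baskets
factor = (1 + rate) ** years
assert round(factor, 1) == 7.6

# Item 5: purchasing power doubles roughly every 34 years
doubling_years = math.log(2) / math.log(1 + rate)
assert round(doubling_years) == 34
```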
I know that "Big Ag" gets its criticisms around the web (Bayou Renaissance Man ran one just nine days ago), but the result of this century-long change in agriculture has been quality of life improvements for all of us, here measured by the poorest among us.  Add in that American agriculture has brought food as foreign aid to many places around the world.  An average time price decline of 79% is a big improvement in life.  It means, picking one item from the second chart, that a pound of sliced ham went from costing 2.27 hours (2 hours, 16.2 minutes) in 1919 to costing 0.24 hours (14.4 minutes), so a full two hours of pay is freed.  It means that there's more money at the end of the week for other things.

What "factory farms" buy us consumers is the "Iron Law of Production," which says that as quantity doubles, price comes down by roughly 25 to 30%.  Larger farms can economically justify techniques that smaller farms can't.  Yes, I'm aware of the general protests against living conditions for livestock on these farms, and I'm sympathetic to some of it, but I see that as a first world problem.  Look at it this way: if you were a starving Venezuelan eating out of a garbage can, because there are no zoo animals or pets left, do you eat the "factory farmed" ham or do you keep starving?   I think modern small farm techniques for raising livestock more humanely while keeping total costs down have a lot of room in the market.

That's getting lost in the weeds, though.  The bottom line is that the system AOC loves to hate has done very well for Americans, while the system she loves produces Venezuelans eating out of garbage cans.  Her claim that "...people can’t afford to live" due to capitalism is demonstrably false.  People today have it much better than they did a hundred years ago.

I think author Marian Tupy hits it out of the park with his closing quote:
Joseph Schumpeter, the famous economist who served as Austrian minister of finance in 1919, observed that the
capitalist engine is first and last an engine of mass production which unavoidably also means production for the masses … It is the cheap cloth, the cheap cotton and rayon fabric, boots, motorcars and so on that are the typical achievements of capitalist production, and not as a rule improvements that would mean much to the rich man. Queen Elizabeth owned silk stockings. The capitalist achievement does not typically consist in providing more silk stockings for queens but in bringing them within reach of factory girls.
To those silk stockings we can now add food.

EDIT 2202 EDT 4/13/19: to add the Walter Williams image I forgot.

Friday, April 12, 2019

Use By Date Approaching

Ran into this cartoon Tuesday and figured I'd better use it before its "Use By" date gets here.

(Steven Breen)

Read those as Dr. Seuss verse; it works better. Unfortunately, I was unable to find a referenced story, but the list of ideas California's state legislature is contemplating was discussed in a few places as recently as last weekend, IIRC.

I did find a couple of interesting stories that illustrate the problem.  At Town Hall, Austin Hill writes about a proposal to implement a "third income tax" to pay for education.  The existing two are the California state income tax and the federal income tax. 
But now California, with an average statewide unemployment rate of over 12% (in some regions the rate is over 20%) and a budget deficit of somewhere between $10 and $15 billion, is considering the imposition of a third income tax. The additional income tax rate would vary, according to which region of the state one lives in, and would be imposed directly by school districts and county governments.
See, back in 1978, California voters passed a thing called "Proposition 13" which capped the amount that property tax could be raised year over year.  That means California school districts are broke just like the state.  They can't do what many other school districts in America do, and just raise property taxes by leaps and bounds every year.  Under the proposal, school districts will gain the right to decide on their own income tax levels to charge people living in their district. 

Do you find it surprising that they're getting legislative support for this idea?  If so, consider that the government employee unions, long since joined at the hip with the Democrat party, would like some of that sweet taxpayer money, too.

To add a little more context, Victor Davis Hanson, writing in the Daily Signal, points out a couple of things to note.
For over six years, California has had a top marginal income tax rate of 13.3%, the highest in the nation.

About 150,000 households in a state of 40 million people now pay nearly half of the total annual state income tax.
That last one is astounding.  It means 0.375% of taxpayers pay half of the state's income tax revenue.  No wonder ordinary Californians keep electing the people they do.  Everything they vote for comes with the fact that someone else is paying for it.  It's OPM - Other People's Money - the most addictive substance in the world.  In addition:
  • California recently raised gas taxes by 40% and now has the second-highest gas taxes in the United States.
  • The state has the ninth-highest combined state and local sales taxes in the country, but its state sales tax of 7.3% is America’s highest.  As of April 1st, that sales tax is applied to any purchases from out of state merchants.  
  • Scott Wiener, a Democratic state senator from San Francisco, has introduced a bill that would create a new state estate tax. Wiener outlined a death tax of 40% on estates worth more than $3.5 million for single Californians or more than $7 million for married couples.  Since $3.5 million will essentially buy a cardboard box to live in in San Francisco, that will impact families who have passed a home down generation to generation.  The current owners will lose those houses.
  • In January, new governor Gavin Newsom proposed a tax on drinking water.  I'm sure that's an additional tax on water, because I can't imagine people aren't already paying at least one tax on their water, if not several.   
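Back to that astounding 0.375% figure - a quick sanity check of the arithmetic.  (Note that the 40 million is residents, not tax filers, so the share of actual filers would be somewhat higher, though still tiny.)

```python
# Quick check: what fraction of Californians pay half the state income tax?
# 40 million is the resident population, not the number of filers, so this
# understates the share of actual filers - but it's the comparison VDH makes.
households_paying_half = 150_000
population = 40_000_000

share = households_paying_half / population
print(f"{share:.3%}")  # 0.375%
```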
It's no surprise that middle class Californians are moving out of the state.  What I found surprising is that the rich are moving to California.  They're moving to the expensive cities and driving up housing prices, which helps force lower income Californians out of state. 

I think California is working toward the Venezuela model: a few rich people, a majority living on handouts, and no middle class. 

Thursday, April 11, 2019

I Don't Care Who You Are - That Was Cool!

Mrs. Graybeard and I just enjoyed watching the second launch of a Falcon Heavy (FH) from historic pad 39A at the Kennedy Space Center.  There has been a great deal of buzz about this launch, with some local sources saying they expected a couple of hundred thousand visitors to the immediate area around the KSC. 

You know, there are only a handful of places on Earth where you can see launches like this, and no other place to see a launch quite like this one.  The FH is the most powerful rocket currently flying, at over 5 million pounds of liftoff thrust, but not the most powerful ever - the Saturn V that took our crews to the moon had a liftoff thrust of over 7 million pounds.  For every other launch besides a FH, you see a rocket lift off and disappear in the distance.  From a practical standpoint, once it's a minute or two into flight, the view from as close as you can get to the pad is no better than the view from anywhere along the coast.  With the FH, though, you can get this if you get close:

This view of the two side rockets from the Heavy landing is, of course, from the SpaceX video feed, but soon you'll be able to see somebody's home video from nearby.  (I just checked, no videos of tonight's launch yet, just one from a year ago).  The landing site isn't very close to anyplace tourists can get, but is closest (straight line) to Port Canaveral Jetty Park.  I've seen videos taken from Playalinda Beach at the north end of the KSC and they're not terrible.  

Unlike the test flight in February of last year, this time SpaceX recovered the center booster, too.  Due to the way the flight is structured, the center core doesn't shut down until the two side boosters have dropped, and the remainder of the rocket stack is too far downrange for the center booster to make it back to land.  This is why they have their autonomous drone ship, and they stuck the landing this time.

The drone ship is so far off the coast that it's way below the horizon, so there's no chance of seeing that.  The booster landing area can be seen from nearby, but the only time we've ever seen the return burn was during the night launch for the Crew Dragon test.  We can hear the sonic booms of the returning boosters, though, and heard them tonight. 

Mrs. Graybeard and I settled on the "Space Coast" in the early days of the Shuttle program, 1982, and I had always wished that I had taken the few hundred mile trip from where I grew up in north Miami to see a Saturn V launch.  Stories I've heard from locals who were on the cape for one of those launches have only turned that into a haunting want.  We seem to be entering a new renaissance of space, with plans on the books to get back to the moon relatively soon - whether that's on NASA's SLS, a private sector rocket, or some combination is a decision that's still a long way out.  Maybe I'll get a chance to see a moon rocket after all.

Wednesday, April 10, 2019

One of the Oldest Electronics Jokes Sorta Comes to Life

In the early days of my life in electronics, today's ubiquitous Light Emitting Diodes (LEDs) were not very common; in fact, they were somewhat exotic.  Every April Fools' Day, someone would pass around a joke about the new invention called the Dark Emitting Diode, or DED.  I guess some jokes never die.

That's a long introduction to a story in Electronic Design magazine about using an LED backwards to absorb light.  Light absorbing is sort of like Dark Emitting.  Sort of.  Making it either a DED or a LAD, but they start the story with "This is not an April Fools joke". 
Researchers at the University of Michigan (Ann Arbor) have used an infrared light-emitting diode (IR LED) with its electrodes reversed to cool another device just nanometers away.
The reverse biased IR LED is held 55 nanometers, about 2 millionths of an inch, from the device they intend to cool.

Linxiao Zhu shows the experimental platform that housed the calorimeter and photodiode. This system can damp vibrations from the room and building, steadily holding the two nanoscale objects 55 nm apart. (Source: Joseph Xu)
Reversing the positive and negative electrical connections on the IR LED makes it behave as if it’s at a cooler temperature than the ambient. The reverse connection not only keeps it from emitting light, but also can induce the LED to suppress the thermal radiation that it would be emitting.
ED makes it clear that the physics behind this is well established, and in use in other laboratory techniques.  They describe the very intricately crafted experiment that was needed to verify the effect, which depended on creating a tiny calorimeter (80 microns in diameter - 0.00315").  The source of heat and the LAD were separated by "a subwavelength nanometer-size gap", which is ambiguous, but earlier the story mentioned a separation of 55 nm, so assume around that.  
The researchers showed that when the calorimeter and reverse-biased LED are in each other’s near-field, with a vacuum gap between them of just tens of nanometers, this evanescent cooling occurs via two mechanisms. One is photon tunneling (which enhances the transport of photons across nanoscale gaps), while the second is suppression of photon emission from the photodiode (due to a change in the chemical potential of the photons under an applied reverse bias).
So what?  One of the pernicious problems in electronics is getting rid of the heat generated in small components.  One possibility for this cooling is to have coolers built into packaged components.  I can envision photonic cooling systems being built into more complex parts, perhaps going 3D in the package, like some memory chips are now doing.  In the published Nature article, they claim heat removal of 6 Watts per square meter.  Unfortunately - that number means nothing to me in terms of how effective this solid state cooling could be at cooling practical processors, memory chips, and other components.  It sounds awfully small though. 
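To get a feel for how small 6 W/m² is at chip scale, here's a quick sketch.  The ~1 cm² die area is my own assumed number for illustration; the heat flux is the only figure taken from the article.

```python
# How much heat would 6 W/m^2 remove from a typical processor die?
# The ~1 cm^2 die area is an assumption for illustration only.
flux_w_per_m2 = 6.0           # figure from the Nature paper
die_area_m2 = 1e-4            # ~1 cm^2, a plausible CPU die size

heat_removed_w = flux_w_per_m2 * die_area_m2
print(f"{heat_removed_w * 1e3:.2f} mW")  # 0.60 mW
```

Compare that 0.6 mW to the roughly 100 W a desktop CPU dissipates, and "awfully small" looks about right - this would need to improve by several orders of magnitude to matter for processor cooling.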

The thing about the DED joke, though, is that in those days this would have seemed like something from Star Trek (The Original Series), like Spock himself had given it to us.  It would have been unimaginable. 

Tuesday, April 9, 2019

The Story of the Chinese Woman At Mar-a-Lago Takes a Nasty Turn

The story of the Chinese woman who was apprehended at Mar-a-Lago with, shall we say, suspicious items in her possession took a turn for the more sinister Monday.  Ars Technica brings a detail I've seen nowhere else.  First, some necessary background info for those not fully familiar with the story.
The already suspicious account of a Chinese national who allegedly carried four cellphones, a thumb drive containing malware, and other electronics as she breached security at President Trump's private Florida club just grew even more fishy.

The possessions in Zhang's hotel included five SIM cards, nine USB drives, yet another cell phone, and a signal detector that could scan an area for hidden cameras, according to reports widely circulated Monday. In addition to the electronics, Zhang's hotel room also contained more than $8,000, with $7,500 of it in US $100 bills and $663 in Chinese currency, The Miami Herald reported.
Zhang was in court Monday to decide if she gets bail.  The Feds argue that she's a flight risk because she has no ties to the US and (direct quote), "She lies to everyone she encounters."  None of this seems particularly weird.

The first thing that seems weird, in addition to the "signal detector that could scan an area for hidden cameras" (probably something like the eBay "bug detectors" that receive on frequencies common cameras use), is the sheer volume of hardware she was carrying.  When she was first stopped, she was carrying two Chinese passports, four cellphones, a laptop computer, an external hard drive, and a thumb drive.  Back at the hotel where she was staying, they found a fifth cellphone, five SIM cards, and nine more thumb drives.  $7,500 in $100 bills and another $663 in Chinese currency seems like expense money.  The thing that stands out as really unusual is the particularly nasty malware on that thumb drive they grabbed at Mar-a-Lago.  According to Ars, quoting the transcript from the hearing:
Secret Service agent Samuel Ivanovich, who interviewed Zhang on the day of her arrest, testified at the hearing. He stated that when another agent put Zhang's thumb-drive into his computer, it immediately began to install files, a "very out-of-the-ordinary" event that he had never seen happen before during this kind of analysis. The agent had to immediately stop the analysis to halt any further corruption of his computer, Ivanovich said. The analysis is ongoing but still inconclusive, he testified.
I'm nowhere near an expert on tradecraft, and I couldn't tell you whether this means she's a Chinese agent, a freelancer, or working for a domestic Democratic candidate.  It does seem like this is a bit more than casual.  A noteworthy exchange during the bond hearing went like this:
Adler, Zhang’s attorney, pushed back during the hearing on the idea that she was a spy.

“She did not have the type of devices that can be associated with espionage activities,” he said.

Garcia, the prosecutor, replied that “there is no allegation [in the criminal complaint] she was involved in espionage ...”
Adler's line is stupid.  A pencil can be "associated with espionage activities".  Garcia saying, "we never said she was a spy" is also stupid.  Especially because he also said he wouldn't rule out charging her with that later, or "more serious charges."

This is the very beginning of the beginning; think page 2 of a 400 page novel.  I wanted to believe that agent Ivanovich's partner, the one who plugged the USB stick into a laptop, wasn't using just a regular agency laptop, but rather one that was air gapped from any other Secret Service machine and set aside for this purpose.  However, he specifically said "... had to immediately stop the analysis to halt any further corruption of his computer", and that quote doesn't go together with using a special computer designed for forensic examinations.

I'd like to think the Secret Service is not so dumb they're going to plug a piece of irreplaceable evidence that could contain anything into a plain agency laptop, but it seems like they did.  Jake Williams, a former hacker for the National Security Agency who is now a cofounder of Rendition Infosec, said on Twitter,  "As a taxpayer, I'm very concerned about where Agent Ivanovich's laptop is and where it's been since he plugged a malicious USB into it. If this was the Secret Service quick reaction playbook, perhaps Zhang planned to get caught all along (not joking)."
A Secret Service official speaking on background told Ars that the agency has strict policies over what devices can be connected to computers inside its network and that all of those policies were followed in the analysis of the malware carried by Zhang.

"No outside devices, hard drives, thumbdrives, et cetera would ever be plugged into, or could ever be plugged into, a secret service network," the official said. Instead, devices being analyzed are connected exclusively to forensic computers that are segregated from the agency network. Referring to the thumb drive confiscated from Zhang, the official said: "The agent didn’t pick it up and stick it into a Secret Service network computer to see what was on it." The agent didn't know why Ivanovich testified that the analysis was quickly halted when the connected computer became corrupted.
I've never seen a word about any computers being compromised at Mar-a-Lago, although I seriously doubt they would tell us.   Oh, and "they say" that the head of the Secret Service, Randolph ‘Tex’ Alles, stepping down has nothing to do with this. 

Again, it's very early in the story.  Everything we think we know is probably wrong.

Mar-a-Lago, White House Photo

Monday, April 8, 2019

NASA-Funded Mission Fires Two Rockets into Aurora, Injects Tracers

Late Friday night, two sounding rockets launched from a small spaceport in northern Norway.  The two small rockets soared to an altitude of 200 miles (320 km), and each released a visible gas intended to disperse through and illuminate conditions inside the aurora borealis.  Some of the resulting images of the blue gas from the rockets interacting with the green auroras were stunning.

The two rockets each had four capsules that released the gas in an impressive light show.  At 2226 UTC, the first rocket had dropped its blue gas:

Five minutes later, at 2231 UTC, both rockets' clouds were reaching full density.

This was the AZURE mission:
This NASA-funded AZURE mission, which stands for Auroral Zone Upwelling Rocket Experiment, is one of a series of sounding rocket missions launching over the next two years as part of an international collaboration known as "The Grand Challenge Initiative – Cusp." The goal of these flights is to study the region where Earth's magnetic field lines bend down into the atmosphere, and particles from space mix with those from the planet.
After their launch, the two rockets ascended into space while onboard instrumentation measured the atmospheric density and temperature in order to determine the ideal time to release visible tracers—trimethyl aluminum and a barium/strontium mixture. These gas tracers were released at altitudes varying from 115 to 250km.
Ars Technica included this time lapse video of the experiment, made by a guy who just stumbled across the experiment going on.  It's 18 seconds - watch it.  Then I bet you watch it again. 

I would have loved to have seen this!  Seeing the auroras has been on my bucket list for as long as I can remember.

Sunday, April 7, 2019

Radio Sunday #2 - Radios Get Sensitive, but Not in the SJW Way

We left off last time with a brief look at the crystal receiver, the simplest AM mode receiver you can build.  I also said (though not in those words) that it's a simple toy.  I said, “It's modest on sensitivity, has virtually no selectivity and isn't really good for handling wide ranges of input signals.  But it's about as simple as it gets.” 

By the mid teens (the 19-teens), experimenters had circuits with gain (amplification - the signal “gains strength”) and some knowledge of simple tuned circuits.  That led Ernst Alexanderson to develop what we call the Tuned Radio Frequency, or TRF, receiver, which he patented in 1916.  His concept was that each stage would amplify the desired signal while reducing the interfering ones.  Multiple stages of RF amplification would make the radio more sensitive to weak stations, and the multiple tuned circuits would give it a narrower bandwidth and more selectivity than the single stage receivers common at that time.  All of the tuned stages of the radio must track and tune to the desired reception frequency.

The three stages of Radio Frequency amplification each have a single tuning circuit consisting of a parallel inductor/capacitor pair, with the capacitor tunable so the operator can resonate the circuit on the frequency they want to listen to.  This is a primitive method of filtering, and would be made more sophisticated in later years, but the “network theory” that enabled that hadn't been derived by 1915.  A parallel tuned circuit is a high impedance (AC resistance) to ground at its tuned frequency, and closer to a short circuit to ground far below or above that frequency.  This means it minimizes the loss of the signal at its tuned frequency, while above and below that frequency the signal is shunted to ground rather than conducted to the next stage.  The amplification solved the sensitivity problem, but there isn't so much gain that the circuit is likely to oscillate (like a microphone amplifier squealing) from strong signals out of the third amplifier leaking back to the input of the first stage. 
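The resonance that tuning capacitor produces follows the standard formula f₀ = 1/(2π√(LC)).  A quick sketch with example component values - mine, chosen to roughly span the AM broadcast band, not taken from any particular radio:

```python
import math

def resonant_freq(L_henries, C_farads):
    """Resonant frequency of an LC tuned circuit, in Hz: 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henries * C_farads))

# A 240 uH coil with a 40-365 pF variable capacitor - example values that
# happen to cover roughly 540-1620 kHz, the AM broadcast band.
L = 240e-6
for C in (365e-12, 40e-12):
    f0 = resonant_freq(L, C)
    print(f"C = {C * 1e12:.0f} pF -> f0 = {f0 / 1e3:.0f} kHz")
```

A roughly 9:1 capacitance swing gives a 3:1 frequency range, since frequency goes as 1/√C - one reason a single variable capacitor can tune the whole broadcast band.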

The TRF did improve the performance of the radio, but it exposed a new, previously undiscovered problem: getting three separate tuned circuits (in effect, a multi-section filter) to tune over a fairly wide frequency range by adjusting only one element in each is a difficult problem, even today.  As the circuit is tuned, the amount of rejection of nearby stations varies, getting proportionally narrower as you tune lower and wider as you tune up in frequency.  Getting all three circuits to peak amplification was delicate and required the user to make fine adjustments.  That makes this architecture usable only for single frequency applications (or very narrow bands of frequencies). 
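The narrowing-and-widening behavior falls out of the tuned circuit's roughly constant Q: the 3 dB bandwidth is f₀/Q, so it scales with the tuned frequency.  A sketch with an assumed Q of 100 - a plausible value for a single LC stage, not a figure from anything above:

```python
# Selectivity of a constant-Q tuned circuit across the AM broadcast band.
# BW = f0 / Q; Q = 100 is an assumed, typical-ish value for one LC stage.
Q = 100
for f0_khz in (540, 1000, 1600):
    bw_khz = f0_khz / Q
    print(f"tuned to {f0_khz} kHz -> 3 dB bandwidth ~ {bw_khz:.1f} kHz")
```

With 10 kHz channel spacing on the AM band, a stage that's comfortably selective at 540 kHz is passing big chunks of the adjacent channels by 1600 kHz - exactly the varying rejection described above.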

This is not meant to belittle Alexanderson's work: he succeeded in improving the sensitivity, selectivity and signal handling characteristics of the radio receiver. 

Today, TRF receivers are used in niche applications.  The first one I ever encountered was a special application in aviation called a marker beacon.  You probably know that there are established routes for aircraft into and out of airports.  On the landing approach, there are systems that allow automatic landing with three radio systems: the ILS (instrument landing system) localizer, which keeps the aircraft on the centerline of the runway; the glide slope system, which keeps it on a (typically 3 degree) vertical slope to the ground; and a set of marker beacons that point vertically to tell the pilot they're on the proper path and where the aircraft is: outer, middle and inner beacons.  These all transmit at 75.0 MHz, and aren't challenging to receive.  The signals are in a convenient range of amplitude, and integrated circuit amplifiers available by the 1980s, like the MC1590, turned these into rather simple receivers.  The very first stage of these receivers tends to be a 75.0 MHz crystal filter narrow enough to keep nearby strong signals from hitting the detector, and since the detector itself is just looking for three audio tones, it's followed by narrowband audio filters.
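Telling those three audio tones apart is a classic single-frequency detection job; done digitally today, you might use something like the Goertzel algorithm.  A sketch - the 400, 1300, and 3000 Hz values are the standard marker beacon modulation tones, but the sample rate and the Goertzel approach are my own illustration, not how any particular avionics box does it:

```python
import math

def goertzel_power(samples, fs, f_target):
    """Relative power of `samples` at f_target, via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * f_target / fs)          # nearest DFT bin to the target tone
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Simulate the detected audio while passing over a middle marker: 1300 Hz.
fs = 8000
audio = [math.sin(2 * math.pi * 1300 * n / fs) for n in range(800)]

# Standard marker tones: outer 400 Hz, middle 1300 Hz, inner 3000 Hz.
powers = {f: goertzel_power(audio, fs, f) for f in (400, 1300, 3000)}
strongest = max(powers, key=powers.get)
print(strongest)  # 1300 -> middle marker
```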

TRF receivers are also used in some Industrial Scientific and Medical (ISM) applications such as some remote door locks for cars and there are some systems that use it to synchronize “atomic clocks” - the ones that receive the broadcasts from WWVB or other countries' time/frequency standard stations. 

Something that's never talked about is that since they have no local oscillator, they have no emissions - they can't be detected by monitoring for radiation from the radio.  They could be usable as a fixed-tuned receiver, or for a small number of channels, for something like local VHF communication on the six, four (non-US) or two meter bands.  Running all the stages at VHF makes the detector more challenging, but a small group that only wants to use one frequency (or a couple) might be able to make this work. 


Again, the TRF architecture was patented in 1916, so I'd bet Alexanderson had a working model several months before that, possibly in 1915.  Before that, Edwin Armstrong had developed the Regenerative Detector, which allowed great improvements in receiver sensitivity.  It, too, relied on the gain of the triode. 

The trick about regeneration is that it applies feedback to the circuit to improve gain.  A short side trip is in order. 

It's helpful to describe feedback as positive or negative.  Positive feedback creates oscillation.  This is what you're hearing when someone puts a microphone in front of a loudspeaker: some little bit of noise gets into the microphone, it comes out of the speaker louder than it went into the amplifier, the microphone picks up that louder sound, which goes through the amplifier and gets louder still, and at the speed of electronics the sound goes from barely audible to full volume screaming out of the loudspeaker.  For this oscillation to happen, the loop just needs the tiniest bit of gain more than 1.  Negative feedback is extremely useful in controlling the amount of amplification: by feeding back some of the signal shifted in phase so that it cancels out some of the input, it keeps the output constant.  Negative feedback wasn't conceptualized until 1927, by Harold Stephen Black of Bell Labs.  Today virtually all amplifiers are designed around negative feedback. 
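The standard closed-loop gain formula makes the distinction concrete: with open-loop gain A and feedback fraction β, the gain becomes A/(1 - Aβ) for positive feedback (blowing up as Aβ approaches 1) and A/(1 + Aβ) for negative.  A sketch with made-up example numbers:

```python
# Closed-loop gain with feedback.  Sign convention: positive feedback gives
# A / (1 - A*beta), which heads to infinity (oscillation) as A*beta -> 1;
# negative feedback gives A / (1 + A*beta), which tames and stabilizes gain.
def closed_loop_gain(A, beta, positive=True):
    loop = A * beta
    return A / (1 - loop) if positive else A / (1 + loop)

A = 100.0                       # open-loop gain (example value)
for beta in (0.0, 0.005, 0.009):
    g = closed_loop_gain(A, beta)
    print(f"beta = {beta}: positive-feedback gain = {g:.0f}")
# As A*beta creeps toward 1 the gain shoots up - the regenerative
# "just before oscillation" operating point Armstrong exploited.
```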

Here's Armstrong's circuit, redrawn recently:


L3, called the “tickler coil”, is in the output circuit of the tube, so it carries an amplified version of the input.  By adjusting the distance (which adjusts the coupling) between L3 and L1, the gain of the amplifier could be increased until just before it broke out into oscillation - the point where any increase in signal would make it start to oscillate.  At this point it had maximum gain, and Armstrong's detector was more sensitive than any existing receiver. 

Like the TRF, regenerative receivers are hardly used in commercial receivers because they're touchy and need to be futzed with to keep them working without breaking out in squealing oscillations. 

Hobbyists, though, still play with regenerative receivers just as a fun toy.  (Example 1, example 2.)  They have the unique property that with the feedback adjusted just below oscillation they're a sensitive AM detector, but pushed into oscillation they can demodulate Single Sideband (SSB) and CW (Morse code transmissions).  In conventional modern radios these are detected with a circuit called a product detector, which uses an oscillator (the Beat Frequency Oscillator, or BFO) running on a precisely chosen offset frequency.  With the offset frequency not set quite right, SSB has often been said to sound like Donald Duck - I honestly haven't played with regenerative receivers enough to know if they handle SSB approximately correctly.  Regeneration has largely been replaced with other techniques, even in amateur homebrew gear.
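A toy sketch of why a mistuned BFO sounds so wrong: the product detector shifts every audio component by the tuning error.  That breaks the harmonic relationships in a voice instead of just moving its pitch, which is what produces the Donald Duck effect.  (The example frequencies are mine, chosen for illustration.)

```python
# Product detection of SSB shifts every recovered audio component by the
# BFO tuning error - an additive shift, not a multiplicative pitch change.
def recovered_audio(audio_tones_hz, bfo_error_hz):
    return [f + bfo_error_hz for f in audio_tones_hz]

# A voiced sound: a fundamental and its harmonics (example values).
voice = [200, 400, 600]
shifted = recovered_audio(voice, 150)
print(shifted)  # [350, 550, 750]
```

In the original, 400 and 600 Hz are exact multiples of 200 Hz; after a 150 Hz error, 550 and 750 are no longer multiples of 350 - the harmonic structure is gone, and the voice sounds garbled rather than merely high or low.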

Saturday, April 6, 2019

As A "Real Money" Guy, This Drives Me Nuts

According to some news reports yesterday, Trump was saying he wanted the Federal Reserve to bring back quantitative easing. 
“I would say in terms of quantitative tightening, it should actually now be quantitative easing,” the president said. “You would see a rocket ship,” Trump asserted, shortly after the government released employment data for March. The report showed 196,000 jobs added in March and the unemployment rate staying near a 50-year low of 3.8%.
Quantitative tightening is the Federal Reserve trying to clean up its balance sheet from all the havoc it created during the Obamanation.  CNBC offers this balance sheet chart.  I think it's too low, perhaps half the actual, but it's all we have.

At the far right, where the green blob starts to come down, that's the effect of their quantitative tightening over the last year, which may well be over with.  If not, they need to wind down some of those securities they're holding to have any response available for the next crash.  

I think that Trump feels the sting of Obama and his acolytes claiming Trump took over Obama's recovery and thinks the Fed made Obama look good so they should do the same for him.  "They did QE 1, 2 &3 for him, why not do QE for me?"

I would hope that saner heads would tell him that QE was an "extraordinary measure" to meet the 2008 crash, and the economy is still feeling the bad effects of the Fed's actions.  I've read he's nominated both Herman Cain and Stephen Moore to be Fed governors.  Cain was the chairman of one of the Federal Reserve Banks' branches - the Kansas City bank's Omaha, Nebraska branch - back in '89-'91, so not exactly a real insider, but not exactly an outsider either.  Stephen Moore is a "visiting fellow" at the Heritage Foundation and has a long history in economics.  He's a founder of the Club for Growth and the Free Enterprise Fund.  Both of those claim to be advocates of "economic growth, lower taxes, and limited government".  I don't actually know if he's a "real money" guy, but I'm rather sure Cain isn't.  Still, either one of those guys, but especially Moore, should be a "saner head" to school the president.

The president has done a lot that I approve of: his deregulation, progress at cleaning up the VA, and his handling of problems like ISIS.  But during the campaign they got my hopes up with talk of his advisor Dr. Judy Shelton advocating effectively going to a gold standard by providing something like the Treasury Inflation Protected Security (TIPS) bonds that would be redeemable in either dollars or gold.  For a brief moment there was hope the Federal Reserve might be severely restricted or even shut down.  Not that I didn't see the handwriting on the wall within days of the election, when he picked "Government Sachs" veteran Mnuchin as Treasury secretary.

Yeah, he's better than Hillary would have been, but I feel pretty confident my cat Moe would have been better than Hillary, too, not to mention various parrots and golden retrievers I've met over the years. 

Friday, April 5, 2019

NOAA/NWS Say We're Not Heading into a Maunder Minimum

Back in November, I presented notes on a talk by Dr. Valentina Zharkova (with a video of the talk) in which she advances the prediction that by the 2030s solar activity will be at a level not seen since the Maunder Minimum and the Little Ice Age.
That's the prediction Dr. Valentina Zharkova advances in a presentation of her "Climate and the Solar Magnetic Field Hypothesis" presentation at the Global Warming Policy Foundation in October, 2018.
According to Dr. Zharkova, the deep solar minimum that we appear to be going into is expected to last from "2020 to 2053". 

The NOAA/NWS scientists who forecast solar cycles issued their own predictions today (Hat Tip to Watts Up With That) and they disagree with her.   
“We expect Solar Cycle 25 will be very similar to Cycle 24: another fairly weak cycle, preceded by a long, deep minimum,” said panel co-chair Lisa Upton, Ph.D., solar physicist with Space Systems Research Corp. “The expectation that Cycle 25 will be comparable in size to Cycle 24 means that the steady decline in solar cycle amplitude, seen from cycles 21-24, has come to an end and that there is no indication that we are currently approaching a Maunder-type minimum in solar activity.”

Image used at WUWT credited as from the Twitter feed of NOAA's Space Weather Workshop.

I've never seen this style of plot from the Space Weather guys, but it's an intuitive way to show where they expect data from the next 11 years (until 2030) to fall - perhaps a bit easier to read than a plot with error bars.  Plus, it's better than just copying the cycle 24 activity, which seems like what they did.

The inset box at the top right shows their prediction for the minimum of the current cycle (24) to come some time between the midpoint of 2019 (this June) and 3/4 of the way through 2020 (August) and they're predicting the maximum to occur in the window of 2023 through 2026.  

What the Space Weather Prediction Center doesn't really tell us is what this prediction is based on, making it essentially impossible to compare their predictions to Dr. Zharkova's.  Dr. Zharkova is very up front about her prediction methods; in fact, they're what the entire presentation is about.  Her work is on the solar dynamo, the magnetic fields that create virtually everything we see on the sun.  It began by observing the sun and attempting to come up with a model to explain the patterns we see.  She then applied modern digital signal processing methods (my words, not hers) to find "principal components" and their periods.  Her first paper that I became aware of, in '15 (still on Nature), was based on two principal components, while her later paper appearing this past November increased that to four principal components, which should increase the accuracy of her predictions.  As it is, hers was one of only two models out of about 150 that correctly predicted solar cycle 24 would be weaker than cycle 23.

The only reservation I have about her work is that it's based on a small interval in time, cycles 21-23 (about 33 years).  I go into more details in that November '18 piece (first link).  The Space Weather Prediction Center puts up nothing like her work for comparison purposes.  For predictions of a few cycles, Dr. Zharkova's work has been rather successful, and it matches the sunspot record back to before the Little Ice Age. 

Who's right?  Do we have another little ice age, or does the weather stay like the last 10 years for the next 10?  Guess we just have to wait and see. 

Thursday, April 4, 2019

WiFi 6 vs. 5G - Coming Soon to a Computer Near You

The coming next generation of cellphone data services, 5G, has gathered lots of attention.  So much attention that there are already lawsuits over Verizon and AT&T advertising they have 5G even though nobody really does.  I hope it's obvious that 5G means "Fifth Generation", as opposed to the currently widely distributed 4G (fourth...), LTE (Long Term Evolution) and so on.  For reasons I don't really understand, 5G is gathering much more hype - probably because it's coming at the same time as the "Internet of Things", IoT (as I call it, the Internet of Things That Don't Quite Work Right - IoTTDQWR), and the IoT is gathering lots of suspicion as reporting everything you think, do, or say to some Them.

Electronic Design magazine's newsletters treat us to an interview with Cees Links of the Netherlands, one of the original inventors of WiFi.  With the title, "Wi-Fi 6 vs. 5G: Why Trying to Pick a Winner is Missing the Point", it's really not about which makes the better network so much as talking about making the networked radio experience better all the way around.

Perhaps you've missed the buzz about WiFi 6 coming, so the Verge has a good introductory article on some of what's different.  Like 5G, it will be "real soon now" - probably by the end of this year, since the specification doesn't appear to be released yet.  

This is where I find Cees Links' perspective good to read:
Of course, some of the messaging around 5G is just marketing hype, showcasing the favorable points and ignoring the less favorable ones. The claim is that 5G with 4 Gb/s will be faster than Wi-Fi (.11ac) with 1.3 Gb/s. The immediate counter argument is that Wi-Fi (.11ax) with 9.6 Gb/s will be faster than 5G.

But will these speeds be achieved in real life? We’ve seen this before, these glossy promises of high-speed access being wiped away by the hard truth of “but I still cannot get a decent connection in the basement,” or something similar, because—and here’s the real headline—how good will 9.6 Gb/s Wi-Fi be in the basement, if the connection to the home is 300 Mb/s, or even less? It seems like we’re working on the wrong issues, doesn’t it?
To begin with, let's get something out of the way: 5G isn't getting all-new spectrum.  While it does move into chunks of spectrum previously unused by phone data links, 5G also reuses frequencies the older generations use.  This chart from Keysight Technologies, a long-time test equipment maker, is color coded:

The purple numbers are 3GPP or LTE frequencies repurposed for 5G and the teal numbers are new 5G frequencies.  The important points are the spectrum reuse (many of the 5G frequencies are currently in use) and that the numbers on the far right go as high as 40 GHz.  These frequencies don't penetrate the atmosphere, walls, leaves on trees, or other bits of the environment very well.  Yes, running more power can help, but they can't run unlimited power due to safety regulations, and the extra power just doesn't buy much.
Better coverage inside the home is one of the key characteristics of the new generation of Wi-Fi, now called Wi-Fi 6 (based on the IEEE 802.11ax standard). The distributed concept behind this new version of the Wi-Fi standard (also called Wi-Fi mesh) helps to distribute internet to every room in the home, with the main router at the front door, and small satellite routers (also known as repeaters) on every floor and in every room. This enables internet service providers to sell and support solid internet connectivity everywhere in the home—all good news!
As a consumer, I think I'm pretty typical in that I don't care what radio protocol I'm using, WiFi 6 or 5G; I just want it to work right.  Anywhere - whether that's here at my desk or by the side of a road somewhere.  The drawback to all this is that I'd need to go buy new hardware, and I think I'm a pretty typical consumer in not wanting to do that.  There are no 5G cellphones yet (and my bet is that it will be years before 5G gets here to the small towns).  Likewise, there are no WiFi 6 routers yet; the 802.11ax specification is expected to be implemented by the end of the year.  It seems to me WiFi 6 will get here before 5G, but that would require that I buy a router, then repeaters, and then things that go on the other end of the WiFi link.

Cees Links raises some interesting points about how the cellphone industry comes from the background of the heavily regulated telephone industry, selling access to licensed frequencies, while the WiFi industry comes from a background of using unlicensed frequencies.  When the cellphone providers think of providing data services like 5G, it's like bringing the phone twisted pair to your house: what you do inside the house is up to you.  Likewise, the WiFi vendor thinks of selling you a router, and then you make everything work.  Nobody has done a good job of integrating the router and the cellphone data stream.  That's probably what consumers want more than disjointed pieces of hardware and data services that just don't seem to work right.

Wednesday, April 3, 2019

Receivers and Other Radios - Part 3

This is offered as part 3 of my (long dormant) The Least You Should Know series.

This is going to deal with things that are helpful to understand when comparing different radios, because the things I'm going to talk about are both descriptive and quantitative.  I'm going to start with the quantitative.  I know this is going to turn off a lot of people, because they stopped thinking about math long ago.  What I'm talking about, though, is just knowing the buttons to push on your calculator, not solving nasty problems.

For example, let me start with the most useful thing.  Radio signals cover a tremendously large range of voltages.  Your $25 Chinesium handie talkie might give you audio you can understand on 2 meter FM with the antenna providing 0.25 microvolt - 1/4 of 1 millionth of a volt.  On the other hand, if someone is standing next to you, they could put a couple of hundred millivolts into the radio, nearly a million times more voltage.  To handle those numbers conveniently, we take ratios of the powers and call them decibels.
A decibel (dB) is a ratio of two powers, 10*log (P2/P1) 
You might recall Ohm's law says that power is voltage squared divided by resistance, or P = V²/R.
If you take 10*log((V2²/R)/(V1²/R)), that turns into 10*log(V2²/V1²) if R is the same in the numerator (top) and denominator (bottom).  In radio circuits, that's not uncommon.
By the rules of how exponents work with logarithms, that turns into 20*log(V2/V1).
There's a gotcha hiding here: if the resistor values aren't the same, you can't calculate the dB difference by 20*log(V2/V1), you have to calculate both powers and go back to the 10*log(P2/P1).
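A quick way to convince yourself the 10*log and 20*log forms agree when the resistances match is to just punch both in.  A sketch in plain Python (the function names here are mine, not from any library):

```python
import math

def db_from_power(p2, p1):
    """dB from a ratio of two powers (watts)."""
    return 10 * math.log10(p2 / p1)

def db_from_voltage(v2, v1):
    """dB from a ratio of two voltages across the SAME resistance."""
    return 20 * math.log10(v2 / v1)

# Two voltages across the same 50 ohm resistance
v1, v2, r = 1.0, 2.0, 50.0
p1, p2 = v1**2 / r, v2**2 / r

print(db_from_power(p2, p1))    # ~6.02 dB
print(db_from_voltage(v2, v1))  # ~6.02 dB - same answer, as promised
```

Change one of the two resistances and the shortcut breaks, which is exactly the gotcha above.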

The different disciplines in electrical engineering often have their own favorite units.  Radio designers have long taken the special case where the reference is 1 milliwatt (mW) or .001 Watt, and the unit is called a dBm, "dee bee em".
10 * log(.001/.001) =  10 * log(1) = 10 * 0 = 0 
1 milliwatt is 0 dBm
To a receiver, 0 dBm is a BIG signal, yet while tuning around the shortwave broadcast band around 5 MHz, I've seen signals that big.

What about turning a voltage like that 1/4 microvolt (often crudely written uV) into dBm?  It's a two-step process.  Here, I'm going to use "calculator talk," which is to use the EE key (Enter Exponent) that a lot of calculators have.  First we turn that voltage into power by V²/R, then turn that power into dBm.  The input of many modern receivers, especially broadband receivers, is 50 ohms and as close to a pure resistance as they can manage.
P = (0.25EE-6)²/50 = 1.25EE-15 Watts.
10*log(1.25EE-15 / .001) = -119 dBm
A shortwave receiver might have to handle a range of signals from a few microvolts (5 uV is -93 dBm) up to as much as 10 milliwatts (+10 dBm).  Commercial receivers will give good usable audio out at -110 dBm, making their operating range 120 dB wide.  That's a ratio of 10¹², or 1 trillion.
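That two-step conversion is easy to wrap up so you only have to get it right once.  A sketch in plain Python (the function name is mine, not any standard library's):

```python
import math

def uv_to_dbm(microvolts, r_ohms=50.0):
    """Convert an RF voltage in microvolts to dBm (power into r_ohms)."""
    volts = microvolts * 1e-6
    watts = volts**2 / r_ohms          # P = V squared over R
    return 10 * math.log10(watts / 0.001)  # referenced to 1 milliwatt

print(round(uv_to_dbm(0.25)))  # -119 dBm, the handie talkie example
print(round(uv_to_dbm(5.0)))   # -93 dBm, the shortwave example
```

The default of 50 ohms matches the broadband receiver input assumed above; hand it a different resistance and the dBm number changes accordingly.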

Going back and forth between volts (micro-, milli-, and so on) and dBm is tiring and tedious.  That's why radio engineers tend to stay in dBm, or at least stay in either power units or voltage units.  You either get a little piece of software for your calculator or other thinking device, or get a table (pdf) like this.

Another useful thing to know about dB relationships is how they correspond to linear ratios.  A few worth memorizing:

 1 dB ≈ 1.26x power, 1.12x voltage
 3 dB ≈ 2x power, 1.41x voltage
 6 dB ≈ 4x power, 2x voltage
10 dB = 10x power, 3.16x voltage
20 dB = 100x power, 10x voltage

For power ratios, every 3 dB doubles the power: adding 3 dB to 3 dB doubles it from 2x to 4x, and adding another 3 dB to that 6 dB doubles it again from 4x to 8x the original power.  For negative values, it's the reciprocal of whatever the chart says, so -3 dB is 1/2 the power, -6 dB is 1/4, and so on.  Remember our 20*log definition?  The ratios in that little chart are for power, which means that a 3 dB ratio looks like 2x the power, but only 1.41x the voltage.  The important part is that there's no such thing as a "voltage dB" versus a "power dB."  A dB is a dB; the same ratio just looks different in voltage than it does in power.
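If you'd rather generate those ratios than memorize them, it's just the two formulas in reverse.  A quick sketch in plain Python:

```python
# Rebuild the dB-to-linear-ratio chart: a power ratio is 10^(dB/10),
# and a voltage ratio (across the same resistance) is 10^(dB/20).
for db in (1, 3, 6, 10, 20):
    power_ratio = 10 ** (db / 10)
    voltage_ratio = 10 ** (db / 20)
    print(f"{db:>2} dB -> {power_ratio:6.2f}x power, {voltage_ratio:5.2f}x voltage")
```

Note the 3 dB row comes out as 2.00x power but 1.41x voltage, the point made above.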

When we talk about sensitivity and selectivity, these are usually expressed with dB ratios.  Sensitivity might be expressed as dB of signal-to-noise ratio (SNR) or the slightly more involved signal to noise and distortion (SINAD).  If one radio claims a 10 dB SNR at -107 dBm and another claims it at -110 dBm, the second one is 3 dB better, which is twice as sensitive in power terms.  That may or may not be meaningful for your use.  Selectivity might be specified as: at some frequency offset from where you're tuned, an interfering signal is attenuated (made weaker) by 40 or 60 dB.
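To make that sensitivity comparison concrete, here's a sketch using the hypothetical numbers from the paragraph above (the function is mine, purely for illustration):

```python
def sensitivity_advantage_db(dbm_a, dbm_b):
    """How many dB more sensitive radio B is than radio A.
    A lower (more negative) dBm for the same SNR is better."""
    return dbm_a - dbm_b

advantage = sensitivity_advantage_db(-107, -110)
print(advantage)               # 3 dB
print(10 ** (advantage / 10))  # ~2, i.e. twice the power sensitivity
```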

EDIT 4/15/19 1045EDT: removed incorrect references to dBm only being defined as such in 50 ohms.

Tuesday, April 2, 2019

Plastic Bag Bans - Hurting the Environment and Making People Sicker

New York state has joined California in banning plastic shopping bags, trying to force shoppers to use those oh-so-fashionable cloth bags all the trendy people are using.  It becomes only the second state to legislate this silliness.  Hans Bader, writing in Liberty Unyielding, brings the story, starting off with some world-class snark.
As Daniel Frank sarcastically notes, “Reusable tote bags” can “cause food poisoning but at least they’re worse for the environment than plastic bags.” He cites Jon Passantino of BuzzFeed News, who observes, “Those cotton tote bags that are so trendy right now have to be used *131 times* before it has a smaller climate impact than a plastic bag used only once.”
Plastic bags make up somewhere between 0.5 and 1.0% of the waste stream, depending on whose data you look at.  To eliminate that 1%, the tradeoff is taking your plastic grocery bags to a recycling drop-off (probably near the front door of your grocery store) or buying and maintaining the cloth bags - which means washing and disinfecting them after every use.  Naturally, companies like ChicoBag want you to buy their product instead of using plastic bags, and as part of pushing for that they engaged in some rather unsettling deceptions.  The plastic bag industry's trade group sued ChicoBag over them.
That was illustrated by a 2011 legal settlement between plastic bag makers and an importer of reusable bags, ChicoBag. The plastic bag makers sued ChicoBag for its use of false claims about the recycling rate and environmental impacts of plastic grocery bags in its promotional materials. (Those false claims are also the basis for municipal bans and taxes on plastic bags.) Under that settlement, ChicoBag was required to discontinue its use of its counterfeit EPA website and make corrections to its deceptive marketing claims, which had included sharing falsified government documents with schoolchildren. It was also required to disclose to consumers on its website that reusable bags in fact need to be washed.  [Bold added - SiG]

Reusable bags “are a breeding ground for bacteria and pose public health risks — food poisoning, skin infections such as bacterial boils, allergic reactions, triggering of asthma attacks, and ear infections,” noted a 2009 report.  Harmful bacteria like E. coli, salmonella, and fecal coliform thrive in reusable bags unless they are washed after each use, according to an August 2011 peer-reviewed study, “Assessment of the Potential for Cross-contamination of Food Products by Reusable Shopping Bags.” [ Note: dead link]
Central to the claims of the anti-plastic bag companies and organizations is the idea that plastic bags are dangerous in the environment.  When you consider that the United States is responsible for about one percent of the plastic waste entering the oceans, and plastic bags are less than 1% of that 1%, we start seeing how small the problem actually is - yet they're willing to make people sick over it.  The other argument is that plastics persist in the environment for long periods.  Cue the ominous claims about the Great Pacific Garbage Patch, more accurately but less spectacularly described as a "spot with a higher density of partially biodegraded microplastics."
Among the inaccurate claims that ChicoBag could no longer make after the settlement is one that contrasted the environmental impact of plastic versus reusable bags. Contrary to ChicoBag’s previous claims, a study done for the U.K. Environmental Agency showed it would take 7.5 years of using the same cloth bag (393 uses, assuming one grocery trip per week) to make it a better option than a plastic bag reused three times. See “Life Cycle Assessment of Supermarket Carrier Bags,” Executive Summary, 2nd page. As an earlier report on the subject noted (see p. 60):
[A]ny decision to ban traditional polyethylene plastic grocery bags in favor of bags made from alternative materials (compostable plastic or recycled paper) will be counterproductive and result in a significant increase in environmental impacts across a number of categories from global warming effects to the use of precious potable water resources. … [T]he standard polyethylene grocery bag has significantly lower environmental impacts than a 30% recycled content paper bag and a compostable plastic bag.
As the UK Environmental Agency pointed out in July 2011, a “cotton bag has a greater [harmful environmental] impact than the conventional [plastic] bag in seven of the nine impact categories even when used 173 times. … The impact was considerably larger in categories such as acidification and aquatic & terrestrial ecotoxicity due to the energy used to produce cotton yarn and the fertilisers used during the growth of the cotton” (see p. 60). Similarly, “Starch-polyester blend bags have a higher global warming potential and abiotic depletion than conventional polymer bags, due both to the increased weight of material in a bag and higher material production impacts” (see Executive Summary).
Unfortunately, I'm unable to look at any link in the last three paragraphs to check them for more details. 

Image Credit: Pixabay