
Thursday, November 30, 2017

Today Ended the 2017 Atlantic Hurricane Season

Lately, I've been letting out a bit more of a sigh of relief when hurricane season is over.  Its end also usually coincides with a second "rainy season" here.  Unlike the summer rainy season, which hits predictably in the afternoon and nearly every day, the winter rainy season comes ahead of cold fronts.  Winter rain can come at any time of day, and more like one or two days a week.

So how did it stack up against NOAA's predictions?  NOAA did pretty well this time:


They got the number of named storms in the right range, and the hurricane and major hurricane counts are both off by the same single storm.
Based on the Accumulated Cyclone Energy index, which measures the combined intensity and duration of the storms during the season and is used to classify the strength of the entire hurricane season, 2017 was the seventh most active season in the historical record dating to 1851 and was the most active season since 2005.
A couple of milestones occurred this season: the almost-12-year stretch without a major hurricane making landfall in the US was broken by Harvey in Texas.  All in all, three major hurricanes made landfall in the US: besides Harvey there was Irma here in Florida and Maria on Puerto Rico.

Watts Up With That reports lots of interesting little details from NOAA, but I'll leave that to those interested.  My main concern is the quality of the forecasting, and I'm of the opinion that it isn't much better than it was five or ten years ago.  No, I can't quantify that; I don't have metrics to do so.  I posted some of my discomfort with the way forecasts are made back after Irma.  A month before that, I examined one potential storm in particular and how bad the long-range forecasts on it were.

There's a very tough reality here.  From everything I can see, the hurricane center simply is incapable of forecasting the path accurately enough for us to evaluate the risk a couple of days out from the storm.  In the case of a storm like Irma, they're incapable of forecasting the path accurately enough for someone in the Miami area to decide whether to get out of the Florida peninsula while they still can (two days in advance).

The winds to be avoided in a hurricane are within the first few miles around the eyewall.  Even in the strongest storms, hurricane-force winds don't cover 50 miles; it's more like 10-30 miles.  We have evidence that hurricane conditions in Irma, when she crossed the Keys as a Cat IV storm, were on the order of 20 miles wide, including the eye.  We have evidence that when Irma crossed Naples it wasn't even a hurricane.  Hurricanes are chaotic, though, and it's very common for them to cause tornadoes as the winds sweep onshore.  There are many aspects that appear to be fundamentally unpredictable.

To borrow a quote from myself back in August:
Long time readers may recall that last October, within 24 hours of closest approach, the NHC forecast Hurricane Matthew to be over my head as a Cat IV storm. Actual closest approach was about 50 miles away and a much weaker cat II. We didn’t get hurricane force winds. That’s an enormous difference in the risk from the storm, since wind damage scales as velocity squared.  I'd like to see them more accurate at 24 hours, let alone at 10 days.
I'll go easy on them.  At 48 hours out, I want them to peg the center to within +/- 5 miles.  It's not like they don't have the most advanced supercomputers known to man at their disposal, right?  Do you think they can do that by 2050?
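To put a number on that velocity-squared point from the Matthew quote, here's a quick sketch in Python; the wind speeds are just values I'm assuming as representative of a mid-range Cat IV and Cat II, not measurements from either storm:

```python
# Wind load (and, to first order, wind damage) scales as velocity squared.
# Representative Saffir-Simpson wind speeds, assumed for illustration only.
cat4_mph = 140.0   # mid-range Category 4
cat2_mph = 100.0   # mid-range Category 2

ratio = (cat4_mph / cat2_mph) ** 2
print("Cat IV vs Cat II wind load: %.1fx" % ratio)   # about 2x the force on a structure
```

So a two-category miss in the forecast roughly doubles (or halves) the force the wind puts on your house, which is why that difference matters so much.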




Wednesday, November 29, 2017

Why First World Countries Have Third World Cities

So goes the clickable title from the Foundation for Economic Education, FEE.

TL;DR version: it's what you think.  These places are run by corrupt leftist politicians.  There's too little freedom.  They enact laws to tax and control everything, hurting economic freedom.
The FEE article does a more thorough analysis of the story, relying heavily on research by an economist here in Florida, Dean Stansel of Florida Gulf Coast University:
There is a wide consensus amongst economists that economic freedom largely determines the wealth of nations and metropolitan areas are no exception to this rule. As Economist Dean Stansel, in his paper, An Economic Freedom Index for U.S. Metropolitan Areas states, “higher levels of local economic freedom are found to be correlated with positive economic outcomes.”
...
Both Baltimore and Detroit make it into the top 5 cities with the highest tax burdens, according to the Office of Revenue Analysis. As for New Orleans, Louisianans face the third highest combined state and local sales taxes, as well as excessive levels of deficit spending. These three cities are also plagued by excessive and even bizarre occupational licensing laws. Louisiana licenses florists, Detroit licenses hair-braiders, and Maryland counties license fortune tellers. If only Maryland’s licensed fortune tellers could have predicted that big government would cause businesses to flee these cities.
The truth, of course, is that the US is sprinkled with third world cities. Remember the story about Seattle removing the laws against public defecation and the problems that caused?  It's going on in more places.

Of course, if you're a reader here, you just might be a firearms enthusiast and you know that Detroit, Baltimore, and New Orleans are rife with violence.
Per 100,000 people Detroit’s gun homicide rate (35.9) is just shy of El Salvador’s rate (39.9), Baltimore’s rate (29.7) nearly matches that of Guatemala (34.8), and if New Orleans were a country it would have the second highest homicide rate in the world (62.1) – behind Honduras (68.3) and well ahead of Venezuela (39.9). Incidentally, these three cities have some of the strictest gun laws in the country.
As you might expect, how economic freedom is measured is up for discussion, and Stansel lists both the most and least free cities.  Of the ten most free cities, seven are in Florida; mostly smaller cities, not the big blue cities of Miami, Ft. Lauderdale, or Orlando.



Tuesday, November 28, 2017

This is Kinda Cool, But It Also Kinda Creeps Me Out

A couple of weeks ago, the FDA approved the first medication with an embedded sensor to verify the pill was taken.
This week the FDA approved a new technology geared toward patient compliance in the form of a prescription pill with a digital sensor embedded in it that lets doctors digitally track just how often a patient is taking his or her medication. The sensor was developed by Proteus Digital Health, a technology company centered around developing what it called “digital medicines,” that combine sensor technology and pharmaceuticals to improve patient outcomes.
The system is being introduced in the drug Abilify, an antipsychotic drug.  This is the kind of drug that must be taken regardless of how good the patient feels, and I'm under the impression this might lead to poor compliance.  "They" say that getting patients to comply and take their medications is a significant problem for doctors.  This sensor is a simple integrated device that is powered when stomach acid provides the electrolyte to activate a microscopic battery in the pill.  It doesn't really transmit like a miniature radio; it turns the power on and off in a digital signal that's picked up the way an EKG picks up the electrical activity of the heart.  The on/off code identifies the medication and its dosage.
According to a study published in 2015 by Proteus engineers and researchers in IEEE Transactions on Biomedical Engineering, Proteus' sensor, the IEM, consists of three layers: an active layer, a 1 mm × 0.3 mm CMOS chip, and an insulation skirt layer, meaning the chip is sandwiched between a layer of magnesium on one side and copper on the other. Thompson reported the IEM silicon wafer as measuring 800 × 300 µm.

After it is swallowed, the sensor comes into contact with the patient's stomach fluid, creating an electrochemical reaction that powers the chip until the electrode materials are fully dissolved. The IEEE study estimated the power at about 1.85 V. Proteus engineers looked at other means for powering the device, such as using electrolyte fluids; however, they found the magnesium/copper combination was optimal for biocompatibility (meaning it's safe to ingest), power output, cost, and compatibility with the manufacturing process.

In essence, the sensor is not a mini WiFi, Bluetooth, or radio antenna – it's a detectable power source. The electric signal transmits a binary number that represents the medication and its dosage. The code is stored in the integrated circuit, which modulates the current. The device's insulating skirt shapes the electric field produced by the electrochemical reaction and propagates it through the surrounding tissue, where it can be detected by a skin-worn patch (the MyCite Patch), which records the date and time of the ingestion as well as some patient vitals it detects on its own, and can store them on the MyCite's accompanying smartphone app. Using the smartphone app, patients can choose who has access to their records, allowing family members and doctors access to check in on them if need be. According to the IEEE study, the electric field emitted by the IEM is similar in nature to ones that occur naturally in the body in the brain, heart, and gastrointestinal tract.
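To picture the kind of on/off signalling being described, here's a toy sketch of shifting a medication/dose code out as a binary pattern.  The 16-bit code, the start/stop framing, and everything else here are my assumptions for illustration; Proteus' actual encoding isn't published in these articles:

```python
# Hypothetical medication/dose code; the real device stores its own code in
# the integrated circuit and modulates the battery current with it.
MED_ID = 0b1010_0011_0001_0110

def to_on_off_pattern(code, bits=16):
    """On (1) / off (0) pattern, MSB first, with simple assumed framing bits."""
    payload = [(code >> i) & 1 for i in reversed(range(bits))]
    return [1, 1, 0] + payload + [0, 1, 1]

def decode(pattern, bits=16):
    """Recover the code from the payload portion of the pattern."""
    value = 0
    for bit in pattern[3:3 + bits]:
        value = (value << 1) | bit
    return value

pattern = to_on_off_pattern(MED_ID)
assert decode(pattern) == MED_ID
print(pattern)
```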
Does it work?  Apparently well enough for the FDA to approve it.  They say the time delay between taking the pill and when it generates a signal isn't well controlled, and sometimes it never generates a signal at all.
The IEEE study notes that the power and strength of the signal can depend on a number of factors such as the amount of food or even other medications in the patient's stomach. According to Proteus it can take anywhere from 30 minutes to two hours after Abilify MyCite is ingested for the patch to record a signal. And the company admits it is possible that a signal won't be picked up at all.
The thing is, if the purpose is to ensure compliance and that patients take their meds, neither the FDA nor Proteus say that it has been demonstrated to do so.  Because of the long and apparently uncontrollable response time, the FDA specifically warns against using it in emergencies or when critical real-time tracking is required.  It seems to me that if the pill is to be taken with meals (or on an empty stomach) and the system can't tell that, it's not very useful.

So what's it good for?  That's the part I can't answer.  It seems to work in the sense of a statistical audit: a random sample to ensure the pills were taken at some point, but it can't guarantee that Pill #36 was taken on day #36, if that matters.  When the signal comes through it can tell that, but apparently the signal coming through isn't something we can count on.

The idea of ingesting a pill that can track you raises privacy and ethical concerns, particularly if the technology advances beyond the binary approach (took the pill/didn't take the pill) of the Abilify with MyCite.  The paranoid among us can easily envision being given drugs to modify behavior and the system monitoring us to ensure the pills aren't simply being flushed or dropped in the garbage.  As Design News speculates, it's easy to envision a system like the MyCite sensor/patch being integrated with your smart home/Internet of Things That Don't Quite Work Right systems.  Your home would remind you it's time to take your medications and the MyCite would confirm you took them.

The aspect that has always puzzled me is that no one seems to be asking why patients aren't taking their pills and addressing that root cause.  I've heard that one accepted reason is that patients are given pills that cause memory issues, and that's why they forget; if they stay off those pills long enough, their memory improves enough to remember to take them.  Perhaps they're skipping the drug because it gives them a terrible side effect they'd rather not endure, but they've never told the doctor, or the doctor didn't address it to their satisfaction.  While I realize that some percentage of patients get side effects from any drug, there doesn't seem to be a concerted effort to produce drugs that have fewer side effects.  I'm sure designing drugs with fewer side effects is a difficult problem.  So was going to the moon, and about a bazillion other engineering problems.  Among the most horrific things I can imagine is being given a drug that's supposed to help some issue and, along with the intended change, ruins my ability to think.

Illustration from Proteus in Design News.  Note that the flow chart says "if the MYCITE APP does not indicate that the ABILIFY/MYCITE tablet was taken, do not repeat the dose."  That's tantamount to saying that the system is useless.  The article concludes with the statement that the drug company is doing a limited roll out of Abilify MyCite. The company said this is a deliberate move to allow it to focus on learning more about patients' experiences with the pill and to allow for ongoing feedback before a larger market release.



Monday, November 27, 2017

Your Feel Good Story of the Day

Story from WGN9 in Chicago by way of PJ Media.

Kate McClure, 27, of Bordentown, New Jersey, was driving into Philadelphia on Interstate 95 last month to visit a friend when her car ran out of gas.  McClure pulled over and got out of the vehicle to walk to the nearest gas station.  That's when she met Johnny Bobbit, Jr., a homeless ex-Marine who happened to stay near where she ran out of gas. 
Bobbit, however, was having none of it. He knew the neighborhood was a dangerous place for a woman to walk alone:
[Bobbit] told me to get back in the car and lock the doors. A few minutes later, he comes back with a red gas can (and) his last 20 dollars to make sure I could get home safe.
This is a serious act of generosity from anyone. But Johnny Bobbit is homeless, and that last $20 may have meant all sorts of important things to him. He gave it to someone in need, even though most of us would have seen Bobbit as the one in need.

Perhaps unsurprisingly, Bobbit is a former Marine.

He fell on hard times due to drug use and money problems, but he remembered how to act like a Marine.
The articles don't say how long it took for them to get to know each other, but they do say that McClure didn't have money to pay Bobbit back on the night he helped her.  She returned several times to the spot where he stays, offering him a few dollars or supplies each time.

At some point, McClure got the idea to put the story up on GoFundMe, hoping to help him over the rough patch in life he had been going through.  She originally wrote on the GoFundMe page:
I would like to get him first and last month’s rent at an apartment, a reliable vehicle, and 4-6 months worth of expenses. He is very interested in finding a job, and I believe that with a place to be able to clean up every night and get a good night’s rest, his life can get back to being normal.
And then Americans did what Americans do.  The goal was to raise $10,000 to help out Johnny Bobbit; so far the effort has raised $382,000, and it will probably gather more contributions.
Johnny Bobbit (L), Kate McClure and her boyfriend Mark D'Amico. 

Based on McClure saying that she didn't have $20 that night to repay Bobbit, I'm guessing she's not exactly "made out of money" either, but there's no indication she's doing anything with the GoFundMe proceeds other than her best to help out the homeless Marine.

Across the Northern Hemisphere, we're coming up on the earliest sunsets of the year.  The longest night of the year is a few weeks away.  It can seem like a dark, cold world.  This story is a little ray of sunshine and warmth that makes me feel good about people.


Sunday, November 26, 2017

Autonomous Cars - Part 5

One of the obvious ways to ensure that cars on the road avoid each other and cooperate with each other is to have them communicate with each other.  This is an area that's getting a lot of attention among radio suppliers: Vehicle to Vehicle (V2V) and Vehicle to, well, just about anything: infrastructure, lights, traffic sensors and more.  Together, this all gets wrapped up in what's being called V2X.  There's just one problem: there's more than one way to do it and the industry hasn't chosen one.
The ITS-G5 technology, which is based on the modified IEEE 802.11p WiFi standard, is opposed to the C-V2X, which is based on the 3GPP standards. Although BMW, one of the “inventors” of the V2X technology, has already moved to the C-V2X camp and industry heavyweight Qualcomm recently launched a reference design that can be regarded as a clear commitment to C-V2X, the dispute has not yet been resolved. There are still good reasons for ITS-G5, as our interview with Onn Haran, co-founder and CTO of chip company Autotalks, shows. The company is regarded as one of the pioneers of V2X technology.
As is often the case, these technologies aren't compatible and will interfere with each other.  That means there can be only one.  It becomes a high stakes game of each group developing their products and pushing to get their system mandated.  They push government agencies, not just here, but worldwide.  This interview is from EENewsAutomotive in Europe, but the same thing is going on here.

Commenter Dan, in response to the last post, voiced the idea:
And there has to be a motive behind the massive push to create driverless vehicles. It's not as if the technology is cheap, it's not. It's a very expensive bit of engineering to design, create and implement. It makes one ask why....why are they trying so hard to get this technology into the real world. The realist in me leads me to conclude that such technology will assist those in power with what they enjoy most. CONTROL. If left unfettered eventually autonomous vehicles will become practical ( a term that is subject to debate of course). Eventually the technology will become widespread.....and once it does the power mongers will do what they always do....legislate. They will seek to make it illegal to use a vehicle that is NOT autonomous....because "do it for the children" etc. Once they succeed in banning vehicles that humans can control they will essentially have TOTAL control over all transit and travel in America. And THAT is worth the cost....both in $$$ and in lives...at least to a politician.
I'm sure a lot of people think this, but I'm not sure I want to "go there".  While it's a possible motivation, that's playing a long game.  Many of us, perhaps curmudgeonly, think that this is not coming immediately; it's 25 or more years out.  With a few exceptions, government has shown over and over that they're really not capable of thinking for the long game.  "Long" means the next election cycle.  That said, there have been a few playing a conscious long game of slowly taking over.

I think the answer is more immediate and it's evident in the story about competing standards for the vehicle communications.  The "computer revolution" of the 80s and 90s made the electronics industry, especially the semiconductor side, extremely hype cycle driven.  The chip makers built infrastructure to supply parts for a demand like the computer sector had during those days.  Now that computer sales have dropped, they're constantly looking for the Next Big Thing. 

Today, the Next Big Thing seems to be "The Internet of Things That Don't Quite Work Right".  However, autonomous cars are a gold mine of epic proportions for the chip makers.  I think they say the typical car has around 25 microprocessors in it now; that number will jump, and the number of sensors (like the radar or synthetic vision) will skyrocket.  The number of sensors of that kind in cars today is essentially zero.

There are other factors, of course.  For one, the hype is creating interest among the buying public.  As the AAA study I wrote about months ago says, while the majority of people surveyed are afraid to ride in a fully self-driving vehicle, the survey also found that a bigger majority (59%) wants to have autonomous features in their next vehicle.  For another, it's clear that agencies like the National Highway Traffic Safety Administration (NHTSA) want them, evidenced by the way they're goosing the process to put a regulatory framework in place.

So I see this as sort of a Perfect Hype Storm: the semiconductor industry wants it to sell chips every year*, the auto industry wants to help flagging sales, the Feds probably want it because those deplorable people won't cause so many accidents (greased liberally with knowledge the industries involved will be spreading money around for influence in perpetuity), and finally, the people are intrigued by the idea of perhaps having the car handle some of the tedium of a daily chore.  
 
A look at the V2X communications space, illustration from reference PDF at Innovation Destination article.

* The single highest-volume part that semiconductor giant Analog Devices sells is not an analog integrated circuit.  It's the deceleration sensor in your seat belts that locks them in the event of an accident or other sudden jerk on the belt.  It's a MEMS device (Micro Electro Mechanical System).


Saturday, November 25, 2017

Lightning Strikes Leave Behind Gamma Radioactivity

Back in 2013, I posted a story on the discovery of what was being called "Dark Lightning".  Lightning was found to be producing enough energy to produce gamma rays and antimatter.  The diffuse gamma radiation and the spotlight beam of antimatter were called Dark Lightning.  The main reason for posting this was that it was cool to find that lightning, which is an everyday occurrence around here for half the year, is producing gamma radiation, something which had been thought to be far too energetic for a thunderstorm.  (The secondary reason was that the researcher behind the paper was a professor from our local college, the Florida Institute of Technology; and the professor had been a faculty advisor to the young padawan engineer I was helping at the time. )

In the intervening years, researchers have continued to investigate the gamma ray production, and this week Ars Technica reports that Japanese researchers have discovered lightning leaves behind a "radioactive cloud".
Gamma rays are primarily noted for their interaction with the electrons of any atoms they run into—it's why they're lumped in the category of ionizing radiation. But they can also interact with the nucleus of the atom. With sufficient energy, they can kick out a neutron from some atoms, transforming them into a different isotope. Some of the atoms this occurs with include the most abundant elements in our atmosphere, like nitrogen and oxygen. And, in fact, elevated neutron detections had been associated with thunderstorms in the past.

But a team in Japan managed to follow what happens with the transformed atomic nucleus. To do so, they set up a series of detectors on the site of a nuclear power plant and watched as thunderstorms rolled in from the Sea of Japan. As expected, these detectors picked up a flash of high-energy photons associated with a lightning strike, the product of accelerated electrons. These photons came in a variety of energies and faded back to background levels within a couple of hundred milliseconds.

But about 10 seconds later, the number of gamma rays started to go back up again, and this stayed elevated for about a minute. In contrast to the broad energy spectrum of the initial burst, these were primarily in the area of 500 kilo-electronVolts. That happens to be the value you'd get if you converted an electron's mass into energy.
The article goes into a fair amount of detail, but one of the conclusions is that the gamma ray reactions in the storm act to produce Carbon 14.
The 500keV photons the authors were seeing weren't a direct product of the radioactive decay. Instead, 13N and 15O decay by releasing a neutrino and a positron, the antimatter equivalent of the electron. These positrons will then bump into an electron in the environment and annihilate it, converting each of the particles into a gamma ray with the energy equivalent of the electron's mass. That's exactly the energy the researchers were detecting.

(The neutrons that are kicked out typically recombine with other atoms.  For example, adding a neutron to 14N causes it to kick out a proton, which forms a hydrogen atom. The remaining nucleus is 14C, a relatively long-lived radioactive isotope of carbon.)
That means that not only is lightning producing gamma rays, the changes caused by the gamma rays are then producing the radioactive decay that they're observing. 
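As a quick sanity check on that "around 500 keV" figure, here's the electron's rest mass converted to energy, using the usual textbook constants:

```python
# E = m*c^2 for one electron (or positron), expressed in keV
m_e = 9.109e-31      # electron mass, kg
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electron-volt

E_keV = m_e * c ** 2 / eV / 1000.0
print("electron rest energy: %.0f keV" % E_keV)   # ~511 keV, right where the detectors peaked
```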

The ratio of 14C to 12C is how carbon dating is done, and Wikipedia says the conventional story on the ratio was that the amount of 14C was set by cosmic radiation changing nitrogen to carbon.  Now that we know it can also happen due to thunderstorms, that seems like it could conceivably change the dating methods.  If nothing else, the amount of 14C coming from thunderstorms could be higher in the tropics and lower near the poles, giving ratios of the two isotopes that depend on latitude and not just age.

It's interesting to watch this relatively new science being developed.  The Ars article concludes with the observation that "This doesn't mean that thunderstorms are major radiation risks," which seems like stating the obvious.  If the radiation from thunderstorms were high enough to be troublesome, it would have been detected ages ago - we can detect far smaller amounts of radioactivity than are dangerous.  It's cool, though, to see new phenomena being discovered in things that we've been seeing for as long as we've been aware.



Friday, November 24, 2017

Autonomous Cars - Part 4

I don't know about you, but my conclusion about radars was that the problem of detecting things, measuring how far away they are, and measuring their relative speeds is doable.  A radar can detect a large signature like a car at any distance you expect to find in traffic.  Radars routinely find objects with a smaller cross section than a car over hundreds of miles; tracking a car "7 car lengths" ahead is trivial.  The accident I started this discussion with, a Tesla on autopilot decapitating its driver by going under a tractor trailer, was due to Tesla misusing the system by configuring it to "avoid false braking events" (by either the software response to strong returns or the position of the antenna in the car).  Note to Tesla fanboys: I know NHTSA absolved Tesla of any fault, saying it's not their problem.  I disagree with NHTSA and think they were stupid.  To quote EETimes:
Did NHTSA let Tesla off the hook too easily? Absolutely.
Likewise, the problems raised in the comments here about cars interfering with each other, or signals bouncing off of tunnel walls or bridges, and other common examples are not fundamentally different from problems managed successfully in other systems.  Yes, we have problems with cellphones and other commercial systems.  There's a fundamental difference between systems that were designed to be robust in the face of understood problems and systems that aren't.  The high-reliability world approaches things differently than the commercial "let's slap this together and ship it" world.  The wired phone in your house will fail if everyone in the neighborhood tries to use their phone at the same time, because the systems are designed for about 10% of the lines being in use.  It doesn't have to be that way; it's just cheaper that way and has been proven to work well enough.
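For a feel of what "designed for about 10% of the lines" means, here's a small sketch using the classic Erlang B blocking formula; the subscriber count, trunk count, and traffic levels are all numbers I'm assuming for illustration:

```python
def erlang_b(offered_erlangs, trunks):
    """Probability a new call is blocked, by the standard Erlang B recursion."""
    b = 1.0
    for k in range(1, trunks + 1):
        b = offered_erlangs * b / (k + offered_erlangs * b)
    return b

# Hypothetical neighborhood: 500 subscriber lines, switch trunking sized for ~10% of them.
trunks = 50
print("normal evening (40 Erlangs offered):  %5.1f%% blocked" % (100 * erlang_b(40, trunks)))
print("everyone calls at once (400 Erlangs): %5.1f%% blocked" % (100 * erlang_b(400, trunks)))
# A couple percent of calls blocked in normal use, but the vast majority blocked when
# everyone tries to call at once -- exactly the designed-in economy described above.
```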

As I said in the section on the image recognition piece, I think we have to conclude an image system can't do everything we need it to do, but could it "work well enough"?  I don't think so.  The problem with image recognition systems is that they take millions of training examples to be reasonably accurate on a small set of things, and here we have a nearly limitless set it has to recognize and know what to do about.  Do we need nearly infinite training time?

In a larger sense, the problem with the whole idea of Advanced Driver Assistance Systems (ADAS) is that individually, all the pieces of hardware can be envisioned to do the job, but taken as a system, there isn't a chance that the software can work.

If we think of autonomous cars as "cruise control that includes the steering wheel" and simply stays in its lane, even that's getting dicey.  A fully autonomous car that you get into and say, "take me to work" or "take me to grandpa's house" and that does so while you just sit as passive cargo is probably more like 50 years away.  I did five hours of driving yesterday and I really would have liked to not have to deal with traffic in the rain and let JARVIS drive, but JARVIS is a comic book character.  Note to fans of the "Singularity", when computers suddenly surpass the sum of all human intelligence: let's just call it vaporware.  People have been talking about this coming for close to 30 years.  It's much like nuclear fusion reactors, which I first read were "20 years away" back in 1971.  If it happens before then, all bets are off, but as I've said many times and places (most recent), AI is over-hyped.

Driving is a perfect example of the old saying, "hours and hours of boredom interrupted by moments of sheer terror".  What do you do when you see a truck stopped on your street for a delivery?  You probably slow or even stop to look for other traffic and an opportunity to go around the truck.  Unless the ADAS car is programmed to do so, it's not going to know what to do.  What if there's a broken-down car in the middle of your lane?  Or any one of a thousand oddities you see on the road in the course of a year?  Those are easy problems.  Without true intelligence, the software has to recognize it's a truck, understand the laws, understand what it can and can't do, and choose the right option.  What about watching for kids darting between cars on the same street, or riding their bikes on the shoulder of the road?


As I concluded last time, it's impossible to teach the computer "if a child runs out after that ball, slam on the brakes, and if you can't stop, hit something like a parked car".   If it was my kid in the intersection, I'd still prefer a real human mother to any computer driving a car because a real human mother is very likely going to care more about any child than damaging her own car.  A computer isn't going to understand the concept of "child" or "person".  They need to be much more sophisticated AI systems than we have now.  I'm not talking about the actor-voiced "Watson" on the commercials; I'm talking about a really advanced Turing test where you could talk to the computer for a long time and not know it's artificial.  Good luck with concepts like "do I hit the adult on the bike or injure my passengers by hitting the parked bus?" 

The truth of the matter is that driving has difficult moments and not all accidents are due to someone drunk, texting, or on the phone.  Some accidents come down to not having good options left.  

As many commenters have pointed out, the biggest risk in these systems is the combined mad dash to put them into place on the part of industry and the Fed.gov itself.  The public is less sanguine about autonomous cars, with a AAA survey I reported on in September showing
...three-quarters of Americans reported feeling afraid to ride in a self-driving car. One year later, a new AAA survey found that fear is unchanged. While the majority are afraid to ride in a fully self-driving vehicle, the latest survey also found that the majority (59%) of Americans are keen to have autonomous features in their next vehicle. This marked contrast suggests that American drivers are ready to embrace autonomous technology, but they are not yet ready to give up full control.
Commenter Dan said on the second post,
I suspect that sooner rather than later we are going to see one of these autonomous vehicles murder a family of children or a school bus full of kindergarden kids and the legal fallout will end this experimentation for a very long time.
and I've frankly been waiting to hear that news with dread.  I'm sure the ambulance chasers are champing at the bit for it to happen.  They like to go after the deepest pockets possible, and some of the deepest pockets around are in play here.  I can see them going to court even though the "expert" NHTSA ruled in Tesla's favor in the decapitation accident.  Juries are under no requirement to follow NHTSA, in my understanding.  If a jury awards a big punitive award in a case like this, or like Dan describes, it could well put the development of these systems on ice.
Now what do I do?


Wednesday, November 22, 2017

I'm About Black Friday'ed Out

I don't know about you, but I swear I started seeing black Friday ads in July.  For sure, for the last month, I must be getting 50 to 75 emails a day with black Friday in the subject.

When did this become a national thing?

Black Friday was supposed to be called that because it was the day where businesses turned their annual ledgers from red ink to black ink, but in the last few years it seems to have morphed into something else.  It has been reported for years that the big deals aren't necessarily really deals at all, or that some companies raise their prices in the weeks (months?) before the day so that what would have been a normal, small discount from MSRP suddenly seems like a deal.  It's being reported that more and more people are carrying their smartphone into the stores to price check things, check for price and availability at other stores, or get coupons.

Once there started to be a perception that good deals came on black Friday, it was only a matter of time until it became just another way of saying "BIG SALE!".  But shoppers like to think they're getting big deals, and there are stores that put one or two items on a massive discount to get some people to line up the night before.  Maybe they can get some buzz on the news.  Of course, now that stores are opening on Thanksgiving itself, Friday seems like it loses some drawing power.  Still, every year there's some incident where people get violent over something stupid.

It always pays to know what going prices are.  I've heard that generally speaking, the best time for deals is closer to Christmas, especially right before Christmas.  You'll get better prices than this week, but it's a gamble.  You're betting that the stores will be stuck with some of an item you want and would rather discount it than not sell it.  If they sell out first you lose.  If they don't sell out and don't/can't cut the price you lose.  That said, it has worked out for me in the past. 

Retail is a rough way to make a living.  I'm sure you've heard how airline reservation systems base the seat price on the apparent interest in a flight.  If you go back and check on the price of that seat every week, the system decides there must be more demand for that flight and raises the price.  What if stores could measure real-time demand and adjust the price?  Say you're looking for a new tool or other gadget; what if they see you checking the web site regularly, interpret that as more people being interested, and raise the price?  Would you be upset or offended?  What if they dropped the price to see at what level you can't resist pushing the glistening, candy-like "BUY IT" button?  I don't have any hard evidence that anyone does that, but it seems trivial for an online store to track interest in something.  The biggest risk is scaring away customers.

To me the Golden Rule is the willing seller/willing buyer.  My inner engineer drives me to optimize things, but if people are happy with what they paid, regardless of whether or not it really is "the best price of the year", and the seller is happy with what they got for it, that's the definition of a fair price.  I'm sure not gonna poop in anyone's Post Toasties.

As for me, I've never gotten up early to go do a black Friday shopping expedition, and it's doubtful I ever will.


I'll be taking off tomorrow.  Everyone have a wonderful and blessed Thanksgiving.  Not "Turkey Day", but a day for giving thanks. 


Tuesday, November 21, 2017

Autonomous Cars - the Sensor Problem - Part 3

So far in this series, we've looked at the radars being proposed for the task of monitoring traffic.  The radars will judge if cars are in adjacent lanes and the relative velocities of those cars to determine if a potential collision is developing.  For example, the forward looking radar might determine the car ahead has suddenly slowed dramatically and if the car's ADAS doesn't apply brakes, we're going to hit it.  Or it might see if the adjacent lane is unoccupied in case we need to switch lanes.  A side looking radar can also see if a car on a crossing path is approaching.
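As a feel for the arithmetic the forward-looking radar (or whatever processes its output) has to support, here's a minimal sketch of the "do we need to brake now?" calculation; the separation and closing speed are numbers I'm assuming for illustration:

```python
G_FPS2 = 32.2   # one g, in ft/s^2

def required_decel(range_ft, closing_fps):
    """Constant deceleration needed to shed the closing speed within the
    available range: from v^2 = 2*a*d, a = v^2 / (2*d)."""
    return closing_fps ** 2 / (2.0 * range_ft)

# Car ahead suddenly slows: we're closing at 45 mph (66 ft/s) with 150 ft of separation.
a = required_decel(150.0, 66.0)
print("required deceleration: %.1f ft/s^2 (%.2f g)" % (a, a / G_FPS2))
# About 14.5 ft/s^2, roughly 0.45 g -- hard braking, so the ADAS has to act immediately.
```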

All of this seems rather straightforward, but consider the radar systems designers in the Tesla incident we started this series with.  Did they not consider that the interstate signs would be big reflectors and that their radar would illuminate them?  Or did the antenna design get compromised while trying to fit it into the car's body?  Remember, Elon Musk tweeted, “Radar tunes out what looks like an overhead road sign to avoid false braking events.”  "Tuning out" a return is not how a radar guy would put it.  The software either ignored any return over some level, or they took measures to ensure they never got those high levels, like perhaps aiming the antenna down.

Now let's look at the system that really seems to have the least chance of working properly: artificial vision.  A vision system is most likely going to be used to solve an obvious problem: how does the car know where the lane is?  That's what a lot of early work in autonomous cars focused on and there are times and conditions where that's not at all trivial.  Snow or sand is an obvious concern, but what about when there's road construction and lanes are redirected?  Add a layer of rain, snow or dirt on top of already bad or confusing markings and the accuracy will suffer.  When the paint is gone or its visibility goes away, what does the system do? 

A few weeks ago, Borepatch ran a very illuminating article (if you'll pardon the pun) about the state of AI visual recognition.
The problem is that although neural networks can be taught to be experts at identifying images, having to spoon-feed them millions of examples during training means they don’t generalize particularly well. They tend to be really good at identifying whatever you've shown them previously, and fail at anything in between. 
Switch a few pixels here or there, or add a little noise to what is actually an image of, say, a gray tabby cat, and Google's Tensorflow-powered open-source Inception model will think it’s a bowl of guacamole. This is not a hypothetical example: it's something the MIT students, working together as an independent team dubbed LabSix, claim they have achieved.
This was a recent news piece in the Register (UK).  In the mid-80s, I took a senior level Physical Optics class which included topics in Spatial Filtering as well as raytracing-level optics.  The professor said (as best I can quote 30+ years later), “you can choke a mainframe trying to get it to recognize a stool, but you always find if you show it a new image that's not quite like the old ones it gets the answer wrong.  It might see a stool at a different angle and say it's a dog.  Dogs never make that mistake”.  Borepatch phrased the same idea this way: “AI does poorly at something that every small child excels at: identifying images.  Even newborn babies can recognize that a face is a face and a book is not a face.”  Now consider how many generations of processing power have passed between my optics class and the test Borepatch described, and it just seems that the problem hasn't really been solved, yet.  (Obvious jokes about the dog humping the stool left out to save time).
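To illustrate that "switch a few pixels" fragility without hauling out a real neural network, here's a toy sketch that uses a plain linear classifier as a stand-in for a trained model; the 100x100 "image", the 1% perturbation, and the tabby-cat/guacamole labels are all assumptions that just echo the LabSix example quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 100 * 100                       # a 100x100 "image" with pixel values in [0, 1]
w = rng.normal(size=d)              # weights of a toy linear classifier (stand-in for a trained net)
x = rng.uniform(0.0, 1.0, size=d)   # a legitimate input image
b = w @ x - 5.0                     # choose the bias so the classifier is confident: score(x) = +5

def score(img):
    """Positive -> "tabby cat", negative -> "guacamole" (labels only echo the quote)."""
    return float(w @ img - b)

print("original score:    %+.1f" % score(x))

# Fast-gradient-sign style perturbation: nudge every pixel 1% of full scale in
# the direction that lowers the score.  For a linear model the gradient with
# respect to the input is just w, so its sign is sign(w).
eps = 0.01
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print("max pixel change:  %.3f" % np.max(np.abs(x_adv - x)))
print("adversarial score: %+.1f" % score(x_adv))   # swings far negative despite the tiny change
```

The effect grows with the number of pixels, which is part of why high-dimensional image classifiers are so easy to fool this way.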

Borrowing yet another quote on AI from Borepatch
So why such slow progress, for such a long time?  The short answer is that this problem is really, really hard.  A more subtle answer is that we really don't understand what intelligence is (at least being able to define it with specificity), and so that makes it really hard to program.
That's my argument.  We don't know how our brains work in many details - pattern or object recognition is just the big example that's relevant here.  A human chess master looks at a board and recognizes patterns that they respond to.  IBM's Deep Blue just analyzed every possible move through brute-force number crunching.  The chess master doesn't play that way.  One reason AI wins at chess or Go is that it plays the games differently than people do, and the people the AI systems are playing against are used to playing against other people.

We don't know what sort of system the Tesla had, whether it was photosensors or real image capture and real image analysis capability, but it seems to be the latter based on Musk saying the CMOS image sensor was seeing “the white side of the tractor trailer against a brightly lit sky”.  The sun got in its eye?  The contrast was too low for the software to work?  It matters.  In an article in the Register (UK), Google talked about problems their systems had in two million miles of trials: things like traffic lights washed out by the sun (we've all had that problem), traffic lights obscured by large vehicles (ditto), hipster cyclists, four way stops, and other situations that we all face while driving.

A synthetic vision system might be put to good use seeing if the car in front of it hit the brakes.  A better approach might be for cars to all have something like a MAC (EUI-48) address and communicate to all nearby cars that vehicle number 00:80:c8:e8:4b:8e has applied brakes and is decelerating at X ft/sec^2.  That means every car has software that's tracking every MAC address it can hear and determining how much of a threat each car is.
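To make that concrete, here's a minimal sketch of what such a tracker might look like.  The message fields, the units, and the two-second threat threshold are all my assumptions for illustration; none of this is the format of any actual V2X standard (DSRC/802.11p and C-V2X define their own messages):

```python
import time
from dataclasses import dataclass

@dataclass
class V2VMessage:
    sender: str          # MAC-style ID, e.g. "00:80:c8:e8:4b:8e"
    range_ft: float      # distance to the sender, ft (assumed to come from our own sensors)
    closing_fps: float   # closing speed, ft/s (positive = getting closer)
    decel_fps2: float    # deceleration the sender is reporting, ft/s^2
    timestamp: float

class ThreatTracker:
    """Keep the latest message from every MAC address we can hear and flag threats."""
    def __init__(self, ttc_threshold_s=2.0):
        self.ttc_threshold_s = ttc_threshold_s
        self.table = {}   # sender MAC -> latest V2VMessage

    def update(self, msg: V2VMessage):
        self.table[msg.sender] = msg

    def threats(self):
        """Senders whose time-to-collision is below the threshold."""
        out = []
        for mac, m in self.table.items():
            if m.closing_fps > 0:
                ttc = m.range_ft / m.closing_fps
                if ttc < self.ttc_threshold_s:
                    out.append((mac, ttc))
        return out

tracker = ThreatTracker()
tracker.update(V2VMessage("00:80:c8:e8:4b:8e", range_ft=90.0, closing_fps=60.0,
                          decel_fps2=15.0, timestamp=time.time()))
print(tracker.threats())   # [('00:80:c8:e8:4b:8e', 1.5)] -- 1.5 seconds to impact, brake now
```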

A very obvious need for artificial vision in a car is recognizing signs.  Not just street signs and stop signs, but informational signs like "construction ahead", "right lane ends" and other things critical to safe operation.  It turns out Borepatch even wrote about this topic.  Quoting that article from the Register, a confession that getting right the little things people do every day is still overwhelming the Google self-driving cars:
You can teach a computer what an under-construction sign looks like so that when it sees one, it knows to drive around someone digging a hole in the road. But what happens when there is no sign, and one of the workers is directing traffic with their hands? What happens when a cop waves the car on to continue, or to slow down and stop? You'll have to train the car for that scenario.

What happens when the computer sees a ball bouncing across a street – will it anticipate a child suddenly stepping out of nowhere and chasing after their toy into oncoming traffic? Only if you teach it.
It's impossible to teach ethics to a computer.  It's impossible to teach the computer "if a child runs out after that ball, slam on the brakes, and if you can't stop, hit something like a parked car".  A computer isn't going to understand the concept of "child" or "person".  Good luck with concepts like "do I hit the adult on the bike or injure my passengers by hitting the parked bus". 

But that's a question for another day.  Given the holiday, let's pencil in Friday. 
Question for the ADAS: now what? 


Monday, November 20, 2017

A World of Absurdities - Economic, That Is

One of the economics sites I read regularly is Mauldin Economics, by John Mauldin.  He was recommended to me first by a reader here, and my apologies for not remembering who you are.

John has been preparing for a big conference in Switzerland and this week's email, Bonfire of the Absurdities, is a summary of what he's presenting.  I really recommend you Read The Whole Thing.  As usual, I'm going to run a few snippets to whet your appetite.  Mauldin looks at a handful of economic indicators and more or less echoes my observation: there isn't a week that goes by that something doesn't happen to make me say, "the whole world has gone completely FN".  He's a little more polite.

Let's start with a graph a lot of you have already seen: the Federal Reserve bank's assets as a percentage of GDP.
Things went a little wonky there, somewhere around 2008, no?  Over to Mauldin:
Not to put too fine a point on it, but this is bonkers. I understand that we were caught up in an unprecedented crisis back then, and I actually think QE1 was a reasonable and rational response; but QEs 2 and 3 were simply the Fed trying to manipulate the market. The Keynesian Fed economists who were dismissive of Reagan’s trickle-down theory still don’t appear to see the irony in the fact that they applied trickle-down monetary policy in the hope that by giving a boost to asset prices they would create wealth that would trickle down to the bottom 50% of the US population or to Main Street. It didn’t.
In other words, the Fed is as good at seeing the irony of what they do as Antifa.  The really absurd point here is that the Federal Reserve's assets are under 30% of GDP.  The European Central Bank and the Bank of Japan have both grown their balance sheets more than the US has. The Bank of Japan’s balance sheet is almost five times larger in proportion to GDP, and it's still growing.

As long as he's in Switzerland, he needs to show a little of their absurdities, too. The Swiss National Bank (SNB) is now the world’s largest hedge fund.
The SNB owns about $80 billion in US stocks today (June, 2017) and a guesstimated $20 billion or so in European stocks (this guess comes from my friend Grant Williams, so I will go with it). 

They have bought roughly $17 billion worth of US stocks so far this year. And they have no formula; they are just trying to manage their currency.

Think about this for a moment: They have about $10,000 in US stocks on their books for every man, woman, and child in Switzerland, not to mention who knows how much in other assorted assets, all in the effort to keep a lid on what is still one of the most expensive currencies in the world.
And they're barely doing it.  If people deposit money in Swiss bonds, they don't earn yield; they pay for the privilege of losing money in Switzerland!  Switzerland is fighting a monstrous battle to keep their currency from going up.  Yet that's still not the most absurd thing here.
Not coincidentally, European yields are at rock bottom, or actually below that, in negative territory. And what is even more absurd, European high-yield bonds, which in theory should carry much higher rates than US Treasury bonds, actually yield below them. Here’s a chart from old friend Tony Sagami:
Interest rates are supposed to reflect risk. The greater the risk of default, the higher the rate, right? Yet here we see that European small-cap businesses are borrowing more cheaply than the world’s foremost nuclear-armed government can. That, my friends, is absurd.

Understand, the ECB is buying almost every major bond it can justify under its rules, which leaves “smaller” investors fewer choices, so they move to high-yield (junk), driving yields down. Ugh.
The common name for high-yield bonds is "junk bonds", because they have a high risk of default.  Here we find that European junk bonds, which (again) should have the highest yield, are earning less than US Treasuries.  (It doesn't say which maturity of US Treasury, and there are many.  Sorry.)  Does this mean buyers think of the US as junk bonds?  Or do they not make the association and just go where they can get any yield?

Let me leave you with one other plot to get a feel for the absurdity.  This is the total US stock market cap to GDP.  It is now the second highest in this 46-year plot, behind only the dot com bubble of the late 90s and much higher than the bubble that popped in '08.  Really, one good rally, an optimistic "we love 2017!" run up, could put us at the same levels as the dot com peak or beyond.  I wonder how that's going to work out.



There's plenty of absurdity left, and lots of stuff to make you go "hmmm".  Go read.


Sunday, November 19, 2017

Autonomous Cars - the Sensor Problem - Part 2

The first part of this look at the problems with these systems talked about a handful of radar systems that are likely to be on every car.  These are being proposed to work at millimeter wavelengths, frequencies very high in the microwave spectrum.  TI proposed 76 to 81 GHz, but think of that as one company offering a solution, rather than a consensus of system designers.

Let's take a look at radar systems, starting with the basics.

Radar is an acronym that has turned into a word: RAdio Detection And Ranging.  Radio waves are emitted by a transmitter, travel some distance, and are reflected back to the receiver, which is generally co-located with the transmitter (there are systems where they can be widely separated - bistatic radars).  Their signals can be any radio frequency, but higher frequencies (microwaves and higher) are favored because as frequency goes up, size resolution - the ability to accurately sense the size of something - gets progressively finer.  If you're making air defense radars, it's important to know if you're seeing one aircraft or a squadron flying in tight formation.  Higher frequencies help.

What can we say about systems like the one TI is proposing?  The wavelength at 78 GHz is 3.84 mm, 0.151" long.  The systems will be able to sense features 1/2 to 1/4 of that wavelength in size, and distinguish as separate things that are only about 8/100" apart.  That simply isn't needed to look for nearby cars, pedestrians, or even small animals in the road.  If you're looking for kids on bikes, you don't need to resolve ants on the sidewalk.  On the other hand, these frequency bands are lightly used or unused, containing lots of available room for new systems.  Which they'll need.

The other thing to know about radar is that since it's a radio wave, it travels at the speed of light, like anything in the electromagnetic spectrum including visible light. This means that for ADAS uses, a radar system is going to need to transmit and receive very fast.  The speed of light is roughly 186,000 miles/second; expressed in inches that's 11.8 billion inches/second.  Stated another way, light travels 11.8 inches in one nanosecond.  For our purposes, we can say light travels one foot per nanosecond in air.  Ordinary radars, whether tactical radars or weather radars, are intended to operate over miles; these vehicle systems won't operate over more than 10 or 20 feet, with the exception of something looking forward for the next car, which needs to work over hundreds of yards.  Radar system designers often talk about a "radar mile", the time it takes for a radar signal to go out one mile and bounce back to the receiver.  (A statute radar mile is 10.8 microseconds.)  We don't care about miles, we care about "radar feet". 

A car in the next lane won't be more than 20 feet away, giving some room for uncertainty in the lane position, so it doesn't seem like a system needing to look a lane or two over would care about returns from more than 40 feet away.  In "radar time" that's (40 feet out and 40 feet back) 80 feet at 1 ft/nsec, so the time from transmit to receive is 80 nsec.  A system could put out a pulse, likely corresponding to a few inches, like 0.25 nsec, listen for its return for up to the desired distance, then repeat.  It could repeat this transmission continuously, every 80 nsec (plus whatever little bits of time it takes to switch the system from receive back to transmit), but that would require blazingly fast signal processing to handle continuous processing of 80 nsec receive periods, and I think it doesn't have to.  Things in traffic happen millions of times slower than that, fractions of a second, so it's likely it could pulse many times a second, say every 1/100 second, listen for the 80 nsec, and then process the return.

For looking a quarter mile down the road, 440 yards each way, that becomes listening for 2.64 microseconds. 
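Here's the same arithmetic as a quick sketch, using the exact speed of light rather than the 1 ft/nsec rule of thumb, which is why the numbers land a hair above the round figures above:

```python
C_FT_PER_NS = 0.9836   # speed of light, feet per nanosecond (~1 ft/ns rule of thumb)

def round_trip_ns(one_way_ft):
    """Time for a pulse to reach the target and return."""
    return 2.0 * one_way_ft / C_FT_PER_NS

print("next-lane target at 40 ft:    %5.0f ns" % round_trip_ns(40))     # ~81 ns
print("quarter mile (1320 ft) ahead: %5.0f ns" % round_trip_ns(1320))   # ~2680 ns, about 2.7 usec

# Wavelength at 78 GHz, and the half-wavelength figure quoted earlier
wavelength_in = 2.998e8 / 78e9 * 1000.0 / 25.4
print("wavelength at 78 GHz: %.3f in" % wavelength_in)          # ~0.151 in
print("half wavelength:      %.3f in" % (wavelength_in / 2.0))  # ~0.076 in, the 8/100 inch figure
```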

I'm not a "radar algorithms guy", so I don't have the remotest feel for how much processing would be involved, but allowing 1/100 of a second to complete the processing from one 80 nsec interval, and allowing the same or even a little more time to complete the processing for a 2.64 microsecond interval, doesn't seem bad.

Asking what sorts of power they'd be transmitting starts to involve more assumptions than I feel comfortable making about what antennas they'd use, the antenna patterns, their gain, and far more detail, but some back of the envelope path loss calculations make me think that powers of "10-ish" milliwatts could work.  That shouldn't be a problem for anyone. 
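For the curious, this is the flavor of back-of-the-envelope calculation I mean: the standard radar range equation, with antenna gain, cross section, and range numbers that are purely my assumptions rather than anything from an actual automotive sensor:

```python
import math

# Pr = Pt * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4) -- monostatic radar range equation
Pt = 0.010          # transmit power, W (the "10-ish milliwatts" above)
G = 100.0           # antenna gain, linear (about 20 dBi) -- assumed
lam = 3.0e8 / 78e9  # wavelength at 78 GHz, m
sigma = 10.0        # radar cross section of a car, m^2 -- assumed
R = 12.0            # range, m (roughly 40 ft)

Pr = Pt * G ** 2 * lam ** 2 * sigma / ((4 * math.pi) ** 3 * R ** 4)
print("received power: %.2e W = %.1f dBm" % (Pr, 10 * math.log10(Pr * 1000)))
# Around -64 dBm, comfortably above a realistic receiver noise floor, so 10 mW
# really does look like enough for this kind of short-range sensing.
```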

Chances are you have, or know someone who has, a car with back-up sonar in it: sensors that tell the driver as they get within some "too close" distance to something behind them.  The sensors are typically small round spots on or near the rear bumper that measure the distance to things behind the vehicle by timing the reflections of an ultrasonic signal (I've seen references to 48 kHz) - they're the round black spots on the bumper in this stock photo.

Since the speed of sound is so much lower than the speed of light, the whole description above doesn't apply.  While I don't have experience with ultrasonics, it seems the main thing they give up is the resolution of the radar, which is already finer than we need.  Ultrasonics might have their place in the way autonomous cars can be implemented.
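For a sense of how much more relaxed the timing is with sound, here's a quick comparison; the 6-foot range and the round-number speed of sound are assumptions for illustration:

```python
SPEED_OF_SOUND_FPS = 1125.0   # ft/s in air at roughly room temperature
SPEED_OF_LIGHT_FPNS = 0.9836  # ft/ns

def sonar_round_trip_ms(one_way_ft):
    return 2.0 * one_way_ft / SPEED_OF_SOUND_FPS * 1000.0

def radar_round_trip_ns(one_way_ft):
    return 2.0 * one_way_ft / SPEED_OF_LIGHT_FPNS

# An obstacle 6 feet behind the bumper:
print("ultrasonic echo: %.1f ms" % sonar_round_trip_ms(6))   # ~10.7 ms
print("radar echo:      %.1f ns" % radar_round_trip_ns(6))   # ~12.2 ns
# The sound echo comes back roughly a million times more slowly, which is why a
# cheap microcontroller has no trouble timing a back-up sonar.
```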


Saturday, November 18, 2017

Bubba Doesn't Just Gunsmith

Sometimes Bubba works on electronics.
At least a year ago, this APC Back UPS 1300 died.  The batteries, already a replacement set, wouldn't take a charge anymore. It has sat in the shop, upside down, waiting for me to do something about it. 

During Irma, another UPS started emitting the unmistakable smell of burning electronics.  We shut it down to troubleshoot it after the storm.  I pulled the batteries and did a life test on them with my Computerized Battery Analyzer (CBA-IV).  The batteries were fine.  I put the system back together and it ran for a while, then started smelling like smoke again.  Not good.

So the UPS itself was scavenged for useful parts and the batteries put aside.  Yesterday, I put "2+1" together and put the two good batteries into the old but good UPS.  It seems to work fine and doesn't stink.  There's the small matter of the batteries being too big for the case, but that's kind of a feature.  It gives more back up time than the original.  If I wasn't quite as willing to live with the battery cover duct-taped on, I'd figure out how to make a new one.  I know!  I'll build a 3D printer from scratch to print a cover! 



Friday, November 17, 2017

Autonomous Cars - the Sensor Problem

In May of 2016, a Tesla car under "autopilot" control was involved in an accident that killed the person in the driver's seat.  Inevitably, whenever this accident is mentioned, someone feels the need to show up and say that no one is supposed to mistake autopilot for autonomous control.  If something goes wrong, the driver is responsible, not Tesla.  Nevertheless I find the accident instructive if we want to think about the kinds of problems autonomous cars need to get right all the time. 
In that collision, which occurred at about 4:30 in the afternoon on a clear day, a truck turned left in front of the Tesla, which didn't brake or attempt to slow down.  This is the kind of situation that happens every day to most drivers, right?  It should be a priority to program cars not to kill people in this sort of scenario.  The Tesla's optical sensors didn't detect the white truck against the bright sky, and its radar didn't react to it either.
The Tesla went under the truck, decapitating the driver, then drove off the road onto a field near the intersection. 

It's not hard for a human with vision good enough to get a driver's license to see a truck against the sky background.  As I've said many times before, once a child knows the concept of "truck" and "sky" - age 3? - they're not going to mistake a truck for the sky or vice versa. 
Tesla’s blog post followed by Elon Musk’s tweet give us a few clues as to what Tesla believes the radar saw. Tesla understands that the vision system was blinded (the CMOS image sensor was seeing “the white side of the tractor trailer against a brightly lit sky”). Although the radar shouldn’t have had any problems detecting the trailer, Musk tweeted, “Radar tunes out what looks like an overhead road sign to avoid false braking events.”
The way I interpret that statement is that in an effort to minimize the false or confusing returns the radar sees In Real Life (what radar guys call clutter), which is to say in an effort to simplify their signal processing, the radar antenna was positioned so that its "vision" didn't include the full side of the truck.  It shouldn't be impossible to distinguish a huge truck almost on top of the car from a large street sign farther away by the reflected signal and its timing.  Perhaps they could have worked at refining their signal processing a bit more and left the radar more able to process the return from the truck.  The optical sensors have the rather common problem of being unable to recognize objects.  On the other hand, we've all had the experience of a reflection temporarily blinding us.  Maybe that's the sensor equivalent.  
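Just to illustrate the kind of discrimination I mean, here's a toy sketch.  It is emphatically not Tesla's algorithm; it assumes a radar that reports an elevation angle along with range (which the sensors of that era may not have), and the clearance threshold is a number I made up:

    # Toy illustration only - NOT Tesla's signal processing.  If a radar
    # return comes with a range and an elevation angle, simple geometry
    # gives the height of the reflector above the road.  A strong return
    # 20 meters out and a meter or two off the ground can't be an
    # overhead sign; a return 5+ meters up probably is one.
    import math

    def reflector_height_m(range_m, elevation_deg, radar_height_m=0.5):
        return radar_height_m + range_m * math.sin(math.radians(elevation_deg))

    def looks_like_overhead_sign(range_m, elevation_deg, min_clearance_m=4.5):
        # min_clearance_m is a made-up threshold for "high enough to drive under"
        return reflector_height_m(range_m, elevation_deg) > min_clearance_m

    print(looks_like_overhead_sign(20.0, 3.0))   # trailer side: False - brake-worthy
    print(looks_like_overhead_sign(60.0, 5.0))   # sign gantry:  True - safe to ignore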

A recently created electronics industry website, Innovation Destination Auto, a spinoff of Electronic Design magazine, runs a survey article on automotive radars for the Advanced Driver Assistance System (ADAS) market.  There is a lot of work being done on radar for cars.  Radar systems for cars are nothing new; development has been going on for decades.  What's different this time is the emphasis on sensing the total environment around the car. 

It's all about enabling the car to know everything going on around it, which it absolutely has to do.

Electronic devices such as millimeter-wave automotive radar systems are helping to evolve the automobile into a fully autonomous, self-driving vehicle. The Society of Automotive Engineers (SAE) International has actually defined six levels of driving automation, from level 0, with no automation, to level 5, with full automation and self-driving functionality. Different types of sensors within a car, including millimeter-wave radar transceivers, transmit beams of energy off different objects within their field of view, such as pedestrians or other cars, and detect the reflected returns from the illuminated objects. Sensor outputs are sent to one or more microprocessors to provide information about the driving environment for assistance with driving functions such as steering and braking to prevent collisions and accidents.

Multiple sensors are needed for 360-deg. detection around an ADAS automobile. Often, this involves sensors based on different forms of electromagnetic (EM) energy. Automotive radar sensors typically incorporate multiple transmitters and receivers to measure the range, angle, and velocity of objects in their field of view. Different types of radar systems, even different operating frequencies, have been used in ADAS systems, categorized as ultra-short-range-radar (USRR), short-range-radar (SRR), medium-range-radar (MRR), and long-range-radar (LRR) sensors or systems.
The article is "Sponsored By" Texas Instruments, among the largest semiconductor companies in the world, and links to some radar Systems On A Chip they've developed for the automotive market. 

The different types of radar serve different purposes, such as USRR and SRR sensors for blind-spot-detection (BSD) and lane-change-assist (LCA) functions and longer range radars for autonomous emergency braking (AEB) and adaptive-cruise-control (ACC) systems. USRR and SRR sensors once typically operated within the 24-GHz frequency band, with MRR and LRR sensors in the 77-GHz millimeter-wave frequency range. Now, however, the frequency band from 76 to 81 GHz is typically used, due to the high resolution at those higher frequencies—even for shorter distance detection.
It seems to me that these are going to be fairly simple systems with low power transmitters and receivers.  Even the LRR (long-range radar) shouldn't be too demanding to design.  There are a lot of variables I'm sweeping under the rug here, but a car needs to see a few hundred yards at most, and the demands on those radar transmitters and receivers don't strike me as severe.  
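For what it's worth, the range resolution mentioned in the quote above mostly comes down to how much bandwidth the radar can sweep; the higher carrier frequencies matter because wider sweeps are available up there.  The usual rule of thumb for an FMCW radar is range resolution = c / (2 × bandwidth).  The bandwidth figures below are just representative numbers, not the specs of any particular chip:

    # Back-of-the-envelope FMCW range resolution: delta_R = c / (2 * B).
    # Bandwidths here are representative, not any specific part's spec.
    C_M_S = 299_792_458.0

    def range_resolution_m(bandwidth_hz):
        return C_M_S / (2.0 * bandwidth_hz)

    print(range_resolution_m(250e6))   # ~0.6 m  - a narrow sweep, old 24 GHz style
    print(range_resolution_m(4e9))     # ~0.04 m - a few GHz available near 76-81 GHz

A few centimeters of resolution is still finer than a car really needs to avoid hitting things, which is part of why I don't see these as terribly demanding designs.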

This is just the beginning.  Truly autonomous cars should probably communicate with each other to work out collision avoidance similar to how aircraft do.  It has been proposed.  It should be easier for cars.  Cars can stop.  Aircraft can't.
After the August eclipse, there were reports of horrific traffic jams in several places.  I know I posted about it, as did Karl Denninger and some other people.  What this means is that the road infrastructure is incapable of handling traffic when it goes above some normal range.  I recall hearing that in a metropolitan area, like around Atlanta where there always seems to be trouble, adding lanes to the interstate costs millions per mile.  No sooner are the lanes built than more lanes are needed.  One of the attractions of autonomous cars is that they should be able to drive at higher speeds in denser patterns, getting the effect of more carrying capacity in the highway without adding lanes.  Since they're all communicating with each other, chances of an accident should drop precipitously.  I think that's one reason the governments seem to be pushing for autonomous vehicles. 
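A crude way to see where that extra carrying capacity comes from: a lane's throughput is roughly speed divided by the space each car occupies (following gap plus its own length).  The headway numbers below are made up purely for illustration and ignore merging, mixed traffic, and everything else that matters:

    # Crude lane capacity estimate, illustration only.
    def vehicles_per_hour(speed_mph, headway_s, car_length_ft=15.0):
        speed_fps = speed_mph * 5280.0 / 3600.0           # mph -> feet per second
        spacing_ft = speed_fps * headway_s + car_length_ft
        return 3600.0 * speed_fps / spacing_ft

    print(vehicles_per_hour(70, 2.0))   # human-ish 2 second following: ~1700 cars/hr/lane
    print(vehicles_per_hour(70, 0.5))   # coordinated cars at 0.5 s:    ~5600 cars/hr/lane

Cutting the following distance, which is only safe if the cars really are talking to each other and can brake together, roughly triples a lane's capacity without pouring any new concrete.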

About That Whole GQ Story

There's a lot of buzz over GQ picking Colin Kaepernick as their "Citizen of the Year".
I haven't said anything but I just want to pass on what I think is going on.  The easy one to shoot for is that their political beliefs align with his.  I think that's secondary.  The big reason is that GQ is failing, like most magazines, and it has been years since anyone has said "there's a lot of buzz over GQ" or since GQ has made news at all.  If they ever have. 

There's a quote attributed to PT Barnum that "there's no such thing as bad publicity", and I think they're just dying for people to notice they're still around. 


Thursday, November 16, 2017

Another Star Talks About Harassment

Showbiz legend Kermit tells of what he had to endure to get his break in Hollywood.


This comes in the wake of Bugs Bunny's revelations on Virtual Mirage.


Wednesday, November 15, 2017

Is There a Future Role for Humanoid Robots?

Remember Marilyn Monrobot and founder Heather Knight from back in 2011?  Dr. Knight believed that service robots which interacted with people would need to be humanoid, and if they needed to be humanoid, they would need to be more human and less creepy.  Her phrase was "Devilishly Charming Robots and Charismatic Machines," and she worked on the social aspects: programming robots to interact more like people.  She even did a standup comedy routine with a robot in which the robot adapted its jokes to the audience's reactions.  Robotics researchers talk of something called "the uncanny valley":
Part of her mission is to address the so-called "uncanny valley" -- a moniker used by roboticists to describe the phenomenon wherein humanoid robots give the creeps to real humans (which most of you probably are).
Robots, of course, have been moving into industry since just about forever, and I think no one ever uses the terms charming or charismatic for industrial robots.  Utilitarian at best.  And more robots are coming.  According to the Boston Consulting Group, by 2025 robots will perform 25% of all labor tasks.  Robots are becoming better, more capable, and cheaper.  The four industries leading the charge are computer and electronic products; electrical equipment and appliances; transportation equipment; and machinery.  They will account for 75% of all robotic installations by 2025.

Machine Design presents this breakdown of the market:
In a recent report from Berg Insight, the service robot base is expected to install 264.3 million units by 2026. In 2016, 29.6 million service robots were installed worldwide. The robots in the service industry broke down into the following groups:
  • Floor cleaning robots accounted for 80% of total service robots, with 23.8 million units
  • Unmanned aerial vehicles accounted for 4 million units
  • Automated lawnmower units tallied 1.6 million units
  • Automated guided vehicles installed 0.1 million units
  • Milking robotic units tallied to 0.05 million units
The remaining segments included humanoid robots (including assistant/companion robots), telepresence robots, powered human exoskeletons, surgical robots, and autonomous mobile robots. Combined, they were estimated to have had less than 50,000 units installed.

Humanoid robots, while being one of the smallest groups of service robots in the current market, have the greatest potential to become the industrial tool of the future. Companies like Softbank Robotics have created human-looking robots to be used as medical assistants and teaching aids. Currently, humanoid robots are excelling in the medical industry, especially as companion robots.  [Wait ... "milking robotic units"?... Robotic milking machines? ... Sometimes I wish I could draw cartoons - SiG]
One might ask why?  Why should humanoid robots take over in so much of the world?  In industrial design, it's often the case that "design for test" or "design for manufacturability" means spaces are left around connectors so people can fit their hands in there.  Entire "human engineering" (ergonomics) specifications exist with typical hand sizes, typical arm lengths, and so on, so that the product can be worked on by humans.  We're not talking about how close the keys on a keypad are; that's for the users.  We're talking about how close the hardware is to other features inside the box, where users don't go.
Softbank Corp. President Masayoshi Son, right, and Pepper, a newly developed robot, wave together during a press event in Urayasu, near Tokyo, Thursday, June 5, 2014. (AP / Kyodo News)

Softbank Robotics' sorta-humanoid robot Pepper looks like something Dr. Knight would do (or research).  Pepper is far enough from looking human to avoid being creepy. 

If humans aren't going to work on the product, why design it around a human assembler?  Why not design the thing for the optimum size and internal functions and design a special robot to assemble it?  If the robot is going to do the work, it doesn't have to have human-sized hands or look human.  Witness the da Vinci surgical robots, which certainly aren't humanoid. 

On the other hand, if the human and the robot are going to be working side by side, that's about the only reason to have the robot proportioned like a human.  Machine Design references Airbus, saying they want to hand off some tasks that are currently done by humans to robots.
By using humanoid robots on aircraft assembly lines, Airbus looks to relieve human operators of some of the more laborious and dangerous tasks. The human employees could then concentrate on higher value tasks. The primary difficulty is the confined spaces these robots have to work in and being able to move without colliding with the surrounding objects.
A potential exception to that is the often-talked-about use of humanoid robots as helpers for people with reduced mobility or other issues.  I don't think I care if the robot that picks me up out of bed 30 years from now looks particularly human, as long as it doesn't drop me.  On the other hand, there seems to be evidence that robots that look more human and are capable of mimicking emotions can be useful with some patients. 
University of Southern California Professor Maja Matarić has been pairing robots with patients since 2014. Her robots helped children with autism copy the motions of socially assistive robots and, in 2015, the robots assisted stroke recovery victims with upper extremity exercises. The patients were more responsive to the exercises when prompted and motivated by the robot.
While the number of humanoid robots needed will very likely be small, there's little doubt that the future is very bright for robot makers and the people who will program them.

A prototype assembler robot for Airbus.  The ability to climb a ladder like that is important to them. 

Tuesday, November 14, 2017

Three Days from Done?

Or done now?  My Breedlove is now presentable in polite company.  After determining that the cured polymer coating on it is insoluble and not going to be damaged by anything I can put on it, I bought a can of high gloss Minwax Polyurethane spray and did four coats today.


The wood itself is gorgeous.  It's quilted maple, stained with a mix of water-soluble TransTint dyes, blended up on a practice piece to see how I liked the color.  Quilted maple reflects light differently with every move, and the pattern you see is only there with the light from that angle.  Despite looking wavy and almost bubbly, it's flat and smooth.  The wood was gifted to me by reader Raven, who offered it in reply to my late June post about putting a clear plastic side on this guitar.  Couldn't have done it without your help!

All in all, the finish looks pretty good, but not as "deep" or glossy as the factory finish.  The spray can instructions say to spray a light coat every two hours, and after fussing over a detail I didn't like, I didn't get started until close to 10 AM.  Two hours after the third coat, I lightly sanded with 500 grit, cleaned with mineral spirits, and shot a fourth coat.

My experience with the finish compatibility test over the weekend says this won't reach maximum hardness until late tomorrow at the earliest, in line with the can's warning not to use the item for 24 hours.  "Three days" comes from the other instruction on the label:
Recoat within 2 hours. If unable to do so, wait a minimum of 72 hours, then lightly sand and recoat.
That says I could add more finish on top of what I have on Saturday.  My tentative plan is to try buffing the guitar with a mild polish: not rubbing compound, but something beyond pure wax.  Tool Junkie Heaven for guitar techs offers electric buffers or foam polishing pads.  The pros use something like their buffing systems:
If the polish doesn't help, maybe I'll add three or four more coats on Saturday.

I can't do anything to it for now, so in the meantime, it's on to other projects.