Monday, May 02, 2016

Smart Guns and the Law


Last Friday, President Obama announced a series of actions aimed at making smart guns a reality, rather than a lab curiosity that has never gotten beyond the demonstration stage.  A smart gun is one that in principle can be used only by its authorized owner.  If we had a magic smart-gun-making wand that we could wave and thereby grant the beneficences of intelligence and the moral judgment of St. Thomas Aquinas to every gun in the U. S., well, I suppose we would no longer have to worry about any gun being wrongly used ever again.  But that would require that guns have more smarts and judgment than the owners, and nobody's expecting the technology to go that far.  Even if the technology worked perfectly, it's easy to see that smart guns would eliminate only a fraction of the accidental and intentional shootings that gun regulations are intended to reduce, because no gun can tell whether its owner is using it for good or bad purposes.  And you can rest assured that if the only kinds of guns available were smart guns, that's the kind that criminals would use. 

Admittedly, accidental shootings such as the ones involving small children are the most tragic and unnecessary ones.  And almost any kind of smart-gun technology would go far to prevent gun accidents involving children who gain access to guns.  But this kind of accident is a small proportion of the annual gun-fatality roll in the United States, making up less than 5% of the 12,000 or so gun-related deaths in 2014. 

The President has stopped short of measures that would put the purchasing power of the federal government in play.  Without any enabling legislation, for example, he could have mandated that all future gun purchases by the U. S. government would be smart guns only.  He probably realized that such a mandate would seriously handicap the FBI and other federal domestic law-enforcement personnel, because right now there is no generally available smart-gun technology; basically, nobody wants to buy one.

Anytime U. S. gun laws are discussed, the National Rifle Association has to be considered.  The NRA's official position is that they do not oppose smart-gun technology per se, but do not want it mandated by legal fiat.  Instead, the NRA prefers to let market forces lead the technological development.  This is a little bit like saying, "Let the market decide how many Ferraris we should make with speed-control governors keeping them from exceeding a speed of 60 miles an hour (100 km/hr)."  The whole point of buying a Ferrari is to be able to go fast, and the NRA knows very well that if the matter is left to the market, the market will go on rejecting the idea of smart guns, as it has for the last twenty-five years or more. 

There are two main reasons that smart guns and smart-gun laws have not proved popular:  one pertaining to the technology itself, and the other having to do with the legislators who would have to make the smart-gun laws.

The technological reason is that none of the dozen or more different approaches to making smart guns seems to work very well.  Some of them use biometric sensors—these are not yet advanced enough to be used for routine computer-ID purposes.  And a law-enforcement officer wants a gun that's at least as reliable as getting money out of an ATM.  Others depend on the user wearing some kind of wireless ID bracelet or RFID chip.  Well, gosh, what if you leave it at home with your other pair of trousers?  Or what if the crooks figure out a way to jam the RFID chip (that's not hard, incidentally)?  And so on.  Every single smart-gun technology idea has some potential for failure, which adds to the chances that a gun won't be usable when it's most needed.  To most potential gun purchasers, the incremental value of knowing that unauthorized users can't fire the gun is not worth the complications of carrying around an RFID bracelet or hoping that your gun will recognize you despite your recent haircut, or whatever means it uses.

The second reason that most gun owners (and in reality, the NRA) detest the idea of smart-gun legislation is pointed out ably by Jon Stokes, a blogger at TechCrunch.com.  It turns out that the legislators who are most enthusiastic about gun regulation tend to know the least about guns.  He cites the example of the 1994 Federal legislation banning "assault weapons."  Now in order to ban something, you have to have at least a vague idea of what it is you're banning.  So the law had a kind of laundry list of features that made a gun an assault weapon, including such things as a vertical foregrip.  This is a kind of stick-like doohickey that extends down from the middle or so of the barrel and gives you something to do with your non-trigger hand.  The presence of that one little optional feature made the gun an assault weapon, and ipso facto illegal.  The 1994 law expired in 2004, but Stokes points out that any smart-gun law will face the same problem:  what makes a gun smart?  What design features specifically qualify it to be a smart gun?  And inevitably, the lawmakers will be forced into the nitty-gritty of gun design, for which activity they are dubiously qualified at best.  

Guns have a special place in the American psyche.  Here in Texas, they are part of the culture to a degree that is unimaginable in San Francisco or Boston, and while I do not personally have any truck with guns, I have several friends who do own and use them responsibly.  Maybe the fact that President Obama is directing more federal R&D funds to the problem will uncover a single technology that will make smart guns as easy and reliable to use as the "safety" that keeps a gun from going off when set that way by the user, and which has been a standard feature of many firearms since at least 1911.  And maybe state or federal legislators will educate themselves enough on how guns really work and are used to pick the best smart-gun technology to require gunmakers to install.  But right now, I'm not seeing a lot of speed-controlled Ferraris on the road, and I would not risk a bet on smart-gun legislation getting very far any time soon.

Sources:  The New York Times and many other news outlets covered President Obama's announcement on Apr. 29 concerning smart guns at http://www.nytimes.com/2016/04/30/us/politics/obama-puts-his-weight-behind-smart-gun-technology.html.  I also referred to an article on Fox News at http://www.foxnews.com/politics/2016/04/28/obama-set-to-push-for-smart-gun-tech-despite-concerns.html.  The White House website carried a statement coordinated with the announcement at https://www.whitehouse.gov/blog/2016/04/29/update-what-were-doing-keep-guns-out-wrong-hands.  Jon Stokes' piece "Why the NRA hates smart guns" is on TechCrunch at http://techcrunch.com/2016/04/30/why-the-nra-hates-smart-guns/.  I also referred to the Wikipedia articles on smart guns and "Safety (firearms)."  The gun-fatality statistic is from http://www.gunviolencearchive.org/tolls/2014.

Monday, April 25, 2016

The Pemex Vinyl Chloride Plant Explosion


Unless you work in the petrochemical industry, you have probably never been near the substance called vinyl chloride.  It is a chlorinated hydrocarbon that is made when one of the four hydrogen atoms in the compound called ethylene is replaced by a chlorine atom.  On the other hand, unless you live in a house whose plumbing is all more than forty or so years old, you probably use products made with vinyl chloride every day.  Polyvinyl chloride (PVC) pipes are used in the plumbing of nearly all new residential and business construction, and about 40 million metric tons (units of 1,000 kg) of PVC plastic were made in 2013.  But all PVC pipes were once the toxic, flammable compound called vinyl chloride (a gas at room temperature, normally handled as a liquid under pressure), and that is what may have got loose at the Pemex Clorados 3 plant in the Gulf Coast city of Coatzacoalcos, Mexico last Wednesday, Apr. 20.  The resulting explosion and fire killed at least 28 people and injured over a hundred, with more still missing as of today.
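For readers who like to see the chemistry spelled out, the substitution described above is usually carried out industrially in two steps (this is a general-chemistry detail, not something reported about this particular plant): ethylene is first chlorinated to 1,2-dichloroethane, which is then cracked at high temperature into vinyl chloride and hydrogen chloride:

\[
\mathrm{C_2H_4 + Cl_2 \longrightarrow C_2H_4Cl_2}
\qquad\qquad
\mathrm{C_2H_4Cl_2 \;\longrightarrow\; C_2H_3Cl + HCl}
\]

The net effect is just what the paragraph above says: one of ethylene's four hydrogen atoms ends up replaced by a chlorine atom, with hydrogen chloride as a byproduct.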

Besides the immediate human tragedy, this accident raises important questions about the safety record of the state-owned petroleum company Pemex.

At this writing, little is known about the cause of the blast.  Coatzacoalcos is a town at the very southernmost tip of the Gulf of Mexico, in the Mexican state of Veracruz between central Mexico and the Yucatan Peninsula.  It is one of the main export terminals for Mexican oil and is a logical location for a vinyl-chloride plant, since its manufacture requires large quantities of the petrochemical ethylene.  The Clorados 3 plant is a joint venture between Pemex and a PVC-pipe manufacturer called Mexichem. 

As with many petrochemicals, vinyl chloride is hazardous in several ways.  If released into the air, it evaporates into a dense vapor and can catch fire if a source of ignition such as an automobile engine is nearby.  Worse yet, the products of combustion are themselves hazardous:  hydrogen chloride (which when dissolved in water makes hydrochloric acid), and phosgene, which was used as a poison gas in World War I.  Besides the danger of explosion and fire, vinyl chloride is extremely toxic, and causes liver damage in animals at concentrations in air as low as 500 parts per million.  Higher concentrations cause acute illness and even death.  Because of these hazards, vinyl chloride is usually stored in double-walled containers under pressure, with leak monitors that detect low levels of leakage from the inner container before the outer wall is breached. 

It may take months before we can learn exactly what happened at Coatzacoalcos, but it is obvious that a large amount of something flammable got loose.  Some reports mention a strong odor of ammonia, which could be from refrigeration machinery used in process cooling operations in the plant.  Whether or not vinyl chloride itself was released, the high death toll says several things about this accident.

First, one can ask why there were so many people in a hazardous area.  The trend in modern petrochemical operations is to reduce staffing to the point that in emergencies or during strikes, an entire plant can be operated safely from one central control room.  Although this is speculation, it is possible that Pemex, being owned by the Mexican government, has adopted a different policy and relies more on hands-on operators in its plants as a way of increasing government-paid employment.  Whatever the reason, Pemex's safety record is not good.  News reports of this accident relate that in 2012, 26 people were killed in a natural-gas facility owned by Pemex and in 2013, an explosion in Pemex's Mexico City facilities killed 37 people. 

Next, what kind of safety culture does Pemex have?  To run a complex petrochemical plant without accidents is a monumental task, and many safety priorities are expensive, in the sense that they take resources which otherwise could be used to enlarge the firm's bottom line.  With the recent crash in oil prices, there are reports that Pemex is cutting expenses, and this latest accident raises the question of whether safety has been sacrificed to budget considerations.

Finally, there is Pemex's status as a state-owned enterprise.  I am not familiar with Mexican law, but it is quite possible that it is either statutorily or practically difficult to sue Pemex.  Also, Pemex may be self-insured rather than purchasing hazard insurance on the open market.  Both of these factors, if true, remove two of the greatest incentives private firms have to run their operations safely:  fear of lawsuits from injured parties and financial pressure from private insurers to run a safe and low-claims operation.  Without such incentives, Pemex management has only its own integrity to rely on for worker safety, and the demands for sustaining profits in the face of falling oil prices may have overwhelmed safety concerns. 

I hope that the investigative bodies in Mexico have all the competence and authority they need, not only to get to the bottom of this tragedy, but to publicize its causes and assign responsibility wherever it needs to be assigned.  Again, the status of Pemex as a state-owned firm may lead to conflicts of interest between state officials who want to make workplaces safer, and other officials who do not want to see a state-owned enterprise called to account.  The loser in such a conflict will be the workers who have the choice of being paid to put their lives on the line in a hazardous workplace, or to go somewhere else and earn even less than the $12,000 US annual salary that was the average in 2005 for Mexican chemical engineers. 

If reports surface in English as to the cause of this accident, it will be interesting to learn whether poor safety practices contributed to it.  In the meantime, my sympathy goes to all of those who lost loved ones or were injured.  And I hope this latest incident leads to a re-evaluation of the entire safety culture of Pemex, which looks like it could use a lot of work.

Sources:  I referred to a Reuters report on the accident at http://www.reuters.com/article/us-mexico-pemex-idUSKCN0XH2N2, an ABC News report at http://abcnews.go.com/International/wireStory/death-toll-28-mexico-petrochemical-plant-explosion-38614458, a Fox News item at http://www.foxnews.com/world/2016/04/21/blast-at-mexico-petrochemical-plant-kills-3-injures-more-than-100.html, and a CNN report at http://www.cnn.com/2016/04/23/americas/mexico-pemex-petrochemical-blast/.  I also referred to statistics on PVC production at http://www.plasticstoday.com/study-global-pvc-demand-grow-32-annually-through-2021/196257501821043, a salary survey for Mexico at http://www.worldsalaries.org/mexico.shtml, and the Wikipedia articles on vinyl chloride and ethylene. 

Monday, April 18, 2016

Should We Mind Minecraft?


If you've been around teenagers at all in the last few years, or if you are one yourself, you've probably run across someone who plays Minecraft, the computer game invented in Sweden in 2009.  I first encountered it a few years ago when we were visiting my 13-year-old nephew in Kansas.  I sat behind him in his father's car and watched over his shoulder as he constructed some kind of structure with what to me looked like amazing speed and skill.  He showed me some of the elaborate buildings he'd made with it and explained how he played the game with friends who could send wild animal-like creatures his way.  It all sounded rather weird, but at the same time I was fascinated by the basic premise of the game:  unless you build it, it isn't there.

In this week's New York Times Magazine, Clive Thompson, author of the book Smarter Than You Think:  How Technology is Changing Our Minds For the Better, describes the origin, popularity, and multifaceted nature of Minecraft.  It appeals to both sexes and a wide range of ages, and in contrast to many slash-and-burn first-person-shooter-type games, parental attitudes toward it range mostly from the neutral to the favorable. 

Some people even say that playing Minecraft teaches kids useful skills, ranging from programming and logic design to three-dimensional visualization and the ability to deal with computer-aided design programs.  I suppose some education-psychology wonks will sooner or later divide a group of kids into Minecraft players and non-Minecraft players, and do a bunch of tests on them to see whether any of this is true.  Whatever the results are, I'm willing to go with the idea that Minecraft appeals to the creative part of one's personality, rather than the destructive part.  Although there can be plenty of destruction in Minecraft too—I've seen my nephew wipe out whole virtual city blocks and start over when things didn't go the way he wanted.

All the same, there's something about Minecraft that reminds me of an analogous trend from my own teenage years:  the golden age of electronics tinkering in the 1960s.  Transistors had just begun to replace the bulky, inefficient, and sometimes dangerous vacuum tubes, and for a few dollars spent at Radio Shack you could purchase hours of pleasant fiddling with amplifiers, oscillators, and logic circuits.  And I did. 

Thompson points out that one feature of Minecraft—"redstone"—acts basically like electric current, and you can build switches, relays, and highly complex logic circuits, all without ever having cracked a book on Boolean algebra.  He cites the case of Natalie, a fifth-grade girl, whom he observes busily debugging her logic circuit when it fails to do exactly what she wants. 
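To make the redstone-as-logic point concrete, here is a minimal sketch (in Python, purely for illustration; it says nothing about how Minecraft itself is implemented) of the Boolean building blocks a player ends up assembling.  In the game, a redstone torch behaves roughly like an inverter and joining two powered wires behaves roughly like an OR, and everything else can be built from those, which is what a player like Natalie discovers by trial and error:

# Illustrative only: a redstone torch acts roughly like a NOT gate,
# and merging two powered wires acts roughly like an OR gate.
def NOT(a):
    return not a

def OR(a, b):
    return a or b

# Everything else can be composed from NOT and OR,
# which is De Morgan's law in disguise.
def NOR(a, b):
    return NOT(OR(a, b))

def AND(a, b):
    return NOT(OR(NOT(a), NOT(b)))

# A player debugging a door circuit is effectively checking a truth table:
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", AND(a, b))

A kid who gets a circuit like that working has learned real Boolean algebra, whether or not anyone has told her the name for it.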

This is good in some ways and not good in other ways, as I can explain from personal experience.

The brain is never again as plastic as it is in childhood and the teenage years.  Things you learn when you're 16 or younger are going to stay with you in a powerful way the rest of your life.  Depending on what you learn and how you learn it, this can be an unalloyed asset, a mixed asset and liability, or a liability.  With me, tinkering with electronics when I was young has turned out to have mixed results, although the balance sheet turned out to be positive.

Yes, I taught myself to do some pretty impressive things, like building a taped-program robot that could pick up things off the carpet of my room.  I also learned to use old junk as my supply depot instead of earning money to buy new stuff.  And as a kind of lone wolf of the electronics world, I grew up with no connection between what I was interested in and what the rest of the world happened to want.  All that early tinkering gave my professional career in electronics a firm technical foundation.  But I have also been plagued by what I recognize now is a bad habit of scrimping and making do with old junk around the lab, rather than asking for project money up front to do the job properly with state-of-the-art equipment.  And I have always had trouble making my own interests conform to what anybody else is interested in, which makes for problems when you try to get outside funding.

Yes, kids who devise what amount to combinational logic circuits when they are ten years old will probably be able to do that pretty well in college, too.  "So that's what it's called!" they may say in their first digital-logic class, and go on to become brilliant computer scientists and designers.  On the other hand, when you reinvent the wheel on your own, you're not likely to approach the subject in the way that subsequent experience has shown to be most efficient.  People who teach themselves coding often write what college-trained programmers call "spaghetti code"—so tangled and needlessly complicated that nobody else can figure out what's going on, not even the person who wrote it, at least after a while.  So while learning system administration and coding and logic design when you're ten can be cool, you can also acquire some deeply ingrained habits that may turn out to be liabilities in the long run.

Alexander Woollcott, a radio personality of the 1940s, told the story of how the comedian Harpo Marx, after he became famous for his self-taught Broadway performances on the harp with his brothers' comedy team, decided one day he could finally afford harp lessons.  So Harpo found a professional harpist willing to teach him at ten dollars a half hour.  As Woollcott put it, ". . . the Maestro, having heard him play, swore there would be no way of his unlearning all the shockingly wrong things he knew about the harp."  Then the Maestro got Harpo to show him how Harpo did some things with the harp that the Maestro thought were not possible.  At the end of the half hour, Harpo paid his ten bucks, but as he'd been doing all the teaching, he never went back.

Not everybody who plays Minecraft is going to wind up as the Harpo of their techie generation.  And some of them may learn habits that will cause future teachers some distress, as the Maestro felt when he watched Harpo play.  But it's nice that at least one computer game out there invites you to get under the hood of the often opaque computer systems we live with so much and actually make something you can understand more or less completely, because you built it.  And if it breaks, you can try to fix it instead of just cussing the anonymous developers who should know better than to ship defective software. 

The inventor of Minecraft, Markus Persson, sold it to Microsoft for $2.5 billion in 2014 and washed his hands of the whole business after discovering that fielding thousands of inquiries from the millions of Minecraft fans wore him out.  But the thing he invented lives on, and I hope its career in the future will be as benign and instructional as it has been so far. 

Sources:  The article "The Minecraft Generation" by Clive Thompson appeared in the online New York Times Magazine on Apr. 17, 2016 at http://www.nytimes.com/2016/04/17/magazine/the-minecraft-generation.html.  The story (possibly apocryphal) of Harpo's harp lessons appeared in the March 1926 issue of Vanity Fair magazine, the text of which is accessible at http://www.vanityfair.com/news/1926/03/harpo-marx-theater-music.  And at last report, my nephew was running a YouTube channel with a microphone we bought him for Christmas, giving advice to other Minecraft players online.

Monday, April 11, 2016

Will Robots Ever Have Moral Authority?


Robots build cars, clean carpets, and answer phones, but would you trust one to decide how you should be treated in a rest home or a hospital?  That's one of the questions raised recently by a thoughtful article in the online business news journal Quartz.  Journalist Olivia Goldhill interviewed ethicists and computer scientists who are thinking about and working on plans to enable computers and robots to make moral decisions.  To some people, this smacks of robots taking over the world.  Before you get out the torches and pitchforks, however, let me summarize what the researchers are trying to do.

Some of the projects are nothing more than a type of expert system, a decision-making aid that has already found wide usefulness in professions such as medicine, engineering, and law.  For example, the subject of international law can be mind-numbingly complicated.  Researchers at the Georgia Institute of Technology are trying to develop machines that will ensure compliance with international law by programming in all the relevant codes (in the law sense) so that the coding (in the computer-science sense) will lead to decisions or outcomes that automatically comply with the pertinent statutes.  This amounts to a sort of robotic legal assistant with flawless recall, but one that doesn't make final decisions on its own.  That would be left to a human lawyer, presumably.
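Without knowing the details of the Georgia Tech work, the general expert-system idea can be sketched in a few lines of Python.  Everything below (the rule names, the conditions, the sample situation) is invented for illustration; the point is only that the legal "codes" become data that a program checks mechanically, leaving the final call to a human:

# A toy expert system: rules are data, and the program checks them mechanically.
# All rule names and conditions here are made up for illustration.
RULES = [
    ("no-targeting-of-civilians",  lambda s: not s["target_is_civilian"]),
    ("proportionality",            lambda s: s["expected_harm"] <= s["military_value"]),
    ("surrender-must-be-accepted", lambda s: not s["opponent_has_surrendered"]),
]

def check_compliance(situation):
    """Return the list of rules the proposed action would violate."""
    return [name for name, ok in RULES if not ok(situation)]

violations = check_compliance({
    "target_is_civilian": False,
    "expected_harm": 3,
    "military_value": 5,
    "opponent_has_surrendered": True,
})
print(violations)   # ['surrender-must-be-accepted']

A program like that has flawless recall of its rule book, but it has no idea why the rules exist, which is one reason the final decision stays with the human lawyer.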

Things are a little different with a project that philosopher Susan Anderson and her computer-scientist husband Michael Anderson are working on:  a program that advises healthcare workers caring for elderly patients.  Instead of programming in explicit moral rules, they teach the machine by example.  The researchers take a few problem cases and let the machine know what they would do, and after that the machine can deal with similar problems.  So far it's all a hypothetical academic exercise, but in Japan, where one out of every five residents is over 65, robotic eldercare is a booming business.  It's just a matter of time until someone installs a moral-decision program like the one the Andersons are developing in a robot that may be left on its own with an old geezer, such as the writer of this blog, for example.
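The Andersons' actual system is surely more sophisticated than this, but the teach-by-example idea can be caricatured with a toy nearest-neighbor learner.  The features, cases, and labels below are entirely hypothetical; the point is that no explicit moral rule appears anywhere in the code, only labeled examples:

# Toy learn-from-examples adviser.  Each case is a feature vector:
# (how strongly the patient objects, medical risk of skipping the dose,
#  the patient's capacity to decide), each rated 0-10, with the
# ethicist's recommended action as the label.  All values are invented.
TRAINING_CASES = [
    ((8, 2, 9), "respect refusal"),
    ((7, 3, 8), "respect refusal"),
    ((2, 9, 4), "notify doctor"),
    ((3, 8, 3), "notify doctor"),
    ((5, 5, 6), "remind patient later"),
]

def advise(case):
    """Recommend the action of the most similar training case (1-nearest neighbor)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, action = min((distance(case, feats), act) for feats, act in TRAINING_CASES)
    return action

print(advise((7, 2, 8)))   # closest to the "respect refusal" examples

Whatever wisdom comes out of such a program is simply the wisdom of the humans who labeled the training cases, filtered through a similarity measure that other humans chose.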

What the Quartz article didn't address directly is the question of moral authority.  And here is where we can find some matters for genuine concern.

Many of the researchers working on aspects of robot morality evinced frustration that human morality is not, and may never be, reducible to the kind of algorithms that computers can execute.  Everybody who has thought about the question realizes that morality isn't as simple and straightforward as playing tick-tack-toe.  Even the most respected human moral reasoners will often disagree about the best decision in a given ethical situation.  But this isn't the fundamental problem in implementing moral reasoning in robots.

Even if we could come up with robots who could write brilliant Supreme Court decisions, there would be a basic problem with putting black robes on a robot and seating it on the bench.  As most people will still agree, there is a fundamental difference in kind between humans and robots.  To avoid getting into deep philosophical waters at this point, I will simply say that it's a question of authority.  Authority, in the sense I'm using it, can only vest in human beings.  So while robots and computers might be excellent moral advisers to humans, by the nature of the case it must be humans who will always have moral authority and who make moral decisions. 

If someone installs a moral-reasoning robot in a rest home and lets it loose with the patients, you might claim that the robot has authority in the situation.  But if you start thinking like a civil trial lawyer and ask who is ultimately responsible for the actions of the robot, you will realize that if anything goes seriously wrong, the cops aren't going to haul the robot off to jail.  No, they will come after the robot's operators and owners and programmers—the human beings, in other words, who installed the robot as their tool, but who are still morally responsible for its actions. 

People can try to abdicate moral responsibility to machines, but that doesn't make them any less responsible.  For example, take the practice of using computerized credit-rating systems in making consumer loans.  My father was a loan officer at a bank in the 1960s before such credit-rating systems came into widespread use.  He used references, such bank records as he had access to, and his own gut feelings about a potential customer to decide whether to make a loan.  Today, most loan officers have to take a customer's computer-generated numerical credit rating into account, and the job of making a loan sometimes amounts to a complicated algorithm that could almost be executed by a computer. 
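To see how far the loan decision can be pushed toward pure mechanism, here is a deliberately crude sketch.  The cutoff numbers are invented; the point is that a program like this can issue a verdict, but it cannot shoulder the responsibility for it:

# A deliberately crude loan-decision "algorithm" with made-up thresholds.
def loan_decision(credit_score, debt_to_income, years_employed):
    if credit_score < 620:
        return "deny"
    if debt_to_income > 0.43:
        return "deny"
    if credit_score >= 740 and years_employed >= 2:
        return "approve"
    return "refer to a human loan officer"

print(loan_decision(credit_score=750, debt_to_income=0.30, years_employed=5))  # approve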

But automation did not stop the banking industry from running over a cliff during the housing crash of 2007.  Nobody blamed computers alone for that debacle—it was the people who believed in their computer forecasts and complex computerized financial instruments who led the charge, and who bear the responsibility.  The point is that computers and their outputs are only tools.  Turning one's entire decision-making process over to a machine does not mean that the machine has moral authority.  It means that you and the machine's makers now share whatever moral authority remains in the situation, which may not be much.

I say not much may remain of moral authority, because moral authority can be destroyed.  When Adolf Hitler came to power, he supplanted the established German judicial system of courts with special "political courts" that were empowered to countermand verdicts of the regular judges.  While the political courts had power up to and including issuing death sentences, history has shown that they had little or no moral authority, because they were corrupt accessories to Hitler's debauched regime.

As Anglican priest Victor Austin shows in his book Up With Authority, authority inheres only in persons.  While we may speak colloquially about the authority of the law or the authority of a book, it is a live lawyer or expert who actually makes moral decisions where moral authority is called for.  Patrick Lin, one of the ethics authorities cited in the Quartz article, realizes this and says that robot ethics is really just an exercise in looking at our own ethical attitudes in the mirror of robotics, so to speak.  And in saying this, he shows that the dream of relieving ourselves of ethical responsibility by handing over difficult ethical decisions to robots is just that—a dream. 

Sources:  The Quartz article "Can We Trust Robots To Make Moral Decisions?" by Olivia Goldhill appeared on Apr. 3, 2016 at http://qz.com/653575/can-we-trust-robots-to-make-moral-decisions/.  (I thank my wife for pointing it out to me.)  The statistic about the number of aged people in Japan is from http://www.techinsider.io/japan-developing-carebots-for-elderly-care-2015-11, and my information about Hitler's political courts appears on the website of the Holocaust Memorial Museum at https://www.ushmm.org/wlc/en/article.php?ModuleId=10005467.  Victor Lee Austin's Up With Authority was published in 2010 by T&T Clark International.

Monday, April 04, 2016

Learning from the Kolkata Overpass Collapse


On Thursday Mar. 31, around noon, the busy Rabindra Sarani-KK Tagore Street crossing in the city of Kolkata, India (population 4.5 million) was crowded with shoppers and people having lunch in open-air eateries.  Crowds that a Westerner would consider to be a mob scene are routine in the Indian subcontinent, and the density of street-level shops makes many thoroughfares almost impassable by automobile.  To alleviate this congestion, in 2008 the Hyderabad-based construction conglomerate IVRCL won a bid to construct an overpass that would carry vehicular traffic above the existing street.  Construction began in 2009 and was due for completion in 2012.  But the firm ran into financial and land-acquisition difficulties, with consequent project delays, and so last week one of the last parts of the projected 2+ kilometer-long overpass was still under construction above the street.

By Wednesday, Mar. 30, a long straight section of the overpass was complete, and concrete was poured that night for a section next to a turn at the crossing, where steel girders already were suspended above the road.  At about 12:25 PM Thursday, some 300 feet (100 meters) of the overpass collapsed onto the street below.  As of Apr. 2, the death toll stood at 27, but more were missing and over 100 people were injured. 

While the cause of the collapse is under investigation, the IVRCL firm has been charged with culpable homicide and three members of the firm have been arrested.  This is after one firm representative termed the collapse "an act of God."

The construction phase of any large civil-engineering project is fraught with hazards that only good planning and expert supervision at all times can avoid.  As a civil-engineering professor interviewed about the tragedy pointed out, right after concrete is poured for a structure, the weight of the fresh material must be supported by temporary scaffolding until the concrete sets.  In contrast to the finished product, which office-based engineers can design at their leisure to withstand known stresses, temporary scaffolding is erected onsite in an ad-hoc way, and may have hidden defects that would require more engineering knowledge to avoid than the onsite construction workers and supervisors have.  It was apparently one such defect that led to the disaster in Kolkata last week.

From videos shot during the collapse, it appears that few if any pedestrian or vehicle barriers were in place to keep people away from the construction site.  Admittedly, this would have been difficult, like temporarily shutting down Times Square in New York City for construction.  And businesses on the street undoubtedly would have complained if large sections of the surface street had been blocked off, impairing access to some shops.  But events have proved that the tradeoff would have been worth it, if excluding traffic from under the most hazardous parts of the overpass during construction would have saved lives.

While some commenters on Indian news sites complained that such things are never allowed to happen in the so-called First World, only a year ago I reported in this space about a similar but smaller-scale accident involving overpass construction, right here in Texas.  While a prefabricated-concrete-beam overpass was being built over the busy I-35 freeway near Salado, Texas, a truck carrying an overheight load struck one of the beams before it had been firmly fixed in place.  It shifted and knocked down several other beams, one of which killed the driver of a pickup truck.  Again, this accident could have been prevented by diverting traffic from underneath the overpass, but the result would have been permanent miles-long backups on I-35 that might have provoked angry citizens to mount a protest march at the Texas Department of Transportation. 

Any complex engineering project is a series of compromises with safety, expenses, schedules, personnel, and other resources all in the mix.  In the West, a relative abundance of resources has led engineering organizations to err on the side of more money traded for more safety.  In India, as the country's record of fatal building and construction collapses attests, getting the project done cheaply sometimes takes priority over getting it done safely.  India is a democracy, and it may be that the current level of construction safety reflects an increased urgency to meet the nation's civil-engineering needs faster and with fewer resources than Western-style engineering would allow.  It is bad enough when a privately-owned building collapses.  But a public-works project such as an overpass inherently affects more people, and carries more potential for harm.  This is why most public-works project specifications require licensed professional engineers to supervise the design phase.  But the best designers in the world will be unable to prevent onsite accidents if the people who actually do the construction are not capable of understanding the hazards and engineering challenges involved.

At least three members of the IVRCL firm have been arrested in connection with the tragedy and charged with culpable homicide.  The degree to which they are responsible is now going to be determined by the legal process, which may take months or years.  Regardless of the fate of the engineers and managers involved in this accident, to prevent future tragedies like this a sea change will have to take place in the entire construction industry in India. 

I have mentioned before a simple safety code that was once emblazoned on bronze plaques in Bell System telephone exchanges throughout the U. S.:  "No job is so important and no service is so urgent that we cannot take time to perform our work safely."  That was back when the Bell System was a monolithic nation-like organization, and it could afford hundreds of bronze safety plaques.  But everyone working in a business that creates potential hazards for its own employees, and especially for innocent bystanders, can afford to make the Bell System safety creed their own.  And something like this could go a long way toward making Indian construction sites and buildings safer places to be.

"A Bridge Too Close," about the I-35 accident appeared on Mar. 29, 2015. 

Monday, March 28, 2016

Drone Delivers to Doorstep: What Next?


Last Friday, Mar. 25, the Nevada startup Flirtey announced that it had made the first successful package delivery to a residential area in the U. S. with an autonomous drone (not steered by a person on the ground).  The demonstration flight, which was completed Mar. 10, carried a package of emergency supplies half a mile through the air to the porch of a vacant house outside Reno, Nevada.  Although large corporations such as Amazon and Wal-Mart have been toying with the idea of drone deliveries, Flirtey attributed its first to the experience it has gained with similar tests in Australia and New Zealand.  It turns out that several other countries are more welcoming to commercial drones than the U. S., where strict FAA rules still limit commercial delivery flights by drone to tests such as this one.

What does this achievement mean for a number of groups that may be affected by it:  consumers, companies in the delivery business, and people who earn a living delivering packages?

First, the consumer.  Whenever I thought of drone delivery in the past, I couldn't help but imagine how things could go wrong:  inadvertent haircuts from the propeller blades, for example.  Flirtey plans to avoid this sort of thing by keeping the drone itself at an altitude of around 40 feet (12 meters) while the package itself is lowered to the ground on a retractable cord leading to some sort of grappling hook that releases when the package hits the ground.  So unless you're asleep on the porch and the drone happens to land your box of live Maine lobsters on your head, chances are small that the drone will run afoul of living creatures on the ground.  Birds are another matter, of course, but I'm sure the Flirtey engineers have ways of dealing with them too. 

Although an engineer was killed in an accident involving a large experimental drone in 2013, no injuries or fatalities have so far resulted from a civilian drone colliding with a standard aircraft.  The FAA would like to keep it that way, and news reports of the Flirtey flight also mention that NASA is working on air-traffic-control software for drones.  It's possible that the authorities will work out something like the present direction-altitude rules for large-scale aircraft, but on a smaller scale.  Pilots flying under visual flight rules follow the "odd north east" rule:  if your plane's course is anywhere from north through east to south, your cruising altitude must be an odd number of thousands of feet plus 500 feet, and if your course is westerly, it must be an even number of thousands plus 500.  So it would be easy to make a similar rule for tens of feet instead of thousands for drones.  It wouldn't solve every potential collision problem, but it would help.
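Such a rule reduces to a couple of lines of arithmetic, as the rough sketch below shows.  The full-size numbers follow the hemispheric cruising-altitude rule just described; the drone version is purely my own invented analogy in tens of feet, not any actual FAA or NASA proposal:

# Hemispheric cruising-altitude rule: eastbound courses (0-179 degrees) fly
# odd thousands plus 500 ft; westbound courses (180-359 degrees) fly even
# thousands plus 500 ft.
def vfr_altitudes(course_deg, max_ft=17500):
    eastbound = course_deg % 360 < 180
    start = 3500 if eastbound else 4500
    return list(range(start, max_ft + 1, 2000))

# A purely hypothetical drone analog: the same parity idea, but in tens of feet.
def drone_altitudes(course_deg, max_ft=400):
    eastbound = course_deg % 360 < 180
    start = 30 if eastbound else 40
    return list(range(start, max_ft + 1, 20))

print(vfr_altitudes(90)[:3])     # [3500, 5500, 7500]  (eastbound)
print(drone_altitudes(270)[:3])  # [40, 60, 80]        (westbound)

The virtue of a parity rule like this is that two drones on crossing courses are automatically separated in altitude without anyone having to talk to anyone else.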

Large organizations whose business includes deliveries of small packages are eagerly awaiting the day when they can take advantage of drones.  While computerized scheduling and routing have improved the efficiency of manned delivery operations, the actual physical delivery of packages to homes hasn't changed much since the invention of the automobile.  Currently, the FAA rules require that delivery drones always be within sight of the operator.  That's going to involve an operator for a while yet, but you can picture one delivery guy getting a lot more done with the help of two or three drones in a densely populated neighborhood.  Of course, a package on a string can't go into an apartment complex and take the elevator to the 14th floor, but you've got to start somewhere.  So the initial operations will probably be a hybrid thing, with the delivery driver going to a central location, loading drones, and sending them to do the last run of a few hundred feet to individual houses.

Inevitably, that will lead to layoffs among delivery personnel, although with the seasonal nature of the delivery business, at first it might just mean that UPS and similar services won't hire as many temps during the Christmas rush as they used to—they'll just add more drones.  But if the rules eventually allow more nearly autonomous operation of drones, the unattended parts of the flights will be longer, and fewer live drivers will be needed.  And one more type of job that is currently open to someone with only a high-school education will become history.

This is not unalloyed bad news.  The nation survived the demise of the milkman in most parts of the country, and before that the iceman.  But as the current election cycle is demonstrating, for some time now the U. S. economy has been doing a fairly poor job of employing people with less than a college education, and there are lots of people out there who feel that they have gotten the short end of the economic stick.  And a good many college-educated workers with degrees in non-professional areas are underemployed, doing jobs for which they are overqualified.  This is not the place to go into this complex and many-faceted problem, but we simply note that technology is often a destabilizing force.  If you are stably under the thumb of a dictatorship, destabilizing can be good.  But just making things less stable by itself is not always helpful. 

It doesn't look like we will be getting packages from Federal Express floating down from the sky any time soon.  For whatever reason, the FAA has decided to make haste slowly on commercial drones, while other countries speed ahead.  That may give time for the job market to readjust more gradually to the future realities of the delivery business, however it is affected by the advent of drones.  The fact that the first package delivered was emergency supplies reminds us that there are disaster scenarios for which delivery drones will be a Godsend.  And nobody should resent that.

Sources:  Numerous outlets carried the news of Flirtey's accomplishment. I referred to reports on the websites of the Christian Science Monitor at http://www.csmonitor.com/Business/2016/0326/Startup-Flirtey-drone-delivery-is-good-news-for-nacent-industry (by the way, the word meaning budding or fledgling is spelled "nascent," not "nacent"), and Fortune at http://fortune.com/2016/03/25/flirtey-drone-legal-delivery-urban/.  I also referred to the Lapeer Aviation website http://www.lapeeraviation.com/odd-north-east/ for information about the "odd-north-east" rule. 

Monday, March 21, 2016

AlphaGo Defeats Human Go Champion: Go Figure


First it was chess:  world champion Garry Kasparov lost a six-game match to an IBM computer named Deep Blue in 1997.  And now it's the game called Go, which has been popular in Asia for centuries.  Earlier this month, Korean Go champion Lee Sedol lost four out of a series of five games in a match with AlphaGo, a computer program developed by Google-owned London firm DeepMind.  But Sedol says he now has a whole new view of the game and is a much better player from the experience.  This development raises some perennial questions about what makes people human and whether machines will in fact take over the world once they get smarter than us.

As reported in Wired, the Go match between Lee Sedol and AlphaGo was carried on live TV and watched by millions of Go enthusiasts.  For those not familiar with Go (which includes yours truly), it is a superficially simple game played on a 19-by-19 grid of lines with black and white stones, sort of like an expanded checkerboard.  But the rules are both more complicated and simpler than checkers.  They are simpler in that the goal is just to encircle more territory with your stones than your opponent encircles with his.  They are more complicated in that there are vastly more possible moves in Go than there are in checkers or even chess, so strategizing takes at least as much brainpower in Go as it does in chess. 

It's encouraging to note that even when Sedol lost to the machine, he could come up with moves that equalled the machine's moves in subtlety and surprise.  Of course, this may not be the case for much longer.  It seems like once software developers show they can beat humans at a given complex task, they lose interest and move on to something else.  And this shows an aspect of the situation that so far, few have commented on:  the fact that if you go far enough back in the history of AlphaGo, you find not more machines, but humans.

It was humans who figured out the best strategies to use for AlphaGo's design, which involved making a lot of slightly different AlphaGos and having them play against each other and learn from their experiences.  Yes, in that sense the computer was teaching itself, but it didn't start from scratch.  The whole learning environment and the existence of the program in the first place was due, not to other machines, but to human beings. 
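DeepMind's actual training pipeline reportedly combined deep neural networks with tree search, which is far beyond a blog post, but the play-slightly-different-copies-against-each-other idea described above can be caricatured in a few lines.  Everything here (the policy representation, the pretend game, the mutation step) is a stand-in of my own, not AlphaGo's method:

import random

# Caricature of self-play improvement: a "policy" is just a vector of numbers,
# play_game() is a stand-in for a real game between two policies, and the
# winner of a short match becomes the next generation's incumbent.
def play_game(policy_a, policy_b):
    """Pretend game: the stronger (larger-sum) policy usually wins."""
    return "a" if sum(policy_a) + random.gauss(0, 1) > sum(policy_b) else "b"

def mutate(policy, scale=0.1):
    return [w + random.gauss(0, scale) for w in policy]

policy = [0.0] * 5
for generation in range(1000):
    challenger = mutate(policy)                  # a slightly different copy
    wins = sum(play_game(challenger, policy) == "a" for _ in range(11))
    if wins > 5:                                 # challenger beat the incumbent
        policy = challenger

print(sum(policy))   # tends to drift upward as self-play finds stronger policies

Notice that every design decision in that loop, from the shape of the policy to the rules of the match, was made by a human, which is exactly the point of the preceding paragraph.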

This gets to one of the main problems I have with artificial-intelligence (AI) proponents who see as inevitable a day when non-biological, non-human entities will, in short, take over.  Proponents of what is called transhumanism, such as inventor and author Ray Kurzweil, call this day the Singularity, because they think it will mark the beginning of a kind of explosion of intelligence that will make all of human history look like mudpies by comparison.  They point to machines like Deep Blue and AlphaGo as precursors of what we should expect machines to be capable of in every phase of life, not just specialized rule-bound activities like chess and Go. 

But while the transhumanists may be right in certain details, I think there is an oversimplified aspect to their concept of the singularity which is often overlooked.  The mathematical notion of a singularity is that it's a point where the rules break down.  True, you don't know what's going on at the singularity point itself, but you can handle singularities in mathematics and even physics as long as you're not standing right at the point and asking questions about it.  I teach an electrical engineering course in which we routinely deal with mathematical singularities called poles.  As long as the circuit conditions stay away from the poles, everything is fine.  The circuit is perfectly comprehensible despite the presence of poles, and performs its functions in accordance with the human-directed goals set out for it. 
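For readers who have not met poles before, a standard textbook example (my illustration, not anything from the original post) is the simple RC low-pass filter, whose transfer function is

\[
H(s) = \frac{1}{1 + sRC},
\]

which has a single pole at s = -1/(RC).  The function blows up at that one point, but everywhere else it is perfectly well behaved, and the circuit's response to any realistic input can be computed without trouble.  That is the sense in which engineers live comfortably with singularities.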

All I'm seeing in artificial intelligence tells me that people are still in control of the machines.  For the opposite to be the case—for machines to be superior to people in the same sense that people are now superior to machines—we'd have to see something like the following.  The only way new people would come into being is when the machines decide to make one, designing the DNA from scratch and growing and training the totally-designed person for a specific task.  This implies that first, the old-fashioned way of making people would be eliminated, and second, that people would have allowed this elimination to take place. 

Neither of these eventualities strikes me as at all likely, at least as a deliberate decision made by human beings.  I will admit to being troubled by the degree to which human interactions are increasingly mediated by opaque computer-network-intensive means.  If people end up interacting primarily or exclusively through AI-controlled systems, the system has an excellent opportunity to manipulate people to their disadvantage, and to the advantage of the system, or whoever is in charge of the system. 

But so far, all the giant AI-inspired systems are firmly under the control of human beings, not machines.  No computer has ever applied for the position of CEO of a company, and if it did, it would probably get crossways to its board of directors in the first few days and get fired anyway.  As far as I can tell, we are still in the regime of Man exerting control over Nature, not Artifice exerting control over Man.  And as C. S. Lewis wrote in 1947, ". . . what we call Man's power over Nature turns out to be a power exercised by some men over other men with Nature as its instrument." 

I think it is significant that AlphaGo beat Lee Sedol, but I'm not going to start worrying that some computerized totalitarian government is going to take over the world any time soon.  Because whatever window-dressing the transhumanists put on their Singularity, that is what it would have to be in practice:  an enslavement of humanity, not a liberation. And as long as enough people remember that humans are not machines, and machines are made by, and should be controlled by, humans, I think we don't have to lose a lot of sleep about machines taking over the world.  What we should watch are the humans running the machines.

Sources:  The match between Lee Sedol and AlphaGo was described by Cade Metz in Wired at http://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/.  I also referred to the Wikipedia articles on Deep Blue, Go, and AlphaGo.  The quotation from The Abolition of Man by C. S. Lewis is from the Macmillan paperback edition of 1955, p. 69.