Monday, January 09, 2017
The citizens of the U. S.'s most populous state have long had a love affair with the automobile. Life in Los Angeles is well-nigh impossible without wheels of some kind, and many commuters spend almost as much time in their cars as they do on the job. As of Jan. 1, it is illegal in the state of California to use your mobile phone while driving unless you use hands-free technology. Fortunately for the millions who will now have to find some other way to communicate from their cars, the automakers are rushing to integrate voice-recognition systems such as Amazon's Alexa into their products so that you can simply ask for directions or ask to talk to a friend, and the system will do the rest.
As reported in a recent New York Times article, Ford announced that Alexa will be a feature of its newest hybrid models later this year. A mobile Internet connection is vital to the new service, which counts on cloud computing for the often computationally intensive task of voice recognition. The same Internet connection will be used for many of the services accessed by the software: online purchases, remote control of "Internet-of-Things" devices, and many other uses besides the obvious ones of telephone service and GPS guidance.
The new law is a step forward in the struggle to reduce traffic accidents caused by distracted driving. But we have yet to see what the effects of a well-functioning voice-recognition system in a car may be in terms of safety.
Studies have shown that visual distractions can be deadly to drivers, while sounds are much less so. Most people can carry on an animated conversation with a passenger without being too distracted from driving, and it's reasonable to assume that conversations with voice-recognition software will not be much more distracting than having a live passenger beside you. Still, depending on the usefulness and accuracy of the system and the number and complexity of features, things could get complicated.
Your scribe here lives such a sheltered life that the closest I've come to an Alexa is seeing the ad for it every time I click onto Amazon.com. So I am not in a position to pass judgment personally on how well these systems work. Apparently they work well enough to have made Amazon a lot richer in the past year or so, and the quality trend as more artificial-intelligence resources are applied to them will only be upward. Like many other new technologies, the real challenge in growing the market won't be so much technical as it will be changing people's habits. And the California law is a powerful incentive to do so.
Consumers lie on a spectrum with regard to the adoption of new technologies. Some folks—often younger ones—are early adopters, the ones who wait in line all night to be the first to buy a new iPhone or what have you. The bulk of us don't rush out right away to get every latest thing, but when friends or acquaintances tell us about the item and how pleased they are with it, we go ahead and buy one when our old one wears out or when some business or personal need makes it better to buy than not. And then, bringing up the rear of the bell curve, there are late adopters such as myself, who cling to old technologies with a grip that often takes legal force to loosen.
There's no need to spend much marketing effort on early adopters—they often turn out to be a product's best informal salespeople as they show off their new purchases to others. The major challenge is getting the average person to change their ways in the face of a new technology. And California has done the automakers and the voice-recognition people a big favor in passing their hands-off-the-phone law.
Casual observation shows that a large fraction, if not a majority, of people who drive also like to talk on the phone at the same time. If they haven't already adopted hands-free technology, as of this month, in California at least, they'll have to do something in order to avoid the threat of getting a ticket. Enforcement is going to be lax at first, but the understanding is that this is just a grace period to give people time to adopt a new way of phoning while driving, and eventually you'll have to be using some kind of voice-recognition system, whether it's in your phone or installed in your vehicle.
For people such as real-estate agents, maintenance providers, and others who drive around all day and have to be in touch with customers, the new law is just part of having to do business, and they will either buy a car with a built-in system or achieve their goal some other way, if they haven't already.
For others who have not made a habit of talking on the phone while driving, the law will mean either pulling off the road when their hands-on phone goes off, or ignoring it until they reach their destination.
Eventually, though, such actions will seem as quaint as hunting around for a pay phone to make a phone call. The last time I saw a working pay phone was last summer on a drive through a small Nebraska town. If I recall correctly, the same town also had a small operating movie theater in the middle of town, and a factory near the edge of town that made lawnmowers. I didn't see any signs saying "Caution — Entering the Twilight Zone" but it gave me that feeling.
The California law, and the automotive voice-recognition systems that will allow people to abide by it, are all part of the push to make us constantly connected whether we're at home, at work, or in between. It's what people seem to want, or at least think they want. Why they think they want it is another question, but one best left for another time.
Sources: The New York Times article "Coming From Automakers: Voice Control That Understands You Better" by Neal F. Boudette and Nick Wingfield appeared on Jan. 5, 2017 at http://www.nytimes.com/2017/01/05/automobiles/automakers-voice-control-amazon-alexa.html.
Monday, January 02, 2017
We are now well into the era of cyberwarfare—the use of computers and computer networks in military, terrorist, and diplomatic conflicts. But to judge by the recent tiff between President Obama and Russian President Vladimir Putin, neither the U. S. nor Russia has figured out exactly how to use these new weapons, or how to defend against them effectively.
Last July, Wikileaks unleashed a flood of embarrassing emails hacked from the Democratic National Committee, leading to the resignation of that organization's chairwoman Debbie Wasserman Schultz and undoubtedly influencing the Presidential election, though to what degree it is impossible to say. In December, the CIA announced that it was confident that Russian hackers were responsible for stealing the emails and giving them to Wikileaks. And on Dec. 29, President Obama announced that he was retaliating for the hacks by sending home 35 Russian diplomats and taking other actions against the Russian diplomatic corps in the U. S. After initial talk by Russian officials of retaliation against the retaliation, President Putin surprised many by saying he would suspend any actions against U. S. diplomats in Russia, at least until the Trump administration takes office.
Retaliation against diplomats has been around ever since there have been diplomats. Over the decades, countries have developed traditional ways of treating official representatives from foreign lands with policies such as diplomatic immunity from routine prosecution, the suspension of normal customs inspection for diplomatic materials, special diplomatic zones around embassies, and other perks. But one reason for all these special privileges is that they can be revoked at any time.
This writer is old enough to recall some of the many times that the old Soviet Union (USSR) engaged in these kinds of games with the U. S. on any pretext, or sometimes no pretext at all. It was all part of the Cold War chess game, and each move was watched closely for indications that the Soviets might want to warm up the war a little. Everyone agrees that sending a diplomat packing is a lot better than throwing bombs, so while tensions are raised by such incidents, they're usually a sign that serious conflicts are not in the immediate offing.
Still, there are a couple of notable and disturbing aspects of the DNC hacks and their consequences. One concerns the identity of the hackers, and the other concerns what constitutes a truly effective response to such attacks.
It took nearly six months for the CIA to be confident enough to announce publicly that Russians were in fact responsible. In that respect, hacking and other hard-to-trace cyberattacks resemble terrorism: the identity of those responsible for a given attack is usually not immediately known, and may never be discovered. Although good detective and investigative work often uncovers the perpetrators eventually, the delay between the attack and the discovery of who did it allows uncertainty to dominate the situation, leading to general confusion, controversy, and other problems that are usually exactly what the attacker wants to achieve in the enemy camp. It's possible that the CIA made its announcement when it did not because it took that long to figure out who did it, but for other diplomatic or political reasons. Still, it's hard to fight back against an enemy if you don't know who he is.
Identifying the source of a cyberattack is only the first step in an effective response. As in conventional warfare, one doesn't want to overreact, but on the other hand, just letting an enemy get away with anything isn't good either. An important factor in these not-yet-open-warfare conflicts is how the public perceives them. Both the U. S. and the Russian presidents do everything with an eye to their constituents, so things done in secret which have secret effects are not that useful. Instead of using the hacked emails for their own purposes, whoever hacked them (probably the Russians) gave them maximum publicity, and to the extent that the DNC was hampered in its operations, the attack was a success.
What's new and disturbing about this particular incident is that it represents a significant intrusion into the domestic electoral process by a foreign power which overtly favored a particular candidate—one who will take office on Jan. 20, barring unforeseen circumstances. What makes the situation worse is that the President-elect does not seem to be all that troubled about it. Four years in office is a long time, though, and it's likely that Trump and Putin will at some point fail to agree on something, after which it's anyone's guess what will happen.
Part of what makes it so hard to defend against cyberattacks is the global nature of the Internet environment—Moscow or Paris or Adelaide is just as close to my Internet connection as the neighbor down the street. Traditional military defenses were geographically fixed and you could draw contours of safety within them—here, you have to be concerned about ground attacks, there you are subject to air bombings, and way back behind the front lines, there was almost nothing to worry about. But cyberattacks can go anywhere there's an Internet connection, and the targets are often only as well-defended as the private organizations and their IT people can make them. As we know, these defenses range from the almost impregnable to the nearly nonexistent, and so many attractive cyber-targets are almost defenseless against a concerted attack by well-resourced agents of a foreign power.
It's not clear that the best defense is a good offense either, especially when it's not immediately clear who is doing the attacking. And when many thefts of data are not discovered until months or years after the damage is done, it's even harder to mount an effective response.
It looks like international cyberwarfare will muddle along in this confused state unless and until such a major attack occurs that we get serious about some sort of national defense policy against foreign cyberwarfare. There are serious concerns being voiced these days about the hacking of power grids and other vital infrastructure systems such as air-traffic control and the domestic Internet itself. Our best defense for these systems right now is that nobody has a strong reason to attack them, but that could change at any time. And if it does, I just hope we're ready for what comes afterwards.
Sources: I referred to a report on President Obama's retaliatory actions against Russia carried by CNN on Dec. 29 at http://www.cnn.com/2016/12/29/politics/russia-sanctions-announced-by-white-house/, and also a report on Putin's non-response at https://www.washingtonpost.com/world/russia-plans-retaliation-and-serious-discomfortoverus-hacking-sanctions/2016/12/30/4efd3650-ce12-11e6-85cd-e66532e35a44_story.html.
Sunday, December 25, 2016
In 1936, during the depths of the Great Depression, a professor of physical chemistry at Yale named Clifford C. Furnas published a book in which he tried to anticipate the next great advances in science and engineering during the following century. His book was inspired by a visit he made to the Chicago World's Fair in 1933, otherwise known as the "Century of Progress Exposition," which marked the 100-year anniversary of the founding of Chicago. A lot of the technical exhibits designed to show how the world of tomorrow would be better than the depression of today didn't work properly, so he went home, surveyed the state of science, engineering, and technology, and made his best guesses as to how things would be by 2033, appropriately entitling the book The Next Hundred Years.
My interest isn't so much in the accuracy of his technical predictions as in his expectations for what the trend of automation would yield for the economy and the working life of the average citizen. It was already obvious by 1933 that a lot of jobs then done partly or wholly by hand would be performed by machines or even robots in the future. But what Furnas missed, along with nearly every other prognosticator up to the end of World War II, was the rise of the electronic computer, computer networking, and the growth in Internet-based economic activity. And without the computer, modern robotics would be impossible, because without digital control systems (now including artificial intelligence), a robot can't do much more than act as a power-assist to a human being.
What we're talking about is the rise in what economists call productivity: the economic output of a nation divided by the number of hours worked. One person using a small lathe and a few hand tools can build a watch in maybe a few dozen hours, depending on what they start with. But one person at the controls of an otherwise fully automated watch factory can make hundreds or thousands of watches per hour. And Furnas was right in his prediction that advances in automation would (a) greatly increase the productivity of the average worker, and (b) render obsolete entire classes of jobs that previously employed millions of people.
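The productivity arithmetic above can be put in a few lines of code. The watch-making figures here are illustrative guesses of my own, not numbers from Furnas's book:

```python
# Productivity, as economists define it: output divided by hours worked.
def productivity(units_produced: float, hours_worked: float) -> float:
    """Units of output per hour of labor."""
    return units_produced / hours_worked

# Illustrative (made-up) figures for the watch example:
# one artisan with a small lathe and hand tools, roughly 36 hours per watch...
artisan = productivity(units_produced=1, hours_worked=36)

# ...versus one operator overseeing an automated line making 500 watches an hour.
automated = productivity(units_produced=500, hours_worked=1)

print(f"artisan:   {artisan:.3f} watches/hour")
print(f"automated: {automated:.0f} watches/hour")
print(f"gain:      {automated / artisan:.0f}x")
```

Even with these rough numbers, the gain works out to four orders of magnitude, which is the whole story of points (a) and (b) in miniature.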
Where he went wrong was his prediction about what the result of these changes would be.
In Furnas's view, the average man (he barely discussed women at all), when faced with a choice of working 40 or 50 hours a week for ever-increasing pay, or else getting paid the same wages for less and less work, would choose to work less and get paid the same amount for it. Consequently, the great challenge he foresaw for the future was to find things for people to do with all their spare time, now that their jobs could be done in as little as one or two hours a day. He summarized the difficulty thus: "Our problem will be to keep the citizenry on even keel while they have a wealth of time on their hands, for certainly a society steeped in mere idleness will soon lose its moral fiber, its material possessions and its reasons for existence."
Why didn't things turn out that way? Why isn't the U. S. a peaceful country full of debating societies, painting groups, and volunteer choirs, instead of harboring an increasingly divided populace in which some better-educated folks live a life of relative freedom and interesting work, while most people without advanced degrees work longer and longer hours in uncertain dead-end jobs (sometimes two or three jobs at once) and feel they can barely get by? And don't forget the growing class of working-age men who have simply resigned from the workforce altogether and spend their days playing video games and in other forms of, in Furnas's words, "mere idleness."
A complete answer to these questions would require a book, or several books by a group of experts with talents that I lack. But in my 300 words or so remaining, I'll hazard a few guesses.
One answer will sound paradoxical: the rise in the standard of living. The phrase "keeping up with the Joneses" captures some of this idea. For Furnas's vision of the leisure class to come to pass, it wouldn't do for just a few people to choose shorter working hours over more pay—most of the country would have to do it. And in the hyper-competitive international economic arena, a country in which most of its working people work only two hours a day would lag behind countries where 40 or 50 hours a week was the norm.
Another answer is that people are, frankly, greedy. And greed, at least of the mildly acquisitive type, is the engine that fuels advertising and consumer economies such as in the U. S. and most other industrialized nations these days. There are a few people who choose to live on next to nothing and cut themselves off from the grid, but most of us regard them as eccentrics at best and dangerous at worst.
A third factor is what I call "building-code creep." If you attempted to build a house today the way a modestly priced house was built in 1930, you would be violating nearly every building code in the book. Where's the third wire for grounding the outlets? Where's your insulation, air conditioning, smoke alarms? What's all this lead paint doing here? That gas water heater has no automatic flameout-protection valve. In thousands of ways that have made life safer and more convenient, we have changed the rules of material life so that it costs a great deal more to live simply than it used to. In certain rural parts of the country, most if not all of these requirements can be skipped, but at the price of living dangerously.
For a variety of reasons, we seem to be entering a period in which increasing numbers of people in the U. S. choose to live without jobs. But most of them don't seem to be happy about it, and I think Furnas was on to something when he expressed concern about the deteriorating moral fiber of a nation where idleness becomes a way of life for many people. The key, if there is one, lies in the phrase "reasons for existence," but that is a topic for another blog.
Sources: Clifford C. Furnas's The Next Hundred Years was published in 1936 by Reynal & Hitchcock, New York. The quotation about keeping citizens on an even keel is from p. 367. I previously referred to this book in my blog on Sept. 23, 2013, "Engineers and Technological Unemployment: What Are People For?"
Monday, December 19, 2016
The other day I was making some hotel reservations, and set them up with two different hotel chains. One is universally pet-friendly (we often travel with a dog), and you can call the hotel you want to stay at and talk with the desk clerk directly to make your reservation. The clerk gets into their reservation system and takes your information and usually there's no problem, although if you call at a busy time it can be a little stressful on the clerk.
The other chain makes all phone reservations through a centralized phone system—if you call the individual motel, the desk clerk transfers you to the same reservation number you can call directly. Recently this chain transitioned to a computerized voice-recognition system—your voice is unheard by human ears when you dial the number. It didn't go well.
I suppose those familiar with the robotic phone-tree industry could name the company that makes this system by the way it sounds. It has a friendly female voice saying, "Okay, what can we do for you? Tell me if you want to make a reservation," etc. At first I hoped I'd eventually get to talk with a live human, because my experience with these robot voices has been mixed at best. Maybe it's my tone of voice, maybe it's my Southern background, but unless the computer is asking for simple yes-or-no answers, I don't have much luck with them.
It asked me for the place I wanted to stay and what day and how many nights. I tried to tell it—twice, in fact—but all I got back was this peculiar fast clicking ("pip-pop-pip-pop") which I have to believe is what the system puts on the line instead of Muzak while it's trying to puzzle out what you said, and then it asked the same question all over again. Finally I hung up and used the chain's website to make the reservation, which may be what they want people to do anyway—I'm sure it's a lot less trouble to them than their robot telephone operators.
This is an up-close and personal encounter with something that is only going to get worse—or better, depending on your point of view—in the future. I'm talking about the replacement of people with technology in a wide variety of jobs. In a recent issue of The New Yorker magazine, Elizabeth Kolbert reviews a number of books concerned with the recent advances in artificial intelligence (AI), and the effects this is going to have on the job market, the economy, and society in general.
This isn't going to happen overnight. Paradoxically, it's easier to program a computer to diagnose certain types of diseases with expert systems than it is to teach one how to fold towels. Kolbert cites an experiment at U. C. Berkeley with a robot that learned to fold a towel—after practicing, it got its time down to twenty-five minutes per towel. In that regard, at least, Rosie the Robot isn't going to replace hotel housemaids any time soon.
On the other hand, if you work in a phone-answering "boiler room," you have reason to be worried, although my own experience with the robotic reservation clerk shows there is still a place for humans on the other end of the line. Kolbert classifies jobs into four types: manual routine jobs (e. g. folding towels or working on an assembly line), cognitive routine jobs (e. g. keeping track of a warehouse inventory), manual nonroutine jobs (e. g. home health care or brain surgery), and cognitive nonroutine jobs (e. g. developing a new AI system). Both types of routine jobs, where you can basically write an algorithm about what to do in any given situation, are ripest for replacement by robots and AI software.
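Kolbert's two-axis scheme amounts to a small lookup table. The sketch below is my own rendering, not code from the article; it encodes her rule of thumb that routine work of either kind (anything you can basically write an algorithm for) is what automation replaces first:

```python
# Kolbert's four job categories, keyed by (skill type, routineness),
# with her examples as the values.
JOB_CATEGORIES = {
    ("manual", "routine"): ["folding towels", "assembly-line work"],
    ("cognitive", "routine"): ["tracking warehouse inventory"],
    ("manual", "nonroutine"): ["home health care", "brain surgery"],
    ("cognitive", "nonroutine"): ["developing a new AI system"],
}

def ripe_for_automation(skill: str, routineness: str) -> bool:
    """Routine jobs, whether manual or cognitive, are the ones most
    easily handed to robots and AI software."""
    if (skill, routineness) not in JOB_CATEGORIES:
        raise ValueError("unknown job category")
    return routineness == "routine"

print(ripe_for_automation("cognitive", "routine"))   # True
print(ripe_for_automation("manual", "nonroutine"))   # False
```

Note that the skill axis drops out of the decision entirely, which is the counterintuitive part: warehouse bookkeeping is more automatable than towel-folding turns out to be in practice, but both sit on the "routine" side of the line.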
The fear that humans will lose their jobs to machines goes back at least to the 1700s, when mechanical looms and spinning jennies began to replace weavers and the one-person spinning wheel. But until recently, industrialization produced at least as many new jobs as the old ones it eliminated, if not more.
The problem now is that many new firms that attract billions in capital operate with essentially nobody on the payroll. Kolbert cites an extreme example: the messaging firm WhatsApp, with its fifty-five employees, was bought by Facebook in 2014 for twenty-two billion dollars. That's four hundred million dollars per employee. When I told my wife about it, she said, "Well, I hope they didn't lose their jobs when they got bought out." I hope not either. Maybe the janitor did, but you can rest assured that some of that twenty-two billion found its way into the pockets of at least a few of those people.
Leaving lottery-like occurrences aside, the point is that both software-based and manufacturing enterprises are finding ways to do what they need to do with fewer and fewer warm bodies who are not in the upper echelon of the cognitive non-routine class. The few people they still need—lawyers, managers, creative people, and other "symbolic manipulators," in George Gilder's phrase—may form the future ruling class of what software developer Martin Ford calls "techno-feudalism."
But even feudal lords needed their serfs to work their lands. The ruling class of the future will have no need for anyone not in their class, except as consumers. Most of the authorities Kolbert cites figure that the best we can do with the vast majority of us ordinary mortals who have no aptitude for programming, management, the law, or high finance, is to pension us off with guaranteed incomes, or something that amounts to that, and hope we don't decide to up and storm the castle some day.
Next week I plan to look at an alternate view of the same problem, written during the depths of the Great Depression, but I've run out of space today. In the meantime, if you have a job, be grateful for it, and share some of what you have with those less fortunate.
Sources: Elizabeth Kolbert's piece "Rage Against the Machine: Will Robots Take Your Job?" begins on p. 114 of the Dec. 19 & 26, 2016 issue of The New Yorker magazine.
Monday, December 12, 2016
On the morning of Saturday, July 30, 2016, a group of sixteen people gathered in a Wal-Mart parking lot in Central Texas before sunrise for what they hoped would be a thrilling and memorable experience. Several of them were married couples or newlyweds. Ross and Sandra Chalk were 60 and 55 but recently married, while John and Stacee Gore were both in their 20s and celebrating their third wedding anniversary that week. Others showed up as a result of a birthday present given by a loving friend or relative. All fifteen passengers were trusting balloon pilot Alfred Nichols to take them up in his hot-air balloon, give them a wonderful experience, and return them safely to earth. But two out of three wasn't going to be good enough.
As often happens on summer mornings in this part of Texas, low clouds drifted through the sky. But after a short delay, Nichols decided to fly anyway, and around 7 AM, shortly after sunrise, the balloon took off with fifteen passengers and the pilot.
Photos taken during the flight show patchy clouds and fog beneath the balloon. Evidently Nichols decided to land near Maxwell, Texas, about forty miles southeast of Austin. Utility-company records show that at 7:42 AM, something happened to trip a protective relay on a high-voltage transmission line crossing a cornfield. First responders soon discovered that the balloon had become entangled in the transmission line, caught fire, and crashed, killing all sixteen people aboard, including Nichols. It was the worst balloon crash in U. S. history in terms of fatalities, and subsequent investigations have revealed some unsavory facts about Nichols and about the industry in general.
At a hearing held Friday, Dec. 9, in Washington, D. C., the National Transportation Safety Board (NTSB) presented documentation and evidence about the crash, which is still under investigation. Toxicology reports show that Nichols had seven different prescription drugs at detectable levels in his body. Prior to the crash, he had been convicted in Missouri on four charges of driving while intoxicated, and at the time of the crash he was not allowed to drive a car in Texas. Nevertheless, he held a valid commercial balloon pilot certificate. Weather reports from the day of the crash show that the cloud ceiling had lowered to only 700 feet at the time of launch, and other balloon pilots present at the hearing agreed that they would not have flown under such conditions. Nichols appears to have been a disaster waiting to happen.
We may be seeing a pattern that is all too familiar: a new activity or business arises with little or no regulation, a tragedy results in headline-grabbing deaths, and only after the tragedy are laws amended to regulate the activity or business properly. Although hot-air balloons were the first form of human flight to be invented, back in the 1700s, balloon rides were so infrequent, and the number of people involved so small, that a light-handed regulatory environment seems to have sufficed for decades. But this tragedy may mark the point at which regulations catch up with the larger volume of customers taking rides in larger balloons that present a greater danger to more people than ever.
The Federal Aviation Administration (FAA), recognizing these dangers, has established regulations for commercial hot-air balloon pilots and requires them to pass rigorous tests, both written exams and practical ones in a working balloon. But beyond that, pilots are largely left on their own to follow the elaborate advice in the 252-page Balloon Flying Handbook issued by the FAA. Most commercial balloon operations are small, like the one-man show that Nichols ran, and lack the natural supervision that working for even a small charter-plane company would entail. The solo nature of balloon flying, plus the fact that the person piloting the balloon is probably the one who stands to profit the most if a full-capacity flight goes forward in hazardous conditions, means that there are built-in conflicts of interest in this type of flying that are not faced by pilots who work for major airlines, for example. For this reason alone, one would hope that regulatory oversight would be at least as rigorous as it is for commercial charter-flight pilots of fixed-wing aircraft, not less. As it is, however, there are not even any reliable statistics on how many flight hours are logged by commercial balloon pilots in the U. S., as some public-health experts researching the problem found in 2013.
Part of the problem is that the regulatory question is caught in a turf war between the NTSB, which investigates transportation accidents of all kinds, and the FAA, which issues flight safety regulations and requirements for both flight equipment and pilots. The NTSB has been pushing for tighter balloon-pilot regulations for years, but the FAA has so far refused to act, trusting to private balloon-pilot organizations to do self-enforcement. In Nichols' case, at least, this kind of enforcement failed.
It's all very well to publish books of regulations and advice, but if enforcement is left solely up to the person who also stands to profit personally if the rules are flouted, the FAA is guilty of putting too much trust in fallible human nature. Something along the lines of periodic background checks and even surprise drug tests should be implemented for commercial hot-air balloonists who take the lives of others into their hands. Commercial balloons can carry as many as 32 passengers, and newspaper reports have pointed out that many charter and common-carrier fixed-wing aircraft don't carry that many passengers. The bottom-line purpose of flight regulation is to protect the lives of passengers, and the FAA's creaky system for doing that for hot-air balloon riders crashed along with the sixteen people who lost their lives on that summer day.
Balloons tend to be associated in the public mind with fun, frivolity, and pleasant times. The balloon Nichols was piloting had a big smiley face with sunglasses painted on it. If people are going to continue to ride balloons for pleasure, we should make sure that they aren't putting their lives into the hands of someone who can't drive them to the takeoff point because of drunk-driving convictions. I hope the FAA and the NTSB can work out their differences to revise hot-air ballooning regulations and policies so that the tragic crash last summer is the last one of that magnitude for a long, long time.
Sources: I referred to reports of the NTSB hearing held Dec. 9, 2016 on the San Antonio Express-News website at http://www.mysanantonio.com/news/local/texas/article/NTSB-holds-hearing-on-balloon-crash-that-killed-10777463.php and KXAN-TV at http://kxan.com/2016/12/09/witnesses-recall-lockhart-hot-air-balloon-crash-that-killed-16/ and http://kxan.com/2016/10/07/hot-air-balloon-regulations-unchanged-despite-deadly-crash/. The paper "Hot-Air Balloon Tours: Crash Epidemiology in the United States, 2000-2011" by S.-B. Ballard, L. P. Beaty, and S. P. Baker was published in Aviation, Space, and Environmental Medicine in 2013 in vol. 84, pp. 1172-1177, and is available online at
The FAA's "Balloon Flying Handbook" is available as a download at https://www.faa.gov/regulations_policies/handbooks_manuals/aircraft/media/FAA-H-8083-11.pdf.
Monday, December 05, 2016
The idea of a "public utility" is firmly entrenched in the minds of most people who live in industrialized countries today. Things like the water supply, electric power, and more recent developments such as Internet service are all considered well-nigh essential to modern life. Most people would probably agree that because of this, governments have the right to regulate public utilities in a way that would be regarded as heavy-handed or illegal if the firm involved were making dental floss, for example, instead of providing a necessity like clean water or electric power. But I, for one, never stopped to wonder where the phrase came from until I read a historical article by Adam Plaiss called "From Natural Monopoly to Public Utility."
Plaiss traces the origin of the phrase all the way back to philosopher John Stuart Mill, who used it in a different sense, as a modifier rather than a noun. Mill referred to canals and bazaars as works useful to the general public—that is, works of "public utility." But the concept that a system of waterworks or communications could be called a public utility dates back only to the late 1800s, when the related concept of a natural monopoly began to influence thinkers during what came to be called the Progressive Era.
Progressives enthused about applying relatively new social sciences such as economics to pressing public problems such as the exploitation of the working classes by private monopolistic companies. One of the first professionally trained economists in the U. S. was Richard T. Ely, who earned his doctorate in Germany and came back to join the effort to apply scientific approaches to economics as a way of "bring[ing] about a better world." And during a period in the U. S. when utility companies selling gas, water, electricity, and telephone service were rapidly expanding, Ely examined the question of a natural monopoly. Was there such a thing, and if so, what were its characteristics?
Around 1888, Ely came up with a set of criteria that made an entity a natural monopoly. The thing it supplied had to be a necessity, like water. The area it served had to be geographically distinct. And there could be no wasteful duplication of service within the area. A classic example of what Ely called a natural monopoly was a water-supply company. The heavy expense of laying pipes and distribution networks made it virtually impossible for there to be meaningful competition between two rival water-supply companies for the same customers. So if a service met Ely's criteria for being a natural monopoly, Ely believed it was the public's right to regulate that service closely.
Perceptive and thoughtful as Ely was, Plaiss points out that he had a blind spot when it came to the root cause of a natural monopoly. Ely attributed the cause to the nature of the hardware infrastructure itself. But the assumption that only private capital could afford to build utility services was so universally accepted at the time that Ely failed to see how the economic background of late-1800s America contributed to the existence of natural monopolies. It is only a slight exaggeration to say that Ely believed technology caused natural monopolies, not people.
And because Ely saw the creation of natural monopolies as "technologically determined," as historians put it, he felt it was necessary for all owners of such monopolies to be subject to government regulation. Otherwise, horrors such as Plaiss cites in his paper might come about, and did in fact happen in the 1880s and 1890s. For example, privately-owned water companies in cities such as Houston and Seattle refused to extend their networks to newer parts of the cities, hampering fire departments which had no water hydrants to connect to in case of fire. And a typhoid-fever outbreak in Superior, Wisconsin was caused by impure water provided by a private water company. Thus, Ely believed that effective governmental control, if not outright ownership, of natural monopolies was necessary to prevent the exploitation of the masses that would result from unregulated private ownership.
After Ely published his thoughts along these lines, a Progressive journalist named Henry Call first used the phrase "public utility" as a noun in 1895, meaning by it any organization that enjoys what Ely would call a natural monopoly in delivering what was considered a modern necessity. Call widened this category to include "banks, railroads, telegraphs," and municipal services such as water and gas. In the coming years, as cities and states established regulatory commissions and agencies for such utilities, the public got used to the idea that certain types of business could be categorized as public utilities, and therefore subjected to regulation. Many states passed regulatory laws for public utilities in the twenty years or so after 1900, which saw the height of the Progressive Era. And although the free-market trends of the 1920s put a damper on further attempts at regulation, the distress of the Great Depression renewed public enthusiasm for government controls on all sorts of businesses that looked like public utilities. The establishment of the Federal Communications Commission in 1934 was squarely in the tradition of regulating public utilities such as the airwaves, for example.
Since the Progressive Era, the regulatory pendulum has swung back and forth. As late as the 1970s, airlines, the telephone system, and electric utilities in the U. S. were all closely regulated and rather dull businesses, guaranteed an annual profit by their regulatory agencies but not encouraged to do anything rash or speculative. By and large, this situation produced stability and profitability, but discouraged technological innovation. The spate of deregulation that began in the 1980s and continues largely to this day contributed to an explosion of new communications technologies—cable TV, mobile phones, and the Internet, to mention only a few—but has arguably had its downsides, as many smaller cities lost air service altogether and the deregulated electric-power market was gamed by near-criminal enterprises such as Enron.
With at least the hope of some fresh winds blowing through Washington these days, we may see a swing of the regulatory pendulum back toward tighter controls in some services, or looser ones, depending on whether the interests of the supposedly downtrodden public or of the wealthy owners of public utilities win out.
But whatever happens, we will do well to remember that the idea of a public utility is only about 130 years old, and its definition has twisted and turned with the political winds of the times in which it was used.
Sources: "From natural monopoly to public utility: technological determinism and the political economy of infrastructure in progressive-era America," by Adam Plaiss, appeared in the Society for the History of Technology journal Technology and Culture (Oct. 2016, vol. 57, no. 4, pp. 806-830).
Monday, November 28, 2016
Many generations of technology ago—that is to say, in the 1950s—there was a popular TV show called "Father Knows Best," starring Robert Young as the father of four children whose escapades and misfortunes always wound up with the kids having a talk with Daddy. When this happened, you knew the final commercial break was coming up and everything would be tied up neatly in a few more minutes.
Real family life in the 1950s wasn't as easy to fix as "Father Knows Best" portrayed, and neither is the problem of drivers getting distracted by portable devices such as mobile phones, tablets, and so on. Some observers attribute the recent rise in per-mile auto fatalities in the U. S. mainly to electronic distractions, and the National Highway Traffic Safety Administration (NHTSA), an agency of the U. S. Department of Transportation (DOT), has recently issued a draft set of "guidelines" for makers of electronic devices and automotive manufacturers to follow in order to address this problem.
Everybody admits there's a real problem. If you've driven more than a few hours in rush-hour traffic in any major city, you've probably seen people doing things at the wheel that you can't believe they're doing, like texting or studying something on the car seat, even watching videos. The question is what to do about it.
Lots of municipalities have tried to attack the problem by passing ordinances banning hand-held device use by drivers, but enforcing such rules is not something highway patrol officers get very excited about, and the consensus is that these ordinances have not made a big dent in the problem.
So on Nov. 23, the NHTSA announced a draft of guidelines for makers of portable devices: mobile phones, tablets, GPS display systems, you name it. Two of the new concepts that these guidelines, if followed, would introduce to the driving public are "pairing" and "Driver Mode."
Pairing refers to an electronic connection between the portable device and the vehicle's built-in displays and controls. Historically, the automakers have taken the NHTSA's recommendations seriously regarding how to incorporate safety features in cars. Although guidelines do not have the force of law, they can become law if Congress so chooses, which is why many safety features such as seat belts and air bags showed up in cars as options before they were made mandatory. In an earlier set of guidelines, the NHTSA set up rules for built-in instrumentation that would meet the agency's non-distraction requirements. These involve things like not requiring the driver to glance away from the road for more than two seconds at a time, and so on. The agency's reference maximum distraction is tuning a radio manually; anything that distracts you more than that is basically regarded as too much.
Assuming the car's built-in controls and displays meet that criterion, pairing basically ports the portable device's controls to the car's built-in controls, which automatically meet the distraction guidelines already. Maybe this sounds easy to a regulatory agency, but to this engineer, it sounds like a compatibility nightmare. For pairing to work most of the time, every portable device that anyone is likely to use in a car will have to be able to communicate seamlessly with the wide variety of in-car systems, and be able to use those systems as a remote command and control point instead of the device's own controls and displays. Maybe it can be made to work, but at this time it looks like a long shot. And even if it does, you have the problem of those die-hards (such as yours truly) who cling to cars that are ten or fifteen years old and will never catch up to the latest technology. (Those folks tend not to buy the latest portable devices either, but there are exceptions.)
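As a thought experiment, the delegation that pairing implies might look something like the following minimal sketch. Everything here (the class names, the methods, the protocol itself) is an assumption of mine for illustration; it does not reproduce any real pairing standard.

```python
# Hypothetical sketch of "pairing": a portable device routes its user
# interface through the car's built-in (already distraction-compliant)
# display instead of its own screen. All names and the protocol itself
# are invented for illustration; no real standard is implied.

class HeadUnit:
    """Stand-in for a car's built-in display and controls."""
    def __init__(self):
        self.screen = ""

    def render(self, text):
        # The car presents the device's output in a form that already
        # meets the in-car distraction guidelines.
        self.screen = text

class PortableDevice:
    """A phone or tablet that may or may not be paired with a car."""
    def __init__(self):
        self.head_unit = None
        self.own_screen = ""

    def pair(self, head_unit):
        self.head_unit = head_unit

    def show(self, text):
        if self.head_unit is not None:
            self.head_unit.render(text)   # paired: use the compliant display
        else:
            self.own_screen = text        # unpaired: device's own screen
```

The compatibility nightmare described above is exactly what this toy version glosses over: in practice that `render` call would have to be a negotiated protocol working across every combination of device and in-car system, not a simple method call.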
Recognizing that pairing won't solve all the problems, the next step is Driver Mode. This is an operational mode that goes into effect when the device figures out it's in a moving car. Most new portable gizmos these days have built-in GPS systems, and so they can detect vehicle motion without much of a problem, although there might be issues with things like rides on a ferry boat and so on. But those situations are rare enough to be negligible. Once in Driver Mode, the device will refuse to let the user do things like texting, watching videos, and other activities that distract more than the reference tuning-the-radio operation would.
One can foresee problems with Driver Mode as well. The NHTSA says the user should be able to switch it off, and if this option is available, my guess is a lot of people will choose to disable Driver Mode altogether. A determined distracted driver is going to find a way to text while driving no matter what, but the hope is that with these new measures in place—pairing and Driver Mode, mainly—the number of incidents of distracted driving will decrease, and we will resume our march to fewer traffic accidents that has been going on historically for the last several decades.
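The detect-motion-then-block logic described in the last two paragraphs could be sketched roughly as follows. The speed threshold, the sample window, the list of blocked activities, and all names are assumptions of mine, not anything specified in the NHTSA draft.

```python
# Hypothetical sketch of "Driver Mode": a device watches GPS-derived
# speed and, once it decides it is in a moving car, blocks activities
# more distracting than the radio-tuning benchmark. All thresholds and
# names are illustrative assumptions, not from the NHTSA guidelines.

WALKING_SPEED_MPH = 4   # below this, assume the user is not in a moving car
SAMPLE_WINDOW = 5       # consecutive readings required before switching modes

class DriverModeMonitor:
    def __init__(self, user_opt_out=False):
        # The guidelines would let the user switch Driver Mode off entirely.
        self.user_opt_out = user_opt_out
        self.recent_speeds = []
        self.driver_mode = False

    def report_speed(self, speed_mph):
        """Feed one GPS speed reading; returns the current mode."""
        self.recent_speeds.append(speed_mph)
        if len(self.recent_speeds) > SAMPLE_WINDOW:
            self.recent_speeds.pop(0)
        if self.user_opt_out or len(self.recent_speeds) < SAMPLE_WINDOW:
            return self.driver_mode
        # Require several consistent readings so a single GPS glitch
        # doesn't flip the mode back and forth.
        if all(s > WALKING_SPEED_MPH for s in self.recent_speeds):
            self.driver_mode = True
        elif all(s <= WALKING_SPEED_MPH for s in self.recent_speeds):
            self.driver_mode = False
        return self.driver_mode

    def is_blocked(self, activity):
        """Block activities more distracting than manually tuning a radio."""
        distracting = {"texting", "video", "web_browsing"}
        return self.driver_mode and activity in distracting
```

Even this toy version shows the design tensions the post raises: the opt-out flag defeats the whole mechanism if the user sets it, and a pure speed test cannot tell a driver from a passenger, or a car from a ferry.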
While the NHTSA deserves credit for encouraging device makers and car manufacturers to consider these ideas, it is not clear that there is a lot of enthusiasm for them, especially on the part of the mobile phone makers. Automakers selling big-ticket cars can more easily adapt their products to the different requirements of different legal regimes in the U. S. and, say, France. But piling a bunch of complicated pairing features onto phones sold only in the U. S. may not be an easy thing to convince phone makers to do. Unless the U. S. initiative proves so popular that it becomes a global phenomenon, my guess is that mobile phone makers will resist building in the pairing function, especially because they would have to deal with a bewildering variety of host controls and displays in cars that would be hard to keep up with.
This issue is just one aspect of the huge upheaval in the auto industry that IT is causing right now. Integrating cars with the Internet and portable devices, and making sure in-car displays work without causing wrecks, are only two of the many challenges that car makers face in this area. Ironically, the move toward driverless cars, if successful, would render all the driver-distraction precautions pointless anyway. If the driver's not doing anything, it's fine to let him or her be distracted. That's Google's hope, anyway, in developing driverless cars: less time paying attention to driving means more time on the Internet.
The hope is that all the confusion will eventually settle down, or at least we will make the transitions to highly IT-intensive cars that are still at least as safe to drive as the older ones, if not safer—until we don't have to drive them at all. But it looks like right now, at least, car makers will have to aim simultaneously at two targets that are moving in opposite directions.