Monday, December 31, 2007

Threats, Rumors, the Internet, and Banks

Well, it's finally happened. I am in possession of some information which may be completely unreliable, but on the other hand is not public knowledge. And it has something to do with engineering ethics, broadly defined. (That's the only way it's defined in this blog—broadly.)

Here it is: About six weeks ago, a U. S. Congressperson went around telling a few of her friends to get as much money out of the bank as they could, since the credit and banking computer systems were under a significant terrorist threat. One of the people the Congressperson told, told my sister, and yesterday my sister told me. (That's pretty stale news for an Internet blog, I realize, but hey, I use what I can get.) It's quite possible that the threat, if it ever existed, has disappeared by now. But it did stimulate me to ask the question, "What are the chances that a concerted terrorist attack on the credit and banking computer systems would succeed in shutting down the U. S. economy?"

So far, in the very limited research I've done, I can't find anybody who has addressed that question recently in so many words. But I turned up a few things I didn't know about, and so I'll share them with you.

The vast majority of cybercrimes committed in this country result not in nationwide crises, but in thousands or millions of consumers losing sums varying from a few cents to thousands of dollars or more. False and deceptive websites using the technique known as "phishing" capture much of this ill-gotten gain. These can range from quasi-legal sites that simply sell something online that's available elsewhere for free if you look a little harder (I fell for this one once), down to sophisticated sites that imitate legitimate organizations such as banks and credit card companies with the intention of snagging unsuspecting consumers' credit information and cleaning out their electronic wallets. While these activities are annoying (or worse if you happen to be a victim of identity theft and get your credit rating loused up through no fault of your own), they in themselves do not pose a threat to the security of the U. S. economy as a whole.

What we're talking about is the cybercrime equivalent of a 9/11: a situation in which nobody (or almost nobody) could complete financial transactions using the Internet. Since a huge fraction of the daily economic activity of the nation now involves computer networks in some way or other, that would indeed be a serious problem if it went on for longer than a day or two.

The consequences of such an attack can be judged by what happened after the real 9/11 in 2001, when the entire aviation infrastructure was closed down for a few days. The real economic damage came not so much from that "airline holiday" (although it hurt) as from the reluctance to fly that millions of people felt for months afterward. This landed the airline industry in a slump from which it is only now recovering.

A little thought will show that a complete terrorist-caused shutdown isn't necessary to produce the desired effect (or undesired, depending on your point of view), even if it were possible, which it may not be, given the distributed and robust nature of the Internet. Say some small but significant fraction—even as little as 1% to 3%—of online financial transactions began going completely astray. I try to buy an MPEG file online for 99 cents, and I end up getting a bill for $403.94 for some industrial chemical I never heard of. Or stuff simply disappears and nobody has a record of it, and no way of telling if it got there. That is the essence of terrorism: do a very small and low-budget thing that does some spectacular damage and scares everybody into changing their behavior in a pernicious way. If such minor problems led even ten percent of the public to quit buying things online, you'd have an instant recession.

Enough of this devil's advocacy. Now for the good news. There is an outfit called the Financial Services Information Sharing and Analysis Center (FSISAC). It was founded back in 1999 to provide the nation's banking, credit, and other financial services organizations with a place to share computer security information. Although it has run across some roadblocks—in 2002, one Ty Sagalow testified before the Senate about how FSISAC needed some exemptions from the Freedom of Information Act and antitrust laws in order to do its job better—the mere fact that six years after 9/11, we have not suffered a cyberterrorist equivalent of the World Trade Center attacks says that somebody must be doing something right.

You may have seen the three-letter abbreviation "SSL" on some websites or financial transactions you have done online. That stands for "Secure Sockets Layer," and if you've been even more sharp-eyed and seen a "VeriSign" logo, that means the transaction was safeguarded by FSISAC's service provider, VeriSign, out of Mountain View, California. I'm sure they employ many software engineers and other specialists to keep ahead of those who would crack the security codes that protect Internet financial transactions, and it's not an easy job. But as bad as identity theft and phishing are these days, they would be much worse without the work of VeriSign and other similar organizations.
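For readers who like to peek under the hood, here is a minimal sketch of what that handshake buys you, written in present-day Python (the standard ssl module shown here postdates this post): before any account numbers change hands, the site has to present a certificate vouched for by a trusted authority, or the connection is refused. The hostname is just a placeholder, not any particular bank.

```python
# Minimal sketch of certificate-checked TLS ("SSL") -- illustrative only.
# "www.example.com" is a placeholder host.
import socket
import ssl

hostname = "www.example.com"

# create_default_context() loads the operating system's trusted root certificates,
# the modern descendants of authorities like VeriSign.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as raw_sock:
    # wrap_socket() performs the handshake; it raises an error if the certificate
    # is untrusted, expired, or issued to some other site.
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        cert = tls_sock.getpeercert()
        print("Protocol:", tls_sock.version())
        print("Issued to:", dict(item[0] for item in cert["subject"]))
        print("Issued by:", dict(item[0] for item in cert["issuer"]))
```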

If the truth be told, much cybercrime is made easier by the stupid things some consumers do, such as giving out their credit card numbers, passwords, and Social Security numbers to "phishy"-looking websites, or in response to emails purporting to be from their bank or credit card company. Any financial organization worth its salt guards passwords and the like as if they were gold, and never has to stoop to the expedient of emailing its customers to say, "Oh, please remind us of your password again, we lost it." But as H. L. Mencken (the line is often credited to P. T. Barnum) is supposed to have said, no one has ever gone broke underestimating the intelligence of the American public. Or maybe it was taste, not intelligence. Anyway, don't fall for such stunts.

The FSISAC has a handy pair of threat-level monitors on its main website, with colors that run from green to blue, yellow, orange, and red. As of today, the risk of cyber attacks is blue ("guarded") and the risk of physical terrorist attacks is yellow ("elevated," meaning significant). I'm not sure what you're supposed to do with that information, but you might sleep better tonight after the New Year's Eve celebration knowing that your online money and credit are—reasonably—safe. Happy New Year!

Sources: The FSISAC website showing the threat-level displays is at http://www.fsisac.com/. VeriSign's main website is http://www.verisign.com/. Mr. Sagalow's testimony before the U. S. Senate in May of 2002 is reproduced at http://www.senate.gov/~govt-aff/050802sagalow.htm.

Wednesday, December 26, 2007

Let There Be (Efficient) Light

Like many of us, the U. S. Congress often puts off things till the last minute. Last week, just before breaking for the Christmas recess, our elected representatives passed an energy bill. Unlike earlier toothless bills, this one will grow some teeth if we wait long enough and don't let another Congress pull them first. Besides an increase in the CAFE auto-mileage standards, the bill will make it illegal by 2012 to sell light bulbs that don't meet a certain efficiency standard. And most of today's incandescent bulbs can't meet it.

Now what has this got to do with engineering ethics? You could argue that there are no ethical dilemmas or problems here. You could say it's legal, and therefore ethical, to design, make, and sell cheap, inefficient light bulbs right up to the last day before the 2012 deadline, and that thereafter it will be illegal, and therefore unethical, to do so. No ambiguities, no moral dilemmas, cut and dried, end of story. But stating the problem that baldly shows that more thought has to go into it than that.

For example, systems of production and distribution don't typically turn on a dime. One reason the legislators put off the deadline five years into the future is to give manufacturers and their engineers plenty of time to plan for it. And planning, as anyone who has done even simple engineering knows, is not always a straightforward process. To the extent that research into new technologies will be required, planning can be highly unpredictable, and engineers will have to exercise considerable judgment in order to get from here to there in time with a product that works and won't cost too much to sell. That kind of thing is the bread and butter of engineering, but in this case it's accelerated and directed by a legal mandate. And I haven't even touched the issue of whether such mandates are a good thing, even if they encourage companies to make energy-efficient products.

In the New York Times article that highlighted this law, a spokesman for General Electric (whose origins can be traced directly back to incandescent light-bulb inventor Thomas Edison) was quoted as claiming that his company is working on an incandescent bulb that will meet the new standards. Maybe so. There are fundamental physical limits to that technology (a glowing filament radiates most of its energy as invisible infrared heat rather than visible light) which will make it hard for any kind of incandescent to compete with compact fluorescent units, let alone the advanced light-emitting diode (LED) light sources that may be developed shortly. But fortunately, Congress didn't tell companies how to meet the standard—it just set the standard and is letting the free market and its engineers figure out how to get there.
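To put some rough numbers on that efficiency gap (these are my own ballpark figures, not anything from the bill or the Times article), here is a back-of-the-envelope comparison of what it takes to produce roughly the same amount of light, about 800 lumens, with each technology:

```python
# Ballpark comparison of lamp technologies -- all figures are assumptions for
# illustration, not data from the energy bill or from General Electric.
lamps = {
    # name: (watts for roughly 800 lumens, approximate efficacy in lumens per watt)
    "incandescent (60 W class)": (60, 14),
    "halogen incandescent":      (43, 19),
    "compact fluorescent":       (14, 60),
    "LED (projected)":           (12, 70),
}

hours_per_year = 1000   # assumed usage: about three hours a day
cents_per_kwh = 10      # assumed electricity price

for name, (watts, efficacy) in lamps.items():
    annual_cost = watts * hours_per_year / 1000 * cents_per_kwh / 100
    print(f"{name:27s} {efficacy:3d} lm/W   about ${annual_cost:.2f} per year")
```

Even with generous assumptions for an improved incandescent, arithmetic like this favors the newer technologies by a factor of three or four.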

I have not seen the details of the new law, but I assume there are exemptions for situations where incandescents will still be needed. For example, in the theater and movie industries there is a huge investment in lighting equipment that uses incandescents and that would be difficult or impossible to adapt to fluorescent units for technical reasons. Like the sun, an incandescent filament emits a smooth, continuous spectrum, and for accurate color rendition that broad spectrum is hard to beat. And I have a feeling—just a feeling—that, like candles, incandescent light bulbs will be preserved in special cultural settings: displays of antique lighting and period stage sets, perhaps. Surely there will be a way to deal with that without resorting to the light-bulb equivalent of a black market.

But most of these problems are technical challenges that admit technical solutions. One of the biggest concerns I have is an esthetic one: the relative coldness of fluorescent or LED light compared to incandescent light. This is a matter of the spectral balance of intensity in different wavelengths. For reasons having to do with phosphor efficiencies and the difficulty of making red phosphors, it's still hard to find a fluorescent light that has the warm reddish-yellow glow of a plain old-fashioned light bulb, which in turn recalls the even dimmer and yellower gleam of the kerosene lantern or candle. Manufacturers may solve this problem if there seems to be enough of a demand for a warm-toned light source, but most people probably don't care. For all the importance light has to our lives, we Americans are surprisingly uncritical and accepting of a wide range of light quality, from the harsh glare of mercury and sodium lamps to the inefficient but friendly glow of the cheap 60-watt bulb. I'm not particularly looking forward to getting rid of the incandescent bulbs in my office that I installed specially as a kind of protest against the harsh fluorescent glare of the standard-issue tubes in the ceiling. But when it gets to the point where I have to do it, I hope I can buy some fluorescent replacements that mimic that warm-toned glow, even if I know the technology isn't the same.

Sources: The New York Times article describing the light-bulb portion of the energy bill and its consequences can be found at http://www.nytimes.com/2007/12/22/business/22light.html. A February 2007 news item describing General Electric's announcement of high-efficiency incandescent lamp technology (though not giving any technical details) is at http://www.greenbiz.com/news/news_third.cfm?NewsID=34635.

Monday, December 17, 2007

Lead in the Christmas Tree Lights—When Caution Becomes Paranoia

Who would have thought? Lurking there amid the gaily colored balls, the fresh-smelling piney-woods aroma of the Douglas fir, and the brilliant sparks of light twinkling throughout, is the silent enemy: lead. Or at least, something like that must have been going through the mind of the reader who wrote in to the Austin American-Statesman after she read a caution tag on her string of Christmas-tree lights. According to her, it said "Handling these lights will expose you to lead, which has been shown to cause birth defects." Panicked, she rushed back to the store where she bought them to see if she could find some lead-free ones, but "ALL of them had labels stating that they were coated in lead! This is terrifying news for a young woman who is planning to start a family!"

The guy who writes the advice column in which this tragic screed appeared said not to worry, but to be sure to wash your hands after handling the lights. He based his advice on information from Douglas Borys, who directs something called the Central Texas Poison Center.

In responding to the woman's plight, Mr. Borys faced a problem that engineers have to deal with too: how to talk about risk in a way that is both technically accurate and understandable and usable by the general public. We have to negotiate a careful passage between the rock of purely accurate technological gibberish, and the hard place of telling people there's nothing to worry about at all.

In the case of lead, there is no doubt that enough lead in the system of a child, or the child's mother before it is born, can cause real harm. The question is, how much is "enough"?

Well, going to the technical extreme, the U. S. Centers for Disease Control and Prevention issued a report in 2005 supporting the existing "level of concern" that a child's blood not contain more than 10 micrograms of lead per deciliter (abbreviated as 10 µg/dL). No studies have shown consistent, definitive harm to children with that low an amount of lead in their systems. Just to give you an idea of how low this is, the typical adult in the U. S. has between 1 and 5 µg/dL of lead in his or her blood, according to a 1994 report. The concern about pregnant (or potentially pregnant) women getting lead into their systems is that the fetus is abnormally sensitive to lead compared to older children and adults, although exactly how much more sensitive isn't clear, since we obviously can't do controlled experiments on pregnant women to find out.

Now if you tried to print the preceding paragraph in a daily paper, or a blog for general consumption, or (perish the thought!) read it on the TV news, you'd probably get fired. Why? Because using phrases like "micrograms per deciliter" has the same effect on most U. S. audiences as a momentary lapse into Farsi. People don't understand it and tune you out. But unfortunately, if you want to talk about scientifically indisputable facts, you have to start with nuts-and-bolts questions like how many atoms of lead are in a person's body and where they came from. These are things that scientists can measure and quantify, but the general public cannot understand them, at least not without a lot of help. So it all has to be interpreted.

So to go to the other extreme of over-interpretation, the expert from the poison center could have said something like, "Aaahh, fuggedaboudit! Do you smoke? Does your house have old lead paint? Do you ever drive without seatbelts, or talk on your cell phone and drive at the same time? Are you significantly overweight? If any of these things is true, you're far more likely to die from one of them than from any possible harm that might come to you or your hypothetical children from handling Christmas-tree lights with a tiny bit of lead at each solder joint, covered up underneath insulation and probably not accessible to the consumer at all under any normal circumstance."

In saying these things, the expert would have been entirely correct, but probably would have come across as less than sympathetic, shall we say. A Time Magazine article back in November 2006 pointed out that because of the way our brains process information, we tend to overreact to certain kinds of hazards and ignore others that we'd be better off paying attention to. Unusual hazards and dangers that take a long time to show their insidious effects worry us more than things we're used to or things that get us all at once (like heart attacks or car wrecks). The woman's worry fits both of these categories: the last thing she was thinking about as she decorated her Christmas tree was exposing herself to a poisoning hazard, and lead poisoning takes a while to show its effects.

As far as the expert's advice goes, I'd say he walked a reasonable line between the two extremes. Giving people something to do about a hazard (such as handwashing) always helps psychologically, even when, as in this case, there was essentially no hazard to begin with. And blowing off the danger altogether is generally regarded as irresponsible, because one of the iron-clad rules of technical discourse is that nothing is entirely "safe."

Well, here's hoping that your thoughts of Christmas and the holiday season will be uncontaminated by worries about lead or any other poison—chemical, mental, or otherwise.

Sources: The column "Question Everything" by Peter Mongillo appeared in the Dec. 17, 2007 edition of the Austin American-Statesman. The online edition of Time Magazine for November 2006 carried the article "How Americans Are Living Dangerously" by Jeffrey Kluger at http://www.time.com/time/magazine/article/0,9171,1562978-1,00.html
. And the U. S. Centers for Disease Control and Prevention carries numerous technical articles on lead hazards and prevention, including a survey of blood lead levels at http://www.cdc.gov/mmwr/preview/mmwrhtml/mm5420a5.htm#tab1.

Monday, December 10, 2007

The Human Side of Automated Driving

The graphic caught my eye. It showed a 1950s-mom type looking alarmed as she sat beside a futuristic robot driving an equally improbable-looking car. The headline? "In the Future, Smart People Will Let Cars Take Control." Which implies, of course, that only dumb people won't. But I'm not sure that's what the author had in mind.

John Tierney wrote in last week's online edition of the New York Times that we are getting closer each year to the point where completely automated control of automobiles in realistic driving situations will become a reality, at least from the technological point of view. The Defense Advanced Research Projects Agency has been running a series of driverless-car races, its Grand Challenges, since 2004. That first year, despite a relatively unobstructed route across the Mojave Desert, none of the vehicles got much farther than seven miles before breaking down, crashing, or otherwise dropping out of the race. But this year, six vehicles finished a far more challenging sixty-mile urban course that included live traffic. Experts say that in five to fifteen years, using technologies ranging from millimeter-wave radar to GPS and artificial-intelligence decision systems, it will be both practical and safe to hand control of a properly equipped vehicle over to the equivalent of a robot driver for a good part of many auto trips. But will we?

There is that in humans which is glad for help, but rebels at a complete takeover. While we have been smoothly adapting to incremental automation of cars for decades, handing over the whole job is a different matter. Almost nobody objected in the 1910s and '20s to the introduction of what was then called the "self-starter," which replaced turning a crank at the front of the car with pushing a button or turning a key. (The only people who grumbled about it back then were men who liked the fact that most women were simply not strong enough to start a car the old-fashioned way, and therefore couldn't drive!) Automatic transmissions came next, and have taken over the U. S. market almost completely, though drivers in much of the rest of the world (again, men, mostly) still take pride in shifting for themselves. Power steering, power brakes, anti-lock braking, and cruise control are all automatic systems that we have adopted almost without a quibble. But I think most people will at least stop to think before they press a button that relinquishes total control of the vehicle to a computer, or robot, or servomechanism, or whatever we'll choose to call it.

And well they might hesitate. Tierney notes that automatically piloted vehicles can follow much more closely in safety than cars being driven by humans. He cites a recent experiment in which engineers directed a group of driverless passenger cars to drive at 65 m. p. h. spaced just fifteen feet apart, with no untoward results. This has obvious positive implications for increasing the capacity of existing freeways. But he doesn't say if the interstate was cleared of all other traffic for this experiment. As for safety, automatic vehicle control doesn't have to be perfect—only better than what we have now, a system in which over 42,000 people died on U. S. roadways alone in 2006, the vast majority because of accidents due to human error rather than mechanical failures.
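Just to put that fifteen-foot figure in perspective, here is a back-of-the-envelope calculation (my arithmetic, not anything from Tierney's article) of the time gap it implies:

```python
# Time headway implied by a 15-foot gap at 65 mph -- simple arithmetic, for illustration.
speed_mph = 65
gap_ft = 15

speed_fps = speed_mph * 5280 / 3600   # about 95 feet per second
headway_s = gap_ft / speed_fps        # about 0.16 seconds

print(f"Speed: {speed_fps:.0f} ft/s")
print(f"Time gap between cars: {headway_s:.2f} s")
# Typical human perception-reaction time is on the order of 1 to 1.5 seconds,
# so a human driver in that platoon would be trusting the car ahead on faith alone.
```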

If we are going to go to totally automatic control for automobiles, it seems like there will have to be a systematic effort to organize the times, places, and conditions under which this kind of control can be used. You can bet that the fifteen-foot-spacing experiment would have failed spectacularly if even one of those cars were driven by a human. The great virtue of machine control is that it's much more predictable than humans, who can be distracted by anything from a stray wasp to a cell phone call and do anything or nothing as a consequence. One expert imagines that we will have special total-control lanes on freeways much like high-occupancy-vehicle lanes today, and no manually controlled vehicles will be allowed inside such lanes.

That's one way to do it, certainly. But I for one look forward to the day when we have door-to-door robot chauffeurs. I would like nothing better than to get in my car, program in my destination, and then sit back and read or work or listen to music or enjoy the scenery, or in fact any of the other things I can do right now on a train ride, which is at present practical transportation in the U. S. only in the northeast corridor. For decades we have fussed about the urban sprawl caused by the automobile and how much better things are handled (according to some) when public transportation is used instead of cars. It may be that automatic vehicle control will provide some kind of third way that will alleviate at least some of the problems caused by the automobile. If we can let go of the control thing, maybe we can do something similar with the ownership thing too, although as long as people want to work in cities and live in the country, you're going to have to find some way to get millions of bodies into the city in the morning and back to the country in the evening. But if we could space vehicles safely only fifteen feet apart and let them go sixty or eighty m. p. h. on the freeways, and come up with some software that would deal with traffic jams and other unpredictable but inevitable problems, commuting might become safer, more fuel-efficient, and more pleasant all at once.

Before many more of these futuristic visions happen, though, we are going to have to change some of our attitudes. There are sure to be a few drive-it-myself-or-nothing folks who will say that we'll have to pry their cold, dead fingers off the steering wheel before we can get them to agree to use totally automated driving. And if the thing isn't handled well politically, such a minority could spoil a potentially good thing for the rest of us. The right to drive your own car with your own hands on the steering wheel is one of those assumed rights that we accept almost without thinking about it, but if the day comes when it is more of a hazard than a public good, we may have to think about it twice—and then give it up.

Sources: The New York Times online article referred to appeared on Dec. 4, 2007 at http://www.nytimes.com/2007/12/04/science/04tier.html. Tierney refers to a University of California Transportation Center article by Steven Shladover published in the Spring 2000 edition of the center's Access Magazine (http://www.uctc.net/access/access16lite.pdf).

Monday, December 03, 2007

Can Robots Be Ethical? Continued

Last week I considered the proposal of sci-fi writer Robert Sawyer, who wants robots to be recognized as moral agents with rights and responsibilities. He looks forward to the day when "biological and artificial beings can share this world as equals." I said that this week I would take up the distinction between good and necessary laws regulating the development and use of robots as robots, and the unnecessary and pernicious idea of treating robots as autonomous moral agents. To do that, I'd like to look at what Sawyer means by "equal."

I think the sense in which he uses that word is the same sense that is used in the Declaration of Independence, which says that "all men are created equal." That was regarded by the writers of the Declaration as a self-evident truth, that is, one so plainly true that it needed no supporting evidence. It is equally plain and obvious that "equal" does not mean "identical." Then as now, people are born with different physical and mental endowments, and so what the Founders meant by "equal" must have been something other than "identical in every respect."

What I believe they meant is that, as human beings created by God, all people deserve to receive equal treatment in certain broad respects, such as the rights to life, liberty, and the pursuit of happiness. That is probably what Sawyer means by equal too. Although the origin and nature of robots will always be very different than those of human beings, he urges us to treat robots as equals under law.

I suspect Sawyer wants us to view this question in the light of what might seem to be its great historical parallel, that is, slavery. Under that institution, some human beings treated other human beings as though they were machines: buying and selling them and taking the fruits of their labor without just compensation. The deep wrong in all this is that slaves are human beings too, and it took hundreds of years for Western societies to accept that fact and act on it. But acting on it required a solid conviction that there was something special and distinct about human beings, something that the abolition of slavery acknowledged.

Robots are not human beings. Nothing that can ever happen will change that fact—no advances in technology, no degradation in the perception of what is human or what is machine, nothing. It is an objective fact, a self-evident truth. But just as human society took a great step forward in admitting that slaves were people and not machines, we have the freedom to take a great step backward by deluding ourselves that people are just machines. Following Sawyer's ideas would take us down that path. Why?

Already, it is a commonly understood assumption among many educated and professional classes (but rarely stated in so many words) that there is no essential difference between humans and machines. There are differences of degree—the human mind, for example, is superior to computers in some ways but inferior in other ways. But according to this view, humans are just physical systems following the laws of physics exactly like machines do, and if we could ever build a machine with the software and hardware that could simulate human life, then we would have created a human being, not just a simulation.

What Sawyer is asking us to do is to acknowledge that point of view explicitly. Just as the recognition of the humanity of slaves led to the abolition of slavery, the recognition of the machine nature of humanity will lead to the equality of robots and human beings. But look who moved this time. In the first case, we raised the slaves up to the level of fully privileged human beings. But in the second, we propose to lower mankind to the level of just another machine. There is no other alternative, because admitting machines to the rights and responsibilities of humans implicitly acknowledges that humans have no special characteristic that distinguishes them from machines.

Would you like to be treated like a machine? Even a machine with "human" rights? Of course not. Well, then, how would you like to work for a machine? Or owe money to a machine? Or be arrested, tried, and convicted by a machine? Or be ruled by a machine? If we give machines the same rights as humans, all these things not only may, but must come true. Otherwise we have not fully given robots the same rights and responsibilities as humans.

There is a reason that most science fiction dealing with robots portrays the dark side of what might happen if robots managed to escape the full control of humans (or even if they don't). All good fiction is moral, and the engine that drives robot-dominated dystopias is the horror we feel at witnessing the commission of wrongs on a massive scale. Add to that horror the irony that these stories always begin when humans try to achieve something good with robots (even if it is a selfish good), and you have the makings of great, or at least entertaining, stories. But we want them to stay that way—just stories, not reality.

Artists often serve as prophets in a culture, not in any necessarily mystical sense, but in the sense that they can imagine the future outcomes of trends that the rest of us less sensitive folk perceive only dimly, if at all. We should heed the warnings of a succession of science fiction writers from Isaac Asimov to Arthur C. Clarke and onward, that there is great danger in granting too much autonomy, privileges, and yes, equality, to robots. In common with desires of all kinds, robots make good slaves but bad masters. As progress in robotic technology continues, a good body of law regulating the design and use of robots will be needed. But of supreme importance is the philosophy upon which this body of law is erected. If at the start we acknowledge that robots are in principle just advanced cybernetic control systems, essentially no different than a thermostat in your house or the cruise control on your car, then the safety and convenience of human beings will come first in this body of law, and we can employ increasingly sophisticated robots in the future without fear. But if the laws are built upon the wrong foundation—namely, a theoretical idea that robots and humans are the same kind of entity—then we can look forward to the day that some of the worst of science fiction's robotic dystopias will happen for real.

Sources: Besides last week's blog on this topic, I have written an essay ("Sociable Machines?") on the philosophical basis of the distinction between humans and machines, which I will provide upon request to my email address (available at the Texas State "people search" function on the Texas State University website www.txstate.edu).

Monday, November 26, 2007

Can Robots Be Ethical?

Earlier this month, Canadian science-fiction writer Bob Sawyer attracted a lot of attention with an editorial he wrote for a special robotics issue of the prestigious research journal Science. In his piece, Sawyer showed that writers of science fiction have been exploring the relationship between humans and robots at least since the early stories of Isaac Asimov in the 1940s. But far from coming up with a tidy solution to the moral implications of autonomous, seemingly intelligent machines, the sci-fi crowd appears to have concentrated on the dismal downsides of what could go wrong with robots despite the best intentions of humans to make them safe and obedient. Think Frankenstein, only with Energizer-Bunny endurance and superhuman powers.

Nevertheless, Sawyer is an optimist. He applauds the efforts of South Korea, Japan, and the European Robotics Research Network to develop guidelines for the ethical aspects of robot use, and chides the U. S. for lagging in this area. He uses phrases like "robot responsibilities and rights" and speculates that the main reason this country hasn't developed robot ethics is that many robots or robot-like machines are used by the military. He wants us specifically to explore the question of whether "biological and artificial beings can share this world as equals." He winds up with the hope that we might all aspire to the outcome of a 1938 story in which a man married a robot. That is, he looks forward to the time that all of us, like the lovers in countless fairy tales, can be "living, as one day all humans and robots might, happily ever after."

Well. Hope is a specifically human virtue, and is not to be thoughtlessly disparaged. But Sawyer has erred in blurring some vitally important distinctions that often get overlooked in discussions about the present and future role of robots in society.

I do not know anything about Sawyer's core beliefs and philosophy other than what he said in his editorial. But I hope he writes his fiction more carefully than he writes editorials.

The key question about any machine we call a robot is whether it is under the control of a human being, and to what extent that control is exercised. Sawyer begins his editorial with the story of how a remotely piloted vehicle dropped a bomb on two people who looked like they were planting an explosive device in Iraq. He terms this vehicle, which was undoubtedly under the continuous control of a human operator, a "quasi-robot." No doubt it contains numerous servomechanisms to relieve the operator of tedious hand-controlled steering and stabilization duties, but to call a remotely controlled bomber a "quasi-robot" is to grant it a degree of autonomy that it does not possess.

Autonomy is a relative term. There is no entirely autonomous (the word's roots mean "self-governing") being in the universe except God. The issue of autonomy is a red herring that distracts attention from the real question, which is this: is it even possible for a human-made machine to possess moral agency?

Now I've got to explain what moral agency is. We are used to the idea that children below a certain age are not allowed to enter into contracts, marry, smoke, or drink. Why is this? Because society has rendered a judgment that they are in general not mature enough to exercise independent (autonomous) moral judgment about these matters. They are not old enough to be regarded as moral agents in every respect under law. Of course, even young children seem to have some built-in ability to make moral judgments. Isn't "That's not fair!" one of the favorite phrases in the six-year-old set? We accord certain rights and responsibilities to humans as they mature because we recognize that they, and only they, can act as moral agents.

Sawyer's mistake (or one of them, anyway) is to assume that as artificial intelligence and robotics progress, robots will mature essentially as humans do and will be able to behave like moral agents. I would point out that this achievement is far from being demonstrated. But even if moral agency is simulated some day by a robot in a realistic way indistinguishable from humanity, this fact will always be true: machines have been, are, and always will be the products of the human mind. As such, the human mind or minds which create them also possess the ultimate moral responsibility for the robot's actions and behavior, no matter how seemingly autonomous the robot becomes. So the robot can have no "rights and responsibilities"—those are things which only moral agents, namely humans, can possess.

This fact is illustrated by one of Sawyer's own examples. He cites the case of a $10 million jury award to a man who was injured back in 1979 by a robot, probably an industrial machine. You can bet that in 1979, the robot in question was no autonomous R2D2—it was probably something like one of those advanced welders that you see in automotive ads, the ones that zip around making ten welds in the time it takes a human to make one. I merely note that the injured party did not sue the robot for $10 million—he sued the robot's operators and owners, because everybody agrees that if a machine causes injury, and one or more humans are responsible for the actions of the machine, then the humans are at fault and bear the moral responsibility for the machine's actions.

Another distinction Sawyer fails to make is the difference between good and necessary laws regulating the development and use of robots as robots, and the entirely pernicious and unnecessary idea of treating robots as autonomous moral agents. But as I'm out of space for today, I will take this question up next week.

Sources: Sawyer's editorial appeared in the Nov. 16, 2007 issue of Science, vol. 318, p. 1037. I addressed some issues related to the question of robot ethics in my blog "Are Robots Human? or, Are Humans Robots?" for July 30, 2007. Bob Sawyer's webpage is at http://sfwriter.com.

Monday, November 19, 2007

Yahoo Pays—A Little—for Internet Censorship in China

Shi Tao is still languishing in a Chinese prison. But now Yahoo, the company that helped put him there, has to pay something for what they did.

Until November of 2004, Shi was a journalist working for a Chinese business journal. Earlier that year, his newspaper received a message from the Chinese government warning the journal not to run stories on the 15th anniversary of the Tiananmen Square massacre of 1989. Shi emailed a copy of this message to an editor at Democracy News, a New York-based human-rights organization. Chinese government officials found out about the email and pressured Yahoo, Shi's internet service provider, to reveal the identity of the email's author. Yahoo did so, and on Nov. 24, 2004, agents of the government arrested Shi in the northern city of Taiyuan. He was convicted the following April of revealing "state secrets" and has been in jail ever since. In a similar case, Yahoo revealed the identity of engineer Wang Xiaoning, who had circulated pro-democracy writings online. He suffered the same fate as Shi Tao, but Wang's wife Yu Ling decided not to take this lying down.

After years of delay trying to obtain court documents, Yu Ling filed suit in a California court against Yahoo last April. And last week, Yahoo announced that the suit had been settled out of court, though few details were released other than the fact that Yahoo executives promised they would do "everything they can to get the men out of prison." In a fight between a totalitarian sovereign government and the CEO of one U. S. company, I think it is fair to say the odds are stacked against the company—and any prisoners the company is trying to help.

A lot of engineering ethics involves shades of gray, ambivalent situations, and other complexities. That is not the case here. At stake is the question of whether freedom to criticize one's government is a good thing or not. The founders of the United States believed it was. It is a principle enshrined in the U. S. Constitution and defended to what some might view as an absurd degree today. If it is a good thing in one culture or state, it is a good thing everywhere. That freedom is just as valuable and worth protecting in Shanghai as it is in Peoria.

So what happens to your respect for this freedom if you run a large multinational company eager to profit from the giant potential market that is China? It appears that you agree to whatever compromises with freedom the communist government demands of you, up to and including divulging the identities of email account holders. Internet service providers in this country also divulge account names to law enforcement officials from time to time, but only under court orders and with regard to what is likely to be truly criminal activity. Posting a blog entry saying you don't like George Bush will not get you sent to jail here. But as we have seen, doing something similar in China will get you sent to jail there, and Yahoo helped.

Only when one of the prisoner's relatives went to great personal trouble and expense to file a lawsuit against Yahoo did that company even start to act. In the past, and as recently as last week, it has justified its betrayal of Shi and Wang by saying if it doesn't follow the Chinese government's rules, Yahoo's own employees might be in danger. Well, duh! Better our customers go to jail than us. Is this the kind of attitude you want from a company that you do business with?

Rather than admit wrongdoing or even disclose what compensation was involved in the settlement, Yahoo convinced the dissidents' families to settle out of court for an undisclosed amount plus a promise to do whatever it can to gain the prisoners' release. If I were Shi or Wang, I would not hold my breath.

This kind of thing is what happens when a corporation allows profits to overwhelm its moral sense. The pressure on publicly owned corporations to make the most money while staying just within the bounds of the law is immense. And as someone whose retirement investments include corporate stocks and bonds, I am as much a part of that problem as anyone else who invests money in today's economy. But when that legal-economic principle is allowed to trump all others, you end up with situations in which settling lawsuits for doing heinous things is simply a matter of buying off those you have injured at the lowest negotiated price. That appears to be exactly what Yahoo has done.

If you are expecting me to pull any punches here, I'm not going to. Last week we invited a Chinese graduate student and her husband over for supper at our house. Both of them were born in the People's Republic of China, but in different cities, and they met only last year when he was in his fifth year as a professor of mathematics and she was a new graduate student at Texas State University. They fell in love, got married, and now she is looking for a job. In this country there are no work committees to pass judgment on whether you can marry, what job you can take, where you can live, and so on. I did not discuss with them the reasons they emigrated to the U. S., but I think the answers are obvious.

Leaving one's native land is a terrible wrench, and these young people must have had very good reasons to abandon the land of their birth, learn a difficult foreign language, and excel academically in a strange environment. But it happens all the time. Wouldn't it be nice if stories like this could happen in China too? And some day they may, but only if the government decides to change its ways. And that will happen only when people like Shi Tao and Wang Xiaoning can make their voices heard without fear that a company based in the freedom-loving United States of America will rat on them and help send them to jail.

Sources: A report on the Yahoo settlement can be found in CNN's Asia online edition at http://edition.cnn.com/2007/WORLD/asiapcf/11/13/yahoo.china/index.html. I first commented on this issue in a blog posted here on March 30, 2006.

Monday, November 12, 2007

Safety's Sleuths: The NTSB Investigation of the Minneapolis Bridge Collapse

Bridges are not supposed to fall down. But last August 1, the 1,900-foot-long bridge that carried I-35W over the Mississippi in Minneapolis came apart and landed in the river, carrying thirteen people to their highly unexpected deaths. We fear dying from a lot of things, but it is safe to say that nobody on that bridge that day spent a lot of time worrying about whether they would die as an interstate-highway bridge fell out from under them.

That very fact attests to the rarity, in this country anyway, of major structural failures in transportation-related public works such as bridges. One reason they are so rare is that for the last four decades, the National Transportation Safety Board has investigated accidents involving the nation's transportation infrastructure. In so doing, it performs a critical task that historian of technology Henry Petroski says is essential to the continued safety of engineered structures. In "Design Paradigms," a book of engineering case studies that includes three famous bridge failures, Petroski writes, "The surest way to obviate new failures is to understand fully the facts and circumstances surrounding failures that have already occurred." That is what the NTSB is doing now with regard to the Minneapolis bridge collapse.

The same day of the collapse, the NTSB dispatched a nineteen-member "Go Team" from its headquarters in Washington, D. C. to Minneapolis. As rescue and recovery work allowed, members of this team collected many kinds of information. They used an FBI-provided three-dimensional laser scanning device and gyro-stabilized high-resolution cameras to establish exactly where the parts of the bridge came to rest after the collapse. They collected names and recollections from dozens of eyewitnesses and secured the original of the famous security-camera recording that showed part of the bridge in the act of collapsing. By the third week of August, NTSB officials had interviewed over 300 people, including over a hundred who called in to a specially arranged witness hotline. In the following month or so, critical pieces of the bridge were removed to a nearby secure site for detailed inspection and investigation.

One of the most important tools currently available to the NTSB is a powerful computational technique called "finite-element analysis." This is a way of solving the fundamental materials-science equations that describe how steel (or any other solid) behaves under complicated conditions of stress. While it can't predict exactly where cracks will occur in an overstressed beam, it can reveal the locations in a complex structure such as a bridge where the local stresses exceed the strength of the steel. It is in such locations that cracks and failures are most likely to occur.
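To give a flavor of how the method works, here is a toy one-dimensional illustration of my own, nothing like the NTSB's actual software, and with every number in it made up: a tapered steel bar fixed at one end and pulled at the other, chopped into ten elements.

```python
# Toy 1-D finite-element sketch: a tapered steel bar, fixed at one end and pulled
# at the other. All values (load, taper, yield strength) are illustrative assumptions.
import numpy as np

E = 200e9              # Young's modulus of steel, Pa
L = 2.0                # bar length, m
N = 10                 # number of elements
P = 4.0e5              # axial load at the free end, N
yield_stress = 250e6   # nominal structural-steel yield strength, Pa

le = L / N
areas = np.linspace(2.0e-3, 0.8e-3, N)   # tapering cross-section, m^2

# Assemble the global stiffness matrix for the N+1 nodes
K = np.zeros((N + 1, N + 1))
for e in range(N):
    k = E * areas[e] / le
    K[e:e+2, e:e+2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

F = np.zeros(N + 1)
F[-1] = P              # load applied at the free (last) node

# Fix node 0 and solve K u = F for the remaining nodal displacements
u = np.zeros(N + 1)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])

# Element stress = E * strain; flag locations where it exceeds the material limit
stress = E * np.diff(u) / le
for i, (s, a) in enumerate(zip(stress, areas)):
    flag = "  <-- over the limit" if s > yield_stress else ""
    print(f"element {i:2d} (A = {a*1e4:4.1f} cm^2): stress = {s/1e6:6.1f} MPa{flag}")
```

The real analysis works the same way in principle, except that the model has three dimensions, a vast number of elements, and loading data as detailed as the investigators can make it, which is why all that fieldwork matters.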

But as with any computer program, finite-element analysis software is only as good as the data you put into it. This is why the NTSB has spent the last three months gathering as much information as they can on not only the details of the bridge structure, including core samples showing how thick the deck was, but also other factors such as loading. You may recall that at the time of the collapse, a construction crew with heavy equipment was working on a portion of the bridge. The NTSB has concluded that a total of 287 tons of construction equipment and materials were on the bridge at the time of the accident. The exact location and weight of this extra loading is critical input to the computer analysis. The NTSB has made good progress in procuring such information by talking with eyewitnesses and viewing an aerial photograph taken by an airline passenger from a plane that passed over the bridge shortly before its collapse. Although the NTSB turned the disaster site over to the Minnesota Department of Transportation on Oct. 12, some thirty NTSB staffers are still working full-time on the investigation, which is not expected to be wrapped up for over a year.

Well-run operations are often taken for granted. Things could be very different. In places where there is nothing like the NTSB, disasters like this can be much more frequent, and citizens trying to affix blame have little if any recourse if something terrible happens to them or their loved ones. The NTSB could be corrupt, for example, or subject to bribes or falsification of its reports in response to political pressures. To my knowledge, however, its reputation for probity and "just-the-facts" scientific integrity is essentially spotless. This is no minor achievement, and the engineers who work for the Board have accomplished great things in the service of informing both the technical community and the general public about the reasons for tragedies such as the Minneapolis bridge collapse.

Every major engineering failure marks the start of a detective story. Accident investigation is one of the few lines of work where engineers can spend their professional lives in the role of detectives. Now and then the culprit is a true criminal, but most of the time, accidents are due to inattention, bad communications, or inadvertent mistakes rather than any active will to do harm. Nevertheless, harm is done.

We will have to wait a while longer before we have the full story of how a part of I-35W suddenly lost altitude that hot August day. But it will be a story worth waiting for, because we can learn from it how to keep accidents like it from happening again.

Sources: The NTSB posts updates periodically on its accident investigations at its website. The latest such release about the Minneapolis bridge collapse was posted on Oct. 23, 2007 at http://www.ntsb.gov/pressrel/2007/071023c.html.

Monday, November 05, 2007

Identity Theft Gets Personal, or, Licenses to Steal?

Well, it's happened to me—sort of. My identity wasn't stolen, exactly—just left out in a place it didn't belong for a few days. When the Commonwealth of Massachusetts discovered its error, it tried to fix the damage and then it let me know all about it. And as far as I know, no harm has been done. Still, it leaves me with an uncomfortable feeling.

Here's what happened. Some years ago, I decided to become a licensed professional engineer. Unlike the medical and legal professions, the engineering professions generally don't require a practitioner to be licensed, except in a few cases where an engineer involved in public works such as bridges or roads has to sign off on plans for legal liability reasons. The vast majority of engineers working in private industry and academia in this country do not have to be licensed in order to hold their jobs. (The reasons for this are interesting, but a story for another day.)

Nevertheless, if you're licensed you get a pretty certificate to put on your wall, and some university engineering departments technically require their professors to be licensed professional engineers, although I've never heard of anybody losing their job over it. At the time, I was living in Massachusetts, and so I got online and found out what I had to do to become licensed.

The conventional route is a two-step process. Undergraduate engineering students can take the engineer-in-training (EIT) exam, and if they pass they become EITs. After five years or so of practice or the equivalent, they can take a second exam and become full-fledged licensed professional engineers. For older types like me, with a lot more than four years of experience, the Massachusetts Division of Professional Licensure had an alternative: I could put together about five pounds of documentation on my career and send it in and they'd interview me, and if they thought it was enough, they would license me after that. So that was the course I adopted, and in due course I received Electrical Engineering License No. 40940.

That number is part of the public record, which, as it turns out, the Division sends out regularly in the form of computer disks when it receives requests for lists of professional engineers of various types. This is how I get all kinds of junk mail from companies selling engineering-related products, I suppose, but I don't mind that aspect of the situation too much. What I mind a little more is what prompted the letter I received from the Division last week.

For four days last September, some disks they sent out in response to requests for licensees' names and addresses also accidentally included our Social Security numbers. That is NOT supposed to be a part of the public record, and commendably, the Division caught their mistake before too much damage had been done. They called all the places they'd sent the numbers to, got them to return the disks, made them sign papers saying they didn't retain any information from the disks, and so the incident is presumably closed. Just as a precaution, however, the Division told me to call one of the national credit reporting agencies and put a fraud alert on my credit report. I may get around to doing that one of these days.

As identity thefts go, this is a pretty minor case, more of a slipup than any deliberate crime. And I must say that the Division appears to have handled it in an exemplary fashion, notifying the potential victims and so on and getting the unintended recipients of the sensitive information to promise they didn't do anything fraudulent. But it gives one pause, because I have no idea who else has my Social Security number, and how careful they are being with it, and whether they've slipped up or had stuff stolen from them without even knowing about it.

This issue is shortly going to become even more important as most medical records go online in the next few years. I'm pretty sure one of the things you are always asked for in a doctor's office is your Social Security number, and that's how many medical records are indexed. Medical records have a lot of stuff in them that's even more sensitive than Social Security numbers, and I only hope that the doctors will learn from the bankers how to protect sensitive information.

The trouble is that the motivations are different. If a crook perpetrates credit-card fraud, the consumer is liable for only the first fifty dollars, and the bank or credit card company is left holding the bag for the rest. That one law has prompted the financial sector to develop one of the most secure and reliable systems of online information transfer in the world.

Doctors and healthcare providers don't have the same kind of motivation. A breach in your medical security is no skin off their nose, so to speak. So the laws will have to be written so that the penalty for failing to protect sensitive information costs its holders more than doing a good job of data security would.

As for my little identity problem, I do believe I'll give one of those credit agencies a call. I had a very minor problem with one of them a few years ago and they fixed it with reasonable promptness, so it can't hurt to take that extra step of caution. That's an engineer for you.

Sources: More info on becoming a licensed professional engineer can be found at the website of the National Society of Professional Engineers, www.nspe.org.

Monday, October 29, 2007

Working the Bugs Out In Space

If you see metal shavings in the oil you change out of your car, that's not an encouraging sign. But what if your vehicle cost several billion dollars and is flying a couple of hundred miles above the ground at some seventeen thousand miles an hour? That is the problem faced by the engineers and astronauts trying to build the International Space Station.

News reports this week say that space shuttle Discovery mission specialist Daniel Tani opened a plastic cover on a gearbox during a spacewalk to reposition some solar panels. He was following orders from ground engineers who had noted excessive vibration and power consumption from the motors that move the 30,000-pound solar panels so as to collect the maximum amount of sunlight. Inside the box, Tani found an abundance of metal shavings, and collected some for analysis back home.

Everything is harder in space: repairs, inspections, lubrication, and even engineering and design. Although there are a few expensive giant vacuum chambers around that let engineers test satellites and other small to medium-size objects in something close to the reality of space, these don't simulate zero-G conditions. So the only way to check out most space-bound systems in 100% realistic conditions is to fasten them on a rocket and send them out there to see what happens. This is one reason that space exploration is so expensive and fraught with failures.

Readers of this blog know that I have serious reservations about the continuing use of the Space Shuttle (it ought to be replaced yesterday, not in two or three years) and the wisdom of spending billions on a space station which is too shaky for really good science and too small for really meaningful colonization of space. All the same, it's good to know that when something goes wrong on a system as big as the Space Station, you can send up a guy to take off the covers and have a look around, even if the service call costs millions of dollars. Discovery's latest trip was not only for maintenance—it is part of a tightly scheduled program to keep the Space Station's construction on track for completion by 2010.

Since this effort is costing several countries (Russia, the U. S., Japan, and Canada are major partners) both money and lives (if you count those who died in the 2003 Columbia disaster), it is only reasonable to ask what good it is doing. There is a scientific answer, an engineering answer, and a political answer. As is the nature of these things, they all blur into each other.

The scientific answer is, so far, not much. I cannot think of a single major scientific discovery that has resulted from work performed directly by astronauts, as opposed to research enabled by the Hubble Space Telescope or by unmanned lunar and planetary probes. This of course may change once the station is "completed" (such a project is never really finished for good, but the bulk of work will eventually shift from construction to use). But right now, it's too early to say if there will be any significant scientific payoff from the project at all.

From an engineering standpoint, building and operating the space station can tell us loads about the problems of building and operating a space station. We've had a smoke problem, a computer problem, and now a ground-up-gear problem, possibly, and those are only the ones that made headlines. As the first system of its kind, the International Space Station is bound to have all kinds of engineering issues that we can learn a lot from, assuming we try to do something like this again. As every engineer knows, the first time is mainly learning from mistakes. If your funding goes long enough to let you try a second time, you have a chance at getting it mostly right.

From a political view, the space station is an experiment in international cooperation on an intensely complex technical project, and by and large, this aspect of it seems to have gone well. When the U. S. manned space program went on hold for two years after the Columbia disaster, the Russians stepped up to the plate and kept the station in business with Soyuz launches. So far, the politicians have mostly kept out of the way of the committed engineers and managers in all the countries involved who want to see this thing go. Engineers have a way of forgetting about nationalities or political differences when they share a common technical goal, and the International Space Station is a good example of how that can work.

In the meanwhile, there's the question of where all those metal shavings are coming from. The ten-foot boxes that serve as pivots for the large solar panels could be replaced, I suppose, but that would be a major undertaking. On the other hand, if the bearings freeze up, that will severely limit the amount of electrical power available to the station. I hope this turns out to be something trivial; one engineer on the ground hoped that the shavings were just chewed-up foil insulation. My instincts tell me that such a hope is wishful thinking, but we'll just have to wait and see.

Sources: The New York Times article describing the metal-shaving problem is at http://www.nytimes.com/2007/10/29/science/space/29shuttle.html. Wikipedia has a good article on the International Space Station's history and construction.

Monday, October 22, 2007

One Laptop Per Child: Will It Fly?

Being poor and isolated is rotten. A recent book by Paul Collier entitled The Bottom Billion: Why the Poorest Countries Are Failing and What Can Be Done About It deals with the poorest one-sixth of the world's population of six billion. According to reviews, Collier identifies four main reasons these poorest of the poor are where they are: (1) internal and regional conflicts; (2) concentrations of natural resources such as gold and oil that distort economies; (3) living next to countries where similar problems are going on; and (4) governments corrupted by sweetheart deals with everybody from Western multinational companies all the way down to international crooks. Although I haven't read the book, the problem of a country's poor children not having laptops apparently did not make Collier's list of the top four issues. Nevertheless, an organization in Cambridge, Massachusetts is busily working on solving that problem.

The outfit called "One Laptop Per Child" aims to put specially-designed, inexpensive laptop computers into the hands of millions of children in the poorest countries in the next few years. The machine itself will be powered by solar cells, hand crank, or batteries, and uses special hardware and software to reduce its operating power consumption to less than a watt under some conditions, which is about a tenth or less of what an ordinary laptop uses. Recent reports indicate that the designers have not yet reached their target cost of $100 per unit, but present estimates are below $200 and the hope is the cost will fall as manufacturing climbs the learning curve.
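
As a back-of-the-envelope illustration of what that power difference buys, here is a short Python calculation. The battery capacity is an assumption chosen just to make the comparison concrete; it is not an XO-1 specification.

```python
# Rough comparison of run time on a small battery pack for a ~1 W machine
# versus a conventional laptop drawing ~10 W. The 20 watt-hour capacity is an
# assumed figure for illustration, not a published XO-1 number.

battery_watt_hours = 20.0

for name, draw_watts in (("XO-1 at its claimed ~1 W", 1.0),
                         ("ordinary laptop at ~10 W", 10.0)):
    hours = battery_watt_hours / draw_watts
    print(f"{name}: roughly {hours:.0f} hours per charge")
```

The same arithmetic explains why a hand crank or a small solar panel, which can deliver only a few watts at best, is plausible for the XO-1 and hopeless for a conventional laptop.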

The project's founder is Nicholas Negroponte, who has held various positions at MIT and related organizations for many years. Negroponte, who also founded MIT's Media Lab, is a member of what one might term the MIT computer brain trust, a group of individuals including Seymour Papert and Marvin Minsky who have shaped the direction of a great deal of computer and artificial intelligence research and publicity.

The hearts of Negroponte and company certainly appear to be in the right place. Children don't live by bread alone, and it is a noble goal to bring the benefits of computer technology to people who are impoverished in other ways as well. The plan is to sell the laptops only to governments, which would presumably distribute the units to their citizens either free or at a heavily subsidized low cost. Although the XO-1, as it's called, will not be available for consumer purchases in general, the Wikipedia article on it reports that this Christmas, you will be able to "get one and give one": you can buy one for yourself and at the same time donate one to a poor child somewhere.

There is a movement in engineering ethics to encourage the study of what are called "moral exemplars": people or organizations who do the right thing in engineering, furnishing good examples to the rest of us. I will say that the XO-1 project certainly has the potential to be a moral exemplar, but so far the jury is out. The organizers are still awaiting large-scale production and distribution, and until they have large numbers of units out in the field and do some studies to see how they are used, we will simply have to wait and see how the project turns out.

A few critics have pointed out that the venture is very "top-down," in the sense that a bunch of experts in Cambridge got together and designed a laptop that they thought would be good for third-world children to use. It has certainly gained Negroponte a lot of favorable media attention. For example, he introduced a kind of pre-prototype at a UN-sponsored meeting in Tunisia in 2005, sharing the platform with then-UN secretary general Kofi Annan. And judging by the specialized hardware and software, the MIT types have had a field day trying out some of their pet ideas in this thing, using it as a kind of test bed for a lot of what-if notions.

But whether the unit really meets a genuine need or truly improves the lives of children around the world remains to be seen. One concern is the fact that all the software on the unit is open-source. This is a nice gesture toward an ideal world that some people would like to live in, where all software would be open-source, but it ignores the reality that most software used by most computers today is proprietary. And if you can't run any proprietary software on these XO-1s (although users might install it after purchase, since the operating system is Linux), there is a real danger that the things may turn into just expensive toys.

Years ago, I experienced what happens when a new piece of computer hardware is launched without any software available for it. One of the leading lights in the Massachusetts computing world back then was the Digital Equipment Corporation, or DEC. I spent a good chunk of my first research dollars as a professor on a DEC computer highly recommended by a colleague who, I found later, used to work for DEC. It was a good machine hardware-wise, but as the months dragged on and nobody besides DEC developed any software for it, I found that I'd bought an expensive boat-anchor, and ended up having to buy a PC.

I hope such a fate does not await the XO-1, but surely the developers have thought of this problem in advance. Most of the world's effective software has been developed under the aegis of the free-enterprise system where people had to pay something for it. Maybe the children will surprise us and develop software on their own—the system is said to allow for this. I wish the XO-1 the best, but a community that benefits from computers is more than just the sum of software, hardware, training, and distribution. Time will tell, as it usually does.

Sources: The official One Laptop Per Child website (in English) is at http://laptop.org/en/. The Wikipedia article about it is at http://en.wikipedia.org/wiki/XO-1_(laptop). I learned about the project in an article by Kirk Ladendorf in the Oct. 22 issue of the Austin American-Statesman. Collier's book was reviewed in the November 2007 issue of First Things.

Monday, October 15, 2007

Copyright or Copywrong? The Ethics of Technological Multiplication

On Oct. 4, a federal jury in Minnesota ordered Jammie Thomas, a 30-year-old single mother, to pay $222,000 in damages for sharing twenty-four copyrighted songs online. Thomas was the target of a lawsuit brought by major music labels with the backing of the Recording Industry Association of America (RIAA). Although file-sharing websites have been sued successfully in the past, this is one of the first times an individual file-sharer has been taken all the way to a jury verdict.

Let’s leave aside, if we can, the picture this story gives us of six large, wealthy corporations, and a trade association representing many more, all ganging up on a woman who is not likely to be able to pay such damages any time soon. It can actually happen that a poor person does something wrong enough to be ordered to pay a lot of money for it, if not sent to jail. But is that what happened here?

Thomas’s case is just one tip of a huge iceberg floating around in electronic media today: the fact that making essentially flawless copies of a digital original requires fewer technical resources every week. Let’s try to clarify the issues a little bit.

Even back in the Stone Age, every tribe probably had some clowns and singers that other Cro-Magnons enjoyed listening to. These prehistoric entertainers created something of value: an economic good. Elementary justice demands that the entertainers who spend time and effort practicing and performing should receive some kind of reward for their effort. In those days, it might have been an extra joint of meat from the stewpot. Whatever the reward, the performer may have insisted on it before performing. The more people his performance attracted, the more stewpots he could sample from, but before the Internet, radio, printing, or writing, his ultimate market was pretty small.

Since the invention of writing itself (probably the oldest communications technology), the reproduction of economically desirable artifacts (stories, jokes, songs, etc.) has had a technological component. But even way back at the prehistoric origins of entertainment, there were two extremes that everyone involved had to navigate between. At one extreme, the performer has an absolute monopoly: he is the only performer in the world, everybody wants to see him perform or die, and so he can charge whatever he wants. He can demand the entire wealth of the whole tribe in exchange for one performance if he wishes. This is clearly unfair to the rest of the folks, who have themselves acted unwisely in becoming such slaves to amusement.

At the other extreme, the performer himself becomes a slave: he is threatened with death if he doesn’t perform, but he gets no rewards if he does. Anybody who wants to can walk up to him and demand a performance any time, with no charge to the members of the audience. This extreme is clearly unfair to the performer, who would be better off waking up dead some day.

You’ve been waiting for the technology to come in, right? This is an engineering ethics blog, after all. Well, here it is. All that technology can do is to multiply the performer’s performance in number, magnitude, impressiveness, duration, or other ways. But without the performer, that human being who originates the thing everybody wants to see, you have nothing. Printing, radio, television, motion pictures, phonographs, DVDs, the Internet, YouTube—all these things just give more people access to the performance, whatever it is. Now, it takes a certain amount of time and money to execute this multiplication—call it the marginal resource cost. What has happened over the last few decades is that the marginal resource cost for multiplying the performance has shrunk by many orders of magnitude. When you compare what the Bell System charged a major TV network in 1955 to operate its network transmission facilities (and factor in inflation)—probably the equivalent of many millions of dollars today—with what it costs some 14-year-old kid in Casper, Wyoming to make a video and put it on YouTube, you get some idea of how these marginal resource costs have collapsed. With some exceptions, the direction the technology has moved is to make more stuff available, for everybody, cheaper. But cheap multiplication cuts both ways: once a work exists, anyone can copy it for next to nothing, whether or not the person who created it ever gets paid. So if there were no copyright laws at all, few people would bother to do anything very good that requires a lot of resources (personal or financial), because they could never recoup their investment.
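
To make the collapse in marginal resource cost concrete, here is a toy Python calculation. Every figure in it is invented purely for illustration; the point is the arithmetic of dividing a distribution cost by an audience size, not the particular numbers.

```python
# Toy calculation of marginal resource cost per audience member. The costs and
# audience sizes below are hypothetical, chosen only to show how the
# cost-per-viewer of multiplying a performance has collapsed.

scenarios = {
    "1955 network TV transmission": (10_000_000.0, 20_000_000),  # (cost in $, audience)
    "2007 homemade YouTube video":  (5.0,          50_000),
}

per_viewer = {}
for name, (cost_dollars, audience) in scenarios.items():
    per_viewer[name] = cost_dollars / audience
    print(f"{name}: about ${per_viewer[name]:.5f} per viewer")

drop = per_viewer["1955 network TV transmission"] / per_viewer["2007 homemade YouTube video"]
print(f"Cost per viewer falls by a factor of roughly {drop:,.0f}")
```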

On the other hand, strong-arm tactics like the RIAA lawsuit against Jammie Thomas attempt to move things in the other direction, toward total, perpetual control of the performance by those who own it (not necessarily those who actually did it in the first place). Many people, including Stanford law professor Lawrence Lessig, think we have already gone too far in this direction, at least on paper. Copyright terms have been extended greatly in the last few years, to the point where many artists are worried that quoting or citing anything more recent than 1910 in print, music, or film will make them liable to a lawsuit. Part of this trend, no doubt, arises from a fear on the part of corporate copyright owners that if they don’t do something quick, everybody will digitize everything and just swap it around forever without anyone making a dime off any of it. These fears are no doubt exaggerated, and another part of the trend arises from a much simpler cause: greed.

Mixed up in all this are things like cultural traditions, expectations of private purchasers of entertainment media, technical standards and compatibilities, and many other factors which make copyright law such a happy hunting ground for lawyers. Certain acts of technological duplication in themselves should be made illegal. I don’t think anyone seriously disagrees with the principle that counterfeiting money should be against the law, even if you do it just to have some pretty pieces of paper to look at and you never intend to spend any of it. But attempts to make simple acts of technological multiplication illegal get into murky waters involving privacy, intentionality, and the tradition that what you do in your own home is your own business. The problem is as much political as it is technical, and politics, generally speaking, is not my beat. Still, there's enough engineering involved to make it worth thinking about in an engineering ethics blog.

This blog itself is an example of how nearly-free multiplication costs are used: I don’t pay to write it (except with my time and effort) and you don’t pay to read it. Still, I hope you get more than your money’s worth.

Sources: An article describing the Jammie Thomas case is at the Australian Broadcasting Corporation’s website at http://www.abc.net.au/news/stories/2007/10/05/2051724.htm?section=entertainment. Lawrence Lessig’s webpage is at www.lessig.com. And an interesting comparison between copyright law and the way magicians safeguard the secrets of their tricks appears in Tim Harford’s blog http://www.slate.com/id/2175616.

Monday, October 08, 2007

Losing By A Whisker: Lead-Free Solder and the Tin Whisker Problem

In 1998, the $250 million Galaxy IV geostationary communications satellite carrying millions of pager signals as well as the broadcast feeds of the CBS and NPR networks failed after only five years of service. Pager service wasn't restored for days and the company operating the satellite suffered considerable financial losses. Engineers determined that the problem was tiny tin whiskers that sprouted from soldered connections in the satellite's primary control processor. Because of a decision made by the European Union to prohibit the use of lead-based solder in electronics, we may see a lot more failures due to tin whiskers in the near future. How did the simple act of choosing electronic components become a complex moral issue? First, you need to understand something about tin whiskers.

When metals such as tin, zinc, and cadmium are under some kind of mechanical stress, one way they tend to relieve this stress is by sprouting tiny threads or sticks of metal called whiskers. They are very thin, much thinner than a human hair, and grow slowly over a period of months or years to a length of a few millimeters. But in the microminiature world of modern electronics, that is more than enough to bridge the gap between two terminals which, if shorted together, will cause an equipment failure. That is exactly what happened to the Galaxy IV satellite, in both its primary and backup processors.
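
To get a feel for the timescales involved, here is a rough Python calculation. The terminal gap and the growth rates are illustrative assumptions, not measurements from the Galaxy IV hardware or any other particular part.

```python
# Rough arithmetic: how long does a tin whisker take to bridge the gap between
# two terminals? Whisker growth rates vary enormously in practice; the numbers
# below are assumed values for illustration only.

gap_mm = 0.5                                # assumed spacing between terminals

for rate_mm_per_year in (0.1, 1.0, 5.0):    # assumed growth rates
    months = 12.0 * gap_mm / rate_mm_per_year
    print(f"At {rate_mm_per_year} mm/year, a {gap_mm} mm gap is bridged "
          f"in about {months:.0f} months")
```

Depending on the assumed rate, the bridging time runs from a month or two to several years, which is why whisker failures tend to show up long after a product has shipped.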

The whisker problem was first identified in the late 1940s, and since then engineers have found several ways to mitigate or eliminate it. Adding lead to tin plating or solder typically cures any whisker issues. Until very recently, the standard mixture of solder (the tin/lead alloy used to connect together most electronic components by melting it around terminals to be joined) was 60% tin and 40% lead. This alloy was reasonably inexpensive, had a low melting point, and served the electronics industry well for many decades.

In 2003, the European Union enacted a policy called Restriction of Hazardous Substances (RoHS, for short). This directive said that by July 1, 2006, most electronic products made or sold within the EU could not contain more than a very small amount of lead, cadmium, mercury, and a few other hazardous chemicals. Since the EU is a large market, and it is not practical for the thousands of electronics component manufacturers around the world to maintain two separate production lines, one for RoHS and another for non-RoHS products, this created a huge amount of turmoil in the industry as companies retooled their processes to eliminate lead from their solder, interconnection wires, plating processes, coatings, connectors, and everywhere else it was used. If you look in an electronic parts catalog these days you find "RoHS-compliant" labels on many if not most products, although non-RoHS stuff is still available, including the nasty old lead-bearing solder (which I have used, incidentally, since about the age of ten with no harmful effects). In fairness to the RoHS policy, the concern is not so much that people who use the electronics products are in any immediate danger of exposure, but that both at the manufacturing end and the recycling or disposal end, the lead can cause health problems. And that is an entirely legitimate concern.

But so is the problem of multi-million-dollar systems conking out because of tiny tin whiskers. The most common RoHS-compliant solder, for example, is roughly 96% tin alloyed with a few percent of silver (and often a little copper). Silver is not cheap, and so the new solder costs about 50% more than the lead-bearing kind. It works all right—I've used some—but there is no lead in it to prevent the tin-whisker problem. And apparently there are few if any long-term studies of this new solder formulation that tell us how likely it is that joints soldered with it will need a shave in a few years.

The RoHS directive does exempt certain high-reliability systems such as medical devices from the no-lead requirements. But as some industry spokesmen point out, this is an empty gesture, because pretty soon it will be very hard to find any non-RoHS parts, for the simple reason that the market for them will dry up. NASA, for example, has good reason to be very concerned about the tin-whisker problem, since their satellites, and above all the Space Shuttle, contain electronic systems that are old enough to vote. So far, no life-threatening failure has occurred in the Shuttle due to tin whiskers, but the Shuttle has to keep going another two or three years at least before its commercial replacement may be available.

So what's an engineer to do? Well, the law is the law, and if your company makes or sells anything in the EU, it better comply with RoHS. As for systems that demand high reliability, there are ways around the whisker problem even if you have to use lead-free solder: wax or other impermeable coatings, proper spacing and insulating layers of other kinds, and so on. But many of these techniques are either largely untried or have problems of their own. That is what engineering is all about: solving problems. And the world will be a better place when new electronic products don't carry the burden of toxic heavy metals that they did in the past. But engineers now have to consider a new technical problem introduced by the well-meant, but perhaps technologically immature, RoHS directive. And we'll all be dealing with the consequences, perhaps in unexpected ways.

Sources: The Oct. 8, 2007 Austin American-Statesman carried an AP article by Jordan Robertson on how the high-tech industry is dealing with the challenges of tin whiskers and RoHS. Wikipedia's article "Whiskers (metallurgy)" gives a good description of the phenomenon and problems it can cause. The NASA Tin Whisker Homepage http://nepp.nasa.gov/whisker/ contains several pictures of actual whiskers and articles and presentations about the problem.

Monday, October 01, 2007

Battle of the Airways: How to Fix the FAA

Ladies and gentlemen! Your attention please! The Battle of the Airways is about to begin!

In this corner, we have The System. Hailed as a marvel of modern engineering when he debuted in the 1960s, The System has seen better days. Last week (Sept. 25, to be exact), he suffered a defeat at the hands of a failure in a telephone switch in Memphis, Tennessee. The scene was fantastic: air traffic controllers desperately punching numbers into their personal cellphones to call their cohorts in adjacent airspace control centers, because their radios went out and a good number of radar screens went blank, too. All flights were grounded within a 250-mile radius of Memphis, and it took the rest of the day for air traffic on the Eastern Seaboard to get back to what we call normal these days.

In this corner, we have ATA, the Air Transport Association. This airline trade association is ready to come out swinging, because its members pay nearly all the taxes and fees that go to support The System. But a one-engine plane flying from Ashtabula, Ohio to a landing strip in an Iowa corn field takes as many resources from The System as a 747 carrying over a hundred passengers (or more), while paying hardly anything compared to the commercial flight.

In this corner, we have NATCA, the National Air Traffic Controllers Association. They're ready to punch somebody out before it's too late, because they've slimmed down way below weight—they've lost 10% of their numbers since 9/11/01, but air traffic's increased since then. NATCA, like The System its members operate, is getting older, smaller, and more poorly paid every day, if you believe what it tells you. And why would a fighter lie about a thing like that?

And last but not least, in this corner, we have John Q. Flying Public. Bigger than ever (individually and collectively), he's not happy about sitting in planes for hours on end and having flights canceled. Something's not right, he's pretty sure of that, but he doesn't even know who to go beat up on to fix the problem.

Waiting in the wings are the referees and the bookmakers: POTUS and Congress making the rules, and politicians and lobbyists betting on the outcome (metaphorically, we hope). The once-a-decade renewal of the FAA funding law that expired on Sept. 30, 2007 is a great opportunity for all the fighters to show their stuff. The only question is, who'll be the last man standing?

. . . Fighting is not a generally recognized way to solve complex technical disputes, but it looks like that may be how the FAA gets fixed—or doesn't, as the case may be. It may not have been a coincidence that in one week, we had a serious communications breakdown in the Memphis regional air traffic center, a Presidential statement about how the airlines had better get their act together or else, and the expiration of the current funding system for the Federal Aviation Administration, or FAA.

The technical problems are pretty clear. The present system was designed when the only way to track air traffic efficiently was with centralized radar systems that treated a 707 or a flock of birds the same way: a passive microwave-reflecting object. Identification, location, and tracking were all done either by hand or eventually by computer, but the ultimate channel through which information passed was the human air traffic controller.

That system worked great through the 70s and 80s, but as traffic has increased and newer technologies such as satellite-enabled global positioning systems (GPS) have become available, the old way of doing things has become increasingly cumbersome, unreliable, and even dangerous. Near-misses in the air are not an uncommon occurrence, and it was only by quick action on the part of already over-stressed air traffic controllers that the Memphis breakdown didn't result in a major tragedy.

Okay, we need to replace the system with a satellite-GPS-based automated one. Who's going to pay? Presently, most of the money that pays for the FAA's technology and staff (in good years, anyway) comes from ticket taxes, fees, and other sources which have little directly to do with the workload that each user represents. The Air Transport Association points out that the FAA is basically a utility, and that, like a water or electric company, it should charge by the amount of service provided. But this is not what happens. As a result, the disconnect between funding sources and funding needs has given rise to a situation that often develops in government-provided services: lack of infrastructure investment and long-term planning.
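
Here is a toy model of that funding disconnect in Python. Every number is invented for illustration; the point is that a per-ticket tax keys payment to fares and passenger counts, while a per-use charge keys it to the air-traffic services actually consumed.

```python
# Toy model of the FAA funding disconnect: under a ticket tax, what a flight
# pays has little to do with how much air-traffic service it uses. All figures
# below are hypothetical.

ticket_tax_rate = 0.075       # assumed tax rate on each airline ticket
per_use_charge = 400.0        # assumed flat fee per ATC handling

flights = {
    "single-engine private flight": {"passengers": 1,   "average_fare": 0.0},
    "747 commercial flight":        {"passengers": 300, "average_fare": 400.0},
}

for name, f in flights.items():
    ticket_tax = ticket_tax_rate * f["passengers"] * f["average_fare"]
    print(f"{name}: ${ticket_tax:,.0f} under the ticket tax versus "
          f"${per_use_charge:,.0f} under a per-use charge")
```

Both flights demand roughly the same controller attention, but under the ticket tax only the airliner's passengers pay for it, which is exactly the ATA's complaint.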

How to fix it? Well, there's the good, sensible way—and the other way. The good, sensible way is for all parties involved—folks from all five or seven corners of our boxing ring, however many there are—to sit down, look at the system's needs for the next twenty years or so, figure out a big road map of how to get from here to there, and then find the money and resources to do it. This kind of thing happens all the time in private industry—the semiconductor industry, for example, has hewed closely to a roadmap that basically ensures that Moore's "Law" keeps running year after year, and integrated circuits keep getting more and more complex. Airplanes aren't computer chips, but I'm talking about a planning process, not a technology.

That's the good way. The other way is to wait for a super-Memphis: something like the entire system freezing up and planes falling out of the sky, or flight delays all over the country that take a solid week to straighten out, or something as damaging to the airline industry as 9/11. It is my fond wish that something like this does not happen, and that the parties involved will get together and fix the problem the good way. But in a democracy, sometimes it takes a crisis to knock everybody's heads together enough to overcome differences and get things done.

Sources: A report on the Memphis breakdown can be found at the CNN website http://www.cnn.com/2007/US/09/25/memphis.air.snafu/index.html. A report of President Bush's comments on Sept. 27 about the airline industry is at http://money.cnn.com/2007/09/27/news/economy/bush_airlines.ap/index.htm. The Air Transport Association explains its view of FAA funding at http://www.smartskies.org/LearningCenter/faa_funding/default.htm, and the National Air Traffic Controllers Association explains some of its troubles at http://www.natca.net/mediacenter/press-release-detail.aspx?id=455.

Monday, September 24, 2007

Friends, "Friends," and Facebook

Last week, a lady named Sal who uses the social-networking website called Facebook showed a group of older professors (including yours truly) how the system works and what her own site looks like, and answered questions about it. Someone asked her how interactions with students through Facebook compare to dealing with them live and in person. She said some students will tell her things on her "wall" or in private messages on Facebook that they would never mention in person. She finds that these students tend to be more awkward socially than most, but can open up and be quite interesting online.

This experience comes on the heels of an article by Christine Rosen, a senior editor at The New Atlantis, which is a quarterly devoted to issues of technology, ethics, and society. Rosen writes that friendship, a kind of personal interaction which has not fared that well in the modern era in the first place, may be suffering further decline as people trade the risks and uncertainties of face-to-face relationships for the reliability and controllability of online connections. If you tire of a person who's sitting in your room, we have not yet gotten to the point where you can acceptably say, "Go away, I'd rather not see you right now." But if you're reading your latest wall entries or your latest statistics on how many "friends" you have on Facebook, you can quit and do something else at any time and nobody else is the wiser—or gets their feelings hurt, either.

Facebook, of course, is a for-profit enterprise, and they are doing pretty much everything they can to increase the number of users beyond the current 34 million or so worldwide reported on Wikipedia. So it's understandable that the system is biased to encourage quantity of connections rather than quality. We've all known people who seem to collect relationships as others collect stamps or matchbook covers. To such people, you count mainly as a number, not as a unique individual.

To a computer, everybody counts only as a number, and that is only one way that computer-mediated interactions tempt us to objectify other people. If I know Joe Schmo mainly as a particular bizarre emoticon with a peculiar expression, the next time I think of Joe Schmo, the first thing that is likely to come to mind is that weird emoticon, not a living, breathing human being with his own history, likes, dislikes, hopes, and fears. But it was Joe who chose that emoticon, and for all I know, he likes for me to associate it with him, just as certain dramatic personalities in the past went around wearing capes and waxed moustaches for effect. In a larger and larger marketplace of potential friends, people will adopt more and more attention-grabbing disguises in order to get any traffic at all.

So in one sense, there is nothing new going on here. The reality of social networks—the thing you can diagram by writing names on a big sheet of paper and drawing lines between any two people who know each other—has been around since before history began. For people who get charged up by social interaction, joining Facebook may be like putting wings on a wildcat. For those of us (myself included) whose main sensation after meeting a boatload of new people is usually just a headache, Facebook's attractions may be harder to grasp. But for everybody who uses it, whether they're out simply to increase their number of friends or whether they are seeking the deepest and most profound relationship possible, the fact that their interactions on it are mediated by technology set up a certain way will slant the nature of all those relationships in a way that favors quantity over quality.
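
That "names on a sheet of paper with lines between them" picture is just a graph, and the friend count Facebook makes so visible is nothing more than the number of lines touching a name. A minimal Python sketch (with made-up names) of how such a network might be represented:

```python
# Minimal representation of a social network: people as nodes, with an
# undirected edge between any two who know each other. The names are made up.

from collections import defaultdict

friendships = [("Alice", "Bob"), ("Bob", "Carol"), ("Alice", "Dave")]

graph = defaultdict(set)
for a, b in friendships:
    graph[a].add(b)
    graph[b].add(a)

# A "friend count" is just the number of edges touching a node.
for person, friends in sorted(graph.items()):
    print(f"{person}: {len(friends)} friend(s): {sorted(friends)}")
```

The count says nothing about the quality of any of those connections, which is the slant toward quantity described above.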

There will be some people who try to abuse the system: stalkers, con artists, and so on, though according to Sal, Facebook is notably free of most such problems so far. And there will be more people who simply overuse it, like the students who neglect their homework and crash university servers when they buzz around on Facebook for hours on end. But like the Internet itself, Facebook does put more people in touch with each other, in some fashion, than would otherwise be the case, or at least it looks that way so far.

All the same, I wonder whether someone like C. S. Lewis would have found much of a use for Facebook. As a student at Oxford he was fond of meeting a few intimate friends, nearly always male, with whom he would go on long walks in the hills and forests, discussing anything and everything, from what kinds of clothes they were made to wear when they were boys to the meaning of life. He also wrote letters, but it is clear from the journal he kept as a young man that the heart and soul of his friendships (many of which he maintained through most of his life) was conversation: sitting in a room together and talking. In a time when telephoning was mainly local and telegrams were used only when needed, he clearly regarded letters, phone calls, and other means of communicating with those not present as secondary substitutes for the real thing. I can't help but think that there is some deep preset bias in the human being that favors in-person conversation over all other forms. These other forms can be learned, used to mutual benefit, and abused as well. But if a person begins to prefer them over being in the same room with someone else, I also can't help but think that something is awry.

Sources: Rosen's article "Virtual Friendship and the New Narcissism" appears in the Summer 2007 issue of The New Atlantis, p. 15. C. S. Lewis's journal of the 1920s was edited by Walter Hooper and published as All My Road Before Me (HarperCollins, 1991).

Monday, September 17, 2007

Toying with Safety

Anybody who knows anything about the toxicity of lead paint has more sense than to put it on a kid's toy. But somehow, millions of toys painted in China carried detectable amounts of lead across the oceans and possibly into the mouths of children all over the U. S., and in other parts of the world too. Even small amounts of lead can affect a child's neurological development, and so the hue and cry over this problem is justified, by and large. I'd like to look at two questions regarding this issue: (1) how did it happen, and (2) how serious is it, really?

A complete story of the whole sequence of events is probably not available now and may not be until months or years of investigation are completed. But based on available evidence—namely, tests that show lead in paint and a knowledge of where the toys came from—I can imagine the following scenario. Government regulation in the People's Republic of China is a sometime thing. About the only activity you can count on being universally suppressed everywhere in the country is political protest. But when it comes to industrial development, economic shortcuts, and evasion of taxes and other government regulations, there seems to be a patchiness of enforcement that depends on where you are and who you know. Just to give you an idea of how strange things are over there compared to the U. S. business environment, one of the largest owners of factories and other industrial facilities is the army. A Chinese friend of mine who now lives in Hong Kong described the situation to me a few years ago as "the wild wild West."

Given such a free-wheeling environment, it isn't surprising that an ambitious toy-factory owner looking to save a few yuan on his supply costs would buy paint from a source who would either lie about its chemical makeup, or simply not know. If it looked good and stayed on the toys, the paint was fine as far as he was concerned.

Although Mattel Inc. has come across looking like the bad guy in many news reports, to its credit it appears to have taken most of the right actions once it became aware of the problem. That does leave the question of how thorough its product safety testing was, if millions of toys slipped through it before the first lead was found. Evidently the company was not testing as extensively as it is now, but now CEO Bob Eckert realizes his company is fighting for survival. In a video on the company website, he apologizes abjectly and shows laboratory scenes of people in white coats taking samples from toy trucks to test for lead content. Clearly, for a while someone was using lead paint on toys made in China and imported by Mattel, and nobody who could do anything about it knew. This was not so much an engineering problem as a management and information problem, but engineering is also about management and information. All the technical smarts in the world won't produce safe products if an organization can't use those smarts to protect consumers, and itself, from harm. Mattel's current vigilance, along with the possibility of tightened Federal regulations, will probably clear up this problem eventually, or at least make it much less likely to recur.

That being said, how serious was it? While no child should be exposed to lead in his or her environment, the paint problem itself has not caused any known fatalities. This was not the case in a parallel episode that took place in Europe in the 1800s. Around 1820, the technology of printing and paper manufacture advanced enough to make wallpaper a popular new interior decorating option. One of the most-used dyes in the new industry was something called Paris green, based on a copper-arsenic compound. Bedbugs were a big problem back then, and people who bought green wallpaper noticed a side benefit: bedrooms papered with it never seemed to have bedbug trouble. Now and then, especially in damp weather, the wallpaper gave off a slight garlicky odor, but standards of sanitation back then weren't what they are now, and that might have been a selling point too compared to other things you could smell in a house around that time.

Then rumors began to surface that people who lived in bedrooms with green wallpaper often got a mysterious illness and eventually died. Statistical epidemiology was in its infancy back then, but something looked fishy enough to the Prussian government that by 1838, it prohibited the use of poisonous substances in wallpaper. But most other countries shrugged off the issue and the mystery continued until 1897. In that year a chemist named Gosio showed that the starch in wallpaper paste encouraged the growth of a mold in damp weather that turned the arsenic compound in green wallpaper into a gas we now know as trimethylarsine. It smells like garlic and will kill you if you breathe enough of it. That was enough to put an end to the use of Paris green in wallpaper for good, although it continued to be sold as an insecticide for years until newer organic compounds replaced it.

The moral from that little story is that ignorance of the technical principles behind a safety problem can slow down its solution for decades. We've known about the hazards of lead paint for many years, so ignorance was no excuse in this case. All the same, if you compare Mattel's problems with the green-wallpaper story, I'd say it's like comparing a fender-bender to a five-car freeway pileup that resulted in a fire and eight fatalities. No, you shouldn't even have fender-benders, but there are worse things that can happen than fender-benders.

Sources: The Mattel recall has been reported extensively at sites such as MSNBC.com, where an AP story appeared on Aug. 14 at http://www.msnbc.msn.com/id/20254745/. Mattel CEO Bob Eckert's apology can be viewed at http://www.mattel.com/safety/us/. I am indebted to a geochemistry instructor named Moore (possibly Johnnie Moore) at the University of Montana, whose course notes at http://www.umt.edu/geosciences/faculty/moore/G431/lectur17.htm contain the green-wallpaper story.