Monday, January 29, 2018

Facebook's Frankenstein Effect


The Frankenstein story, so vividly penned by Mary Shelley in 1818, came at the dawn of the Industrial Revolution, which brought the fruits of scientific knowledge to the masses.  Victor Frankenstein's sub-creation, the monster, turns against him, and the scientist and inventor rues the day he brought it to life.

At a November 2017 conference in New York City sponsored by the Clinton Foundation, two inventors who were there at the creation of Facebook expressed similar regrets for what they had created.  In doing so, they became only the latest in a long line of technical types who have expressed varying degrees of regret and guilt for the new media they helped create, from radio and television to Facebook.

Sean Parker served as the first president of the social-media giant Facebook, and when someone at the conference asked about the effects of Facebook on society, he recalled the thinking that went into the system's design.  His reply deserves quotation at length:

"You know, if the thought process that went into building these applications, Facebook being the first of them to really understand it, that process was all about, 'How do we consume as much of your time and conscious attention as possible?  That means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or post or whatever, and that's going to get you to contribute more content, and that's going to get you more likes and comments, you know, it's a social-validation feedback loop . . . It's exactly the kind of thing that a hacker like myself would come up with because you're exploiting a vulnerability in human psychology." 

Another speaker at the conference, a former Facebook developer, when asked if he had done some soul-searching concerning his role in the creation of Facebook, said, "I feel tremendous guilt. . . . I think we have created tools that are ripping apart the social fabric of how society works." 

Strong words.  In deploring what happened to their technically sweet ideas, these inventors and entrepreneurs remind me of the words of Lee De Forest, who invented the triode vacuum tube that made radio broadcasting possible.  In his later years, he became disgusted at what radio had become, and in 1940 wrote an open letter to the National Association of Broadcasters in which he protested, "What have you done with my child, the radio broadcast?  You have debased this child . . . "  Vladimir Zworykin, who developed the first practical electronic television system for RCA in the 1930s, had nothing good to say about what it had become by the 1970s, and rarely watched TV himself.  And Harold Alden Wheeler, a prolific radio and TV engineer and inventor, was well known for forbidding his family to watch TV at all.

What is it about engineers and software developers that makes them so sensitive to the negative impacts of their successful inventions?  After all, Facebook does a lot of good too, in connecting families and friends separated by geography and letting people keep in touch who otherwise might not.  In fact, some who deplore the parlous state of our public discourse in the era of Facebook flaming and Presidential tweets look back with fondness to those good old days when electronic news happened only once a day at 6 PM on only three TV channels, and everybody heard more or less the same thing, carefully filtered through professional media editors.  But that was the very same television programming that Zworykin and Wheeler deplored.

People who imagine things before they are created have to believe in them strongly, and believe that their creations will do some good—at least for themselves, and perhaps for other people as well.  Only Sean Parker knows exactly what was going on in his mind when he cooked up first Napster and then contributed to the beginnings of Facebook.  But by his own testimony, he was basically hacking the human brain—taking advantage of the little squirt of dopamine most people get when they see that someone out there has acknowledged their existence positively, by sending an email, a text, or a "like" on Facebook.  Multiply those squirts by the millions every day, and there is the psychological engine that drives Facebook and most other social media.

By some standards, Sean Parker has nothing to complain about.  He doesn't feel so guilty about Facebook that he has divested himself of the several billion dollars it has earned him.  But it is rare to find people who have devoted years of their lives to becoming technically proficient in a narrow field and who can also take a wise, broad view of all the potential effects of their technical developments, both positive and negative, before those developments take place.  So when an idea of theirs takes wing and flies away as Facebook did, and in the natural course of events gets some people into trouble, they are disappointed, because they imagined only the good things that would happen as a result, not the bad things.

Any technology that is used by a large enough number of people is going to be used badly at some point, because the only Christian doctrine that is empirically verifiable is going to come into play:  the doctrine of original sin.  The culpability of the technology's developers depends on what they were trying to do to begin with.  Wanting to connect people, and even getting rich, are not necessarily bad motives.  But once the technical cat is out of the bag, inventors can at least try to do what they can to mitigate the harmful effects of their technologies.  After Alfred Nobel learned that what he would mostly be remembered for was the death and destruction wrought by his invention of dynamite, he hastily set up the Nobel Prizes partly as a kind of penance or compensation to humanity for the evil that his invention had done. 

In 2015, Parker set up the Parker Foundation, a charitable organization whose focus includes civic engagement.  Perhaps by this means, Parker and others like him can try to repair some of the social damage they see Facebook and other social media doing.  The Nobel Prizes did not put an end to war, and I don't expect the Parker Foundation is going to lead on its own to a new era of sweetness and light in public discourse.  But at least he's trying.

Sources:  Recordings of the interviews from which the two quotations from Parker and his associate were taken are available on the website https://relevantradio.com/2018/01/drew-mariani-show-jan-24-hour-1/, approximately at minutes 16 through 20.  A web report citing the same interview with Parker can be found at https://www.cnbc.com/2017/11/09/facebooks-sean-parker-on-social-media.html.  The information about Zworykin is from http://ethw.org/Vladimir_Zworykin.  Lee De Forest's words on radio broadcasting can be found in the Wikipedia article on him.  I also referred to the Wikipedia articles on Sean Parker and Harold A. Wheeler.

Monday, January 22, 2018

Can Artificial Intelligence Make Art?


In February's Scientific American, technology columnist David Pogue wonders if human artists and composers should start worrying about a new development in artificial intelligence (AI):  the automated composition of music and production of paintings.  Computer scientist Ahmed Elgammal's Art and Artificial Intelligence Lab at Rutgers University is developing algorithms that start with well-known works of real art and abstract elements of style and composition from them.  The machine can then either be set to do imitations in the same style, or a "style ambiguity signal" can be turned on that forces the digital Rembrandt to deviate from the style it has learned.

I have viewed some of these products on the lab's website, and while I make no claims to be an art critic, my impression is that Rembrandt doesn't have much to worry about.  In fairness to Elgammal, he doesn't claim that what his system is doing is just as good as human-created art.  Rather, he sees himself as exploring theories of art creation using AI to see what happens if style rules are either slavishly followed or intentionally broken.

When he mixed the products of his AI "artist" with works by actual humans in a couple of different sets—abstract expressionism and contemporary art from a recent European art show—he found that viewers who did not know which works were by a human and which by a computer often picked the computer-generated ones as more "intentional, visually structured, communicative, and inspiring" than those made by unaided humans.  He was surprised by this outcome, but he shouldn't have been.

Most people will agree that much visual art that is bought and sold for millions of dollars today doesn't look much like the artworks that were painted, say, a hundred and fifty years ago.  Elgammal has happened to come along with his AI artist at a time when the so-called standards for what constitutes art are all but nonexistent.  Last year The New Yorker carried a story about a young man named Jean-Michel Basquiat who mainly wanted to be famous.  He tried music as a path to fame first, but was discouraged by the fact that it takes years of practice to become even an adequate musician.  So he switched to art.  Starting with graffiti, he attracted the attention of the art world, rocketed to international fame, and died of a heroin overdose at the age of 27.  The magazine's art critic Peter Schjeldahl thinks that his art is worth looking at, but probably not worth paying $110 million for, as a Japanese businessman did last May for a Basquiat work from 1982.  Schjeldahl himself described it as a "medium-sized, slapdash-looking painting of a grimacing skull."  Judging by the photograph in the article, that's a pretty accurate description.

My point is that what passes for art these days is a departure from what has passed for art in the past, well, several thousand years.  Up until the nineteenth century, artistic works represented both recognizable objects and the higher operations of the human mind and spirit, operations that distinguish human beings from the lower animals.  G. K. Chesterton regards the production of art as one of the primary distinctions between people and other animals, and points to prehistoric cave paintings, such as those later discovered at Lascaux, France, as evidence that those who painted them were humans like us.

One chronic concern as AI advances into more areas of endeavor formerly regarded as exclusively human is this:  when AI starts to do a certain kind of thing better and cheaper than people, what will happen to the people who earn their living doing it now?  So far, humanity has survived the replacement of telephone operators by automatic dialing, elevator operators by pushbutton elevators (everywhere except at the United Nations building, I'm told!), and more recently, the advance of AI into the professions of engineering, medicine, and even law.  Right now, the unemployment rate in the U. S. is at a historic low, but that is due mainly to an economy that is close to overheating, and doesn't take into account the millions of people who neither look for work nor are particularly troubled that they're not working.  And here is where we find the real matter to be concerned about.

The issue isn't whether AI will send some artists to the unemployment line.  The real issue is how we regard art and how we regard humanity.

When Chesterton wrote in 1925 that "Art is the signature of man," he didn't mean just any random scrawl.  He had a particular thing in mind, namely, that the portrayal of nature as interpreted by the human spirit is unique to man.  Certainly no other animal produces anything that is generally regarded as a work of art.  I am aware of the bowerbirds of Australia and New Guinea, which construct large, elaborate arches of sticks, decorate them with blue objects, and sometimes even paint the walls.  But this is simply instinctive behavior directed at attracting a mate.  No one has seen bowerbirds exchanging worms for a particularly fine bower and signing bills of sale.

If people today can't seem to tell the difference between computer-generated art and human-generated art, the reason isn't that the computer is now as artistic as a human artist.  The problem is that artists have degraded their craft to the level of a machine-made product, and taught the general public that yes, this is indeed art, even if the artist just tied brushes to two turtles and let them crawl across the canvas.  When Marcel Duchamp tried to exhibit an ordinary urinal as art in a 1917 New York art show, the show's committee rejected it, but photographer Alfred Stieglitz allowed him to put it up in his studio.  In 2004, 500 "renowned artists and historians" reportedly selected this work, called simply "Fountain," as the most influential artwork of the twentieth century.  And it was made by a machine.

Sources:  David Pogue's column "The Robotic Artist Problem" appeared on p. 23 of the February 2018 issue of Scientific American.  Some creations of Prof. Elgammal's AI artist can be viewed at https://sites.google.com/site/digihumanlab/home.  (The ones with "style ambiguity" turned off, at https://medium.com/@ahmed_elgammal/generating-art-by-learning-about-styles-and-deviating-from-style-norms-8037a13ae027, are truly creepy.)  A brief introduction to Jean-Michel Basquiat can be found on the New Yorker website at https://www.newyorker.com/culture/cultural-comment/lets-pause-to-appreciate-basquiats-hundred-million-dollar-painting.  Chesterton's comments about cave paintings are from pp. 30-34 of The Everlasting Man, reprinted in 2008 by Ignatius Press (originally published 1925).  And I also referred to Wikipedia articles on Marcel Duchamp and Fountain.


Monday, January 15, 2018

Russian Interference in Elections: Fancy Bear is Not Exactly What We Had in Mind


Excuse the long title, but whenever humorist Roy Blount Jr. would run across something totally contrary to his expectations, he would say mildly, "Well, that's not exactly what I had in mind."  By a convoluted series of circumstances, we in the U. S. have become vulnerable to election interference by a foreign power in a way that few people anticipated.  This is a lesson in how novel technologies and aggressions can outwit both legislators and organizations dedicated to preventing such aggressions.  And novel countermeasures—some of them possibly costly in both money and convenience—may be needed to deal with them.

Historically, it has been difficult for non-U. S. citizens or foreign countries to interfere with U. S. elections.  While the fear of such interference has always been present to a greater or lesser degree, my amateur historical memory does not bring to mind any significant cases in which a foreign power was clearly shown to have acted covertly in a way that provably influenced the outcome of a national election.  Laws prohibiting foreign campaign contributions acknowledge that the danger is real, but if such interference happened in the past, it was so well concealed that it never got into the historical record. 

Ever since there were governments, there have been privileged communications among those in power which, if disclosed in public, might prove to be embarrassing or even illegal.  But until recently, these communications took place either by word of mouth, by letter and memo, or by phone.  And considerable espionage work had to be done to intercept such communications.  You had to have a spy or a listening device in place to overhear critical private discussions.  You had to steal or secretly photograph written documents, and you had to tap phone lines.  All of these activities were by necessity local in nature, meaning that a foreign power bent on obtaining embarrassing information that could sway an election had to mount a full-scale espionage program, with boots on U. S. soil, and take serious risks of being caught while engaged in a fishing expedition that might or might not reveal any good dirt, so to speak.

Then came the Internet and email.

While much email physically travels only a few miles or less, it passes through a network in which physical distance has for all intents and purposes been abolished.  So if I email my wife in the next room, somebody in Australia who simply wants to know what I'm emailing can try to hack into my emails and, if successful, can find out that I'm asking her to get crunchy raisin bran at the store today.  Nobody in their right mind would bother to do such a thing, but the Internet and email have made it hugely easier to carry out international spying on privileged communications of all kinds.  The kinds of spying that used to be done only in wartime by major powers can now be done by a few smart kids in some obscure but hospitable country.  And here is where Fancy Bear comes in.

A private security firm in Japan has discovered signs that the same group probably responsible for hacking the Democratic Party's emails during the 2016 elections is trying to mess with the Congressional elections coming up later this year.  An elaborate mock-up of the internal Senate email system has been traced to this so-called Fancy Bear group, which evidently has ties to Russia.  Such a mock-up would be useful for entrapping careless Senate staffers who might mistakenly respond to an email that looks legitimate but is in fact a kind of Trojan horse, giving the Russians (or their minions) access to all further emails sent through what appears to be a legitimate site but is actually a trap.

I am not a cybersecurity expert and won't speculate further on how the Fancy Bear people do their dirty work.  But the fact that they are still out there working to steal emails and release them at times calculated to throw U. S. elections one way or the other brings to mind two things that we need to consider.

1.  Messing with electronic voting is not the main cyber-threat to our election system.  Much concern has been expressed that electronic voting systems are not as secure as they should be.  While this is probably true, it doesn't appear to be a significant problem that has actually resulted in thrown elections, except perhaps in small elections at the local level, and usually by accident rather than by design.

2.  We may have to trade some Internet freedom for security in guarding U. S. elections against foreign interference.  The moral innocents who designed the Internet back in the 1970s made the mistake of assuming that everybody who would use it was just like themselves, or rather, their polished-up image of themselves:  sincere, forthright, open, and filled with only good motives.  One wishes that the concept of original sin had been included in every computer-science curriculum since the discipline began in the 1960s, but that isn't the case.  The radically borderless and space-abolishing nature of the Internet brings foreign threats and interference to everyone's doorstep.  With the click of a button in Uzbekistan, Maude in Indianapolis can read the latest fabricated scandal on Facebook about the guy she was thinking of voting for, or hear on the news that his private emails to his mistress have been posted on Wikileaks. 

Not that I condone elected officials who have mistresses.  But these are examples of the kinds of things that can go on once everybody routinely uses a medium which, under present circumstances, is about as private as yelling your credit card number to somebody on the other side of Grand Central Station.

To make email as secure as the U. S. Postal Service, we obviously require more rigorous and better-organized security protocols than we have had up to now.  My own university has recently adopted a two-step verification system that is inconvenient, but greatly heightens the security of certain privileged communications such as entering grades.  It may be time for everyone concerned in elections—political parties, governments, and private citizens—to agree to some kind of inconvenient but more secure email approaches, applied uniformly with government regulation if necessary, so that we can get back to where we were in terms of preventing outsiders from interfering with our most characteristic action as a democracy—electing those in power.
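
For the curious, here is a rough sketch in C of how the one-time codes behind many such two-step verification systems are typically computed (the time-based one-time password, or TOTP, scheme described in RFC 6238).  I have no idea whether my university's system works exactly this way, so treat it purely as an illustration:  the function name totp and the demo secret are my own inventions, and the sketch leans on the OpenSSL library (compile with -lcrypto).

/* Sketch of RFC 6238 time-based one-time passwords (TOTP).
 * Illustration only; real systems manage and protect the shared secret. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <openssl/hmac.h>   /* HMAC(), EVP_sha1(); link with -lcrypto */

/* Return the six-digit code for the given shared secret and clock time. */
static unsigned totp(const unsigned char *secret, size_t secret_len, time_t now) {
    uint64_t steps = (uint64_t)now / 30;        /* 30-second time step */
    unsigned char msg[8];
    for (int i = 7; i >= 0; i--) {              /* encode counter big-endian */
        msg[i] = steps & 0xff;
        steps >>= 8;
    }
    unsigned char digest[20];
    unsigned int digest_len = 0;
    HMAC(EVP_sha1(), secret, (int)secret_len, msg, sizeof(msg), digest, &digest_len);
    int offset = digest[19] & 0x0f;             /* "dynamic truncation" */
    uint32_t bin = ((digest[offset] & 0x7f) << 24) |
                   (digest[offset + 1] << 16) |
                   (digest[offset + 2] << 8)  |
                    digest[offset + 3];
    return bin % 1000000;                       /* keep six decimal digits */
}

int main(void) {
    const unsigned char secret[] = "12345678901234567890";  /* demo secret only */
    printf("current code: %06u\n", totp(secret, sizeof(secret) - 1, time(NULL)));
    return 0;
}

The point is that the server and your phone share a secret and the current time, so a code that an eavesdropper intercepts is useless half a minute later.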

Sources:  The AP report by Raphael Satter "Cybersecurity firm:  Senate in Russian hacker crosshairs" was published on Jan. 12 and carried by numerous papers, including the Washington Post at https://www.washingtonpost.com/business/technology/cybersecurity-firm-us-senate-in-russian-hackers-crosshairs/2018/01/12/150ca956-f799-11e7-9af7-a50bc3300042_story.html. 

Monday, January 08, 2018

Meltdown and Spectre: Sometimes the Good Guys Win


Most computer viruses and bugs go for particular operating systems, Windows being the most popular, because it's on the majority of PCs.  So Mac users, although occasionally suffering their own kinds of attacks, usually breathe a sigh of relief every time a major PC-only virus hits the news. 

But over the weekend, you may have heard about a pair of bugs called Meltdown and Spectre that go for hardware, not software.  In particular, Meltdown is a vulnerability associated with Intel processors made since 1995, and the dominance of Intel means Macs, PCs, and most you-name-it computers are potential targets.  Spectre reportedly is even worse.  But the key word here is "potential."  In an announcement, Apple claimed that no known malicious hacks have actually been committed using either of these bugs.  And by the time the general public learned about them, the major computer and software makers were already well on their way to devising fixes, although the fixes may have their own drawbacks.

The reason the bad guys apparently haven't used these bugs is that they were discovered independently by computer researchers in Austria and the United States.  And following a policy called "responsible disclosure," the researchers notified Intel that their chips were vulnerable to these bugs.  So until now, apparently, the criminal elements of the computer world either didn't know of the bugs or didn't use them.

I am not a computer scientist, but the technical details of how Meltdown happens are interesting enough to try to summarize.  Apparently, some years back chip designers started doing certain things to speed up the use of what is called "kernel memory."  If you think of the kernel as a little homunculus guy (call him the Kernel) sitting in the control room doing the computer math, the trick they were playing with the Kernel's memory amounts to having other homunculus-people in the room guess at what the Kernel's going to want to do next, and bring stuff out of memory so it can be waiting for him when he needs it.  And all this stuff has to be secure from outside spying, so there are even security checks done way inside the control room there.

But Meltdown evidently exploits some little timing gap between the moment the contents of memory get there and the moment they are certified as secure.  It's like some spy taking a picture of the secret document during the few seconds between its arrival in the room and when it's put into the "Top Secret" box.  I'm sure some computer scientists are having a good laugh at my pitiful attempt to describe this thing, but that's the impression I got, anyway.
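
For readers who want something a little more concrete than my spy analogy, here is a bare-bones sketch in C, my own illustration cobbled together from public descriptions of the bug rather than code from the actual Meltdown research, of the cache-timing measurement such attacks depend on.  It only times reads of the program's own memory, so it is not an exploit; it simply shows that a value already sitting in the processor's cache comes back much faster than one fetched from main memory, and that timing difference is the "camera" the spy uses.  It assumes an x86-64 processor and a gcc- or clang-style compiler.

/* Minimal demonstration of the cache-timing side channel.
 * Probes only this program's own array; not an exploit. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>          /* _mm_clflush, _mm_mfence, __rdtscp */

static uint8_t probe[4096];     /* the memory line we will time */

static uint64_t timed_load(volatile uint8_t *p) {
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);   /* read the timestamp counter */
    (void)*p;                          /* the load being timed */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void) {
    _mm_clflush(probe);                /* evict the line from the cache */
    _mm_mfence();                      /* make sure the flush has finished */
    uint64_t cold = timed_load(probe); /* slow: fetched from main memory */
    uint64_t warm = timed_load(probe); /* fast: now sitting in the cache */
    printf("cold load: %llu cycles, warm load: %llu cycles\n",
           (unsigned long long)cold, (unsigned long long)warm);
    return 0;
}

In a real Meltdown attack, a value speculatively read from kernel memory determines which line of an array like this gets pulled into the cache, and the attacker then times every line to see which one is warm, all in the tiny window before the processor's security check catches up.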

So there are two ways to fix it:  redesign the hardware or write a software patch and put it in upgrades.  Obviously, if you're running older hardware, you're not going to rip out your Intel processor and put in the new one once they've designed the flaw out of it.  So the only practical thing right now is installing software fixes, which evidently will be included in standard operating system upgrades for PCs and Macs. 

Realistically, though, it appears that actually using these bugs to steal data is very tricky, and that is probably why nobody has discovered evidence that they've ever been used maliciously.  But even if they haven't, everybody knows about them now, and so theoretically a non-upgraded Mac could be spied on without a trace.  I'll put upgrading my OS on my to-do list for the new year, anyway.

This whole episode highlights the question of what computer researchers should do when they discover flaws that no one else had suspected.  We can be grateful that Daniel Gruss and his colleagues at Austria's Graz University of Technology, and Jann Horn at Google's Project Zero, who discovered the bugs independently of one another, did the responsible thing and informed Intel and company of the problems as soon as they found they could be exploited.

But it's not that hard to imagine what might have happened if some criminal group, or worse, a state bent on cyber-warfare, had discovered these flaws first.  There are countries where highly advanced computer-science research is going on, and where researchers would be encouraged not to notify the manufacturers in the U. S., but to inform their government's military of such discoveries for use in future cyberattacks.  It's a little bit like imagining what World War II would have been like if Hitler hadn't chased away most of Germany's leading nuclear physicists and had gotten hold of nuclear weapons before the Allies did.

Recently I saw "Darkest Hour," the film about Winston Churchill during the crucial days in May of 1940, as Hitler's armies were overwhelming continental Europe and Churchill accepted the post of Prime Minister of the United Kingdom.  Things looked really bad at the time, and many powerful people advised him to give up the fight as hopeless and settle with Hitler before all was lost.  But needless to say, Churchill made the right decision and rallied Parliament with his famous speech in which he declared, "We shall never surrender."

It's easy to get all nostalgic over times when issues were more clear-cut, and the only kinds of military threats were physical things like guns, airplanes, and bombs.  Not that World War II was a picnic—it was the worst self-inflicted cataclysm humanity has devised so far.  And tragic times make heroes, as World War II made heroes of Churchill and of millions of otherwise ordinary people who lived through that extraordinary time.

But we have similar heroes working among us even today.  For every researcher and scientist who worked on nuclear weapons, radar, or other advanced military technologies back then, we have people like Gruss and Horn now who discover potential threats to the world's infrastructure and turn them over to those who will mitigate them, not exploit them for evil ends.  So here is a verbal bouquet of thanks to both them and other computer wonks who use their discoveries for good and not evil.  May their tribe increase, and may we never have cause to watch a future reality-based movie about how some nasty computer virus killed thousands before the good guys figured out how to stop it.

Sources:  I referred to articles on Meltdown and Spectre carried on the BBC website at http://www.bbc.com/news/technology-42575033 and a report on ZeroHedge.com describing how the bugs were discovered at https://www.zerohedge.com/news/2018-01-05/meltdown-story-how-researcher-discovered-worst-flaw-intel-history, as well as the Wikipedia article "Meltdown (security vulnerability)."

Monday, January 01, 2018

Thank God for Gravity: Scott Kelly's Endurance and the Future of Space Travel


Scott Kelly is a retired NASA astronaut, a veteran of a year's stay on the International Space Station (ISS), and now the author of a popular memoir called Endurance.  He is also the twin brother of fellow astronaut Mark Kelly, who is married to former Rep. Gabrielle Giffords, the survivor of a 2011 assassination attempt.  At the time Giffords was shot, her brother-in-law was in orbit during an earlier ISS stay.

Needless to say, Kelly has led an eventful life, and his memoir is rather unusual in that he doesn't shy away from matters that reflect badly on either him or aspects of the space program.  He is honest about many of his shortcomings, including a first marriage that ended in divorce.  And when, in the course of narrating his experiences in space in detail, he is inconvenienced by a NASA policy or action, he lets you know about it.  The part of the book that describes day-to-day life on the ISS has got to be one of the most detailed and vivid descriptions of space flight in print.  And that's the problem.  If he wrote this book to encourage people to think about mass migration to space, it may have backfired.

To a landlubber like me, the trials Kelly endures and the risks he undergoes to be in space are appalling.  Take what sounds like a simple thing:  a space walk.  First off, it takes about five hours to get ready, involving hundreds of separate checklisted steps, many of which, if neglected or done in the wrong order, could result in your untimely and painful demise.

Then there is the zero-G environment.  I never really appreciated gravity until I read Kelly's book.  On earth, you put down a pen, or a wrench, or a screw, and it stays there.  Not in space.  Every single last thing you might possibly need has to be tied down, kept in a bag, stuck to a piece of Velcro (TM), or otherwise secured, or else on the space walk you will inadvertently contribute to the already vast quantity of space junk orbiting Earth, and lose whatever you needed in the bargain.  The most chilling aspect of his space walks is learning that, over the nearly two decades that at least some of the ISS has been up there, micrometeoroids and orbiting pieces of derelict satellites have punched holes in the station and taken entire chunks out of handrails on the outside of the structure.  And it's just a matter of chance whether another one of those 17,000-MPH pieces of debris drills a hole through you while you're outside.

Kelly, along with anyone else who endures the rigors of years of training and competition to go into space, deserves accolades for his monumental achievements.  But at the same time, I can't help but wonder whether the up-close view of what life in space is really like lends more weight to the argument that, as many people say of New York, it's a nice place to visit but I wouldn't want to live there.

The question really boils down to this:  is space, and whatever lies beyond in terms of potentially habitable planets, really more like America in 1620, or Antarctica in 1920?  Here's what I mean.

Right now, corporations are being organized to go into space exploration commercially, and large groups of visionaries are planning to spread humanity in some form to other planets.  From what I can tell, these folks believe that space and the regions beyond it will eventually harbor lots of people, as the New World (North and South America) does now.  Some even seem to think that we have damaged our planet beyond repair with global warming and pollution, and that we had better start making plans for an exit strategy when Earth becomes uninhabitable.  Either way, these folks (with whom Kelly, incidentally, does not explicitly identify himself) have an attitude toward space that says it is our manifest destiny to go there and occupy it with all the trappings of civilization:  cities, nations, the whole bit.  They would say that space now is like America was to Europe in 1620:  a new world beckoning us to explore and settle it.

If Kelly's spare-time reading had been about Columbus or Vasco da Gama, I might agree that he is of that party.  But what book did he take with him on the ISS?  Endurance:  Shackleton's Incredible Voyage, the story of Antarctic explorer Ernest Shackleton's ill-fated 1914 expedition to cross the continent.  His ship got stuck in the ice, and Shackleton and his men ended up adrift on floes of sea ice and in lifeboats.  In his choice of reading matter, I think Kelly has inadvertently answered the question of what space exploration most resembles in the history of humanity so far.  And it doesn't bode well for any large-scale plans that involve moving lots of ordinary people into space.

There have been manned outposts in the extreme Arctic and Antarctic regions for about a century now, and the population of Antarctica still numbers only a few thousand at its seasonal peak.  The fact is that the environment there is so hostile to human life that living there is extremely expensive, inconvenient, and worthwhile only if a strong scientific or cultural motive justifies it.

I think space is the same way.  It's hard to imagine how we could make space travel so safe, convenient, and comfortable that you could get lots of people (I'm talking thousands at least) to attempt it.  And by definition, you have to travel through space to get to anywhere besides Earth. 

So I salute Kelly and his compatriots for the incredible achievements they have made in simply keeping the ISS running and staying alive up there, no small part of which is due to Kelly's accumulated expertise in repeatedly fixing what has to be the world's most expensive toilet.  But by the same token, I think space travel will remain a hugely expensive and highly specialized endeavor for a narrowly chosen few, for the foreseeable future—and maybe beyond that, too.

Sources:  I thank my wife for giving me Scott Kelly's Endurance:  A Year In Space, A Lifetime of Discovery (Knopf, 2017) for Christmas.