CATSCAN 1

"Midnight on the Rue Jules Verne"

A kind of SF folk tradition surrounds the founding figure of Jules Verne. Everyone knows he was a big cheese back when the modern megalopolis of SFville was a 19th-century village. There's a bronze monument to him back in the old quarter of town, the Vieux Carre. You know, the part the French built, back before there were cars.



At midnight he stands there, somewhat the worse for the acid rain and the pigeons, his blind bronze eyes fixed on a future that has long since passed him by. SFville's citizenry pass him every day without a thought, their attention fixed on their daily grind in vast American high-rises; if they look up, they are intimidated by the beard, the grasped lapel, the flaking reek of Victorian obsolescence.

Everyone here knows a little about old Jules. The submarine, the moon cannon, the ridiculously sluggish eighty days. When they strip up the tarmac, you can still see the cobbles of the streets he laid. It's all still there, really, the village grid of SFville, where Verne lived and worked and argued scientific romance with the whippersnapper H.G. Wells. Those of us who walk these mean streets, and mutter of wrecking balls and the New Jerusalem, should take the time for a look back. Way back. Let's forget old Jules for the moment. What about young Jules?

Young Jules Verne was trouble. His father, a prosperous lawyer in the provincial city of Nantes, was gifted with the sort of son that makes parents despair. The elder Verne was a reactionary Catholic, given to frequent solitary orgies with the penitential scourge. He expected the same firm moral values in his heir.

Young Jules wanted none of this. It's sometimes mentioned in the SF folktale that Jules tried to run away to sea as a lad. The story goes that he was recaptured, punished, and contritely promised to travel henceforth "only in his imagination." It sounds cute. It was nothing of the kind. The truth of the matter is that the eleven-year-old Jules resourcefully bribed a cabin-boy of his own age, and impersonated his way onto a French merchant cruiser bound for the Indies. In those days of child labor, the crew accepted Jules without hesitation. It was a mere fluke that a neighbor happened to spot Jules during his escape and informed against him. His father had to chase him down in a fast chartered steam-launch.

This evidence of mulishness seems to have thrown a scare into the Verne family, and in years to come they would treat Jules with caution. Young Jules never really broke with his parents, probably because they were an unfailing source of funds. Young Jules didn't much hold with wasting time on day-jobs. He was convinced that he was possessed of genius, despite the near-total lack of hard evidence.

During his teens and twenties, Jules fell for unobtainable women with the regularity of clockwork. Again and again he was turned down by middle-class nymphs whose parents correctly assessed him as an art nut and spoiled ne'er-do-well.

Under the flimsy pretext of studying law, Jules managed to escape to Paris. He had seen the last of stuffy provincial France, or so he assumed: "Well," he wrote to a friend, "I'm leaving at last, as I wasn't wanted here, but one day they'll see what stuff he was made of, that poor young man they knew as Jules Verne."

The "poor young man" rented a Parisian garret with his unfailing parental stipend. He soon fell in with bad company--namely, the pop-thriller writer Alexandre Dumas père (author of The Count of Monte Cristo, The Three Musketeers, and about a million others). Jules took readily to the role of déclassé intellectual and professional student. During the Revolution of 1848 he passed out radical political pamphlets on Paris streetcorners. At night, embittered by female rejection, he wrote sarcastic sonnets on the perfidy of womankind. Until, that is, he had his first affair with an obliging housemaid, one of Dumas' legion of literary groupies. After this, young Jules loosened up to the point of moral collapse and was soon, by his own admission, a familiar figure in all the best whorehouses in Paris.

This went on for years. Young Jules busied himself writing poetry and plays. He became a kind of gofer for Dumas, devoting vast amounts of energy to a Dumas playhouse that went broke. (Dumas had no head for finance--he kept his money in a baptismal font in the entryway of his house and would stuff handfuls into his pockets whenever going out.)

A few of Jules' briefer pieces--a domestic farce, an operetta--were produced, to general critical and popular disinterest. During these misspent years Jules wrote dozens of full-length plays, most of them never produced or even published, in much the vein of would-be Hollywood scriptwriters today. Eventually, having worked his way into the theatrical infrastructure by dint of prolonged and determined hanging-out, Jules got a production job in another playhouse, for no salary to speak of. He regarded this as his big break, and crowed vastly to his family in cheerful letters that made fun of the Pope.

Jules moved in a fast circle. He started a literary-artistic group of similar souls, a clique appropriately known as the Eleven Without Women. Eventually one of the Eleven succumbed, and invited Jules to the wedding. Jules fell immediately for the bride's sister, a widow with two small daughters. She accepted his proposal. (Given Jules' record, it is to be presumed that she took what she could get.)

Jules was now married, and his relentlessly unimaginative wife did what she could to break him to middle-class harness. Jules' new brother-in-law was doing okay in the stock market, so Jules figured he would give it a try. He extorted a big loan from his despairing father and bought a position on the Bourse. He soon earned a reputation among his fellow brokers as a cut-up and general weird duck. He didn't manage to go broke, but a daguerreotype of the period shows his mood. The extended Verne family sits stiffly before the camera. Jules is the one in the back, his face in a clown's grimace, his arm blurred as he waves wildly in a brokerage floor "buy" signal.

Denied his longed-for position in the theater, Jules groaningly decided that he might condescend to try prose. He wrote a couple of stories heavily influenced by Poe, a big period favorite of French intellectuals. There was a cheapo publisher in town who was starting a kids' pop-science magazine called "Family Museum." Jules wrote a couple of pieces for peanuts and got cover billing. The publisher decided to try him out on books. Jules was willing. He signed a contract to do two books a year, more or less forever, in exchange for a monthly sum.

Jules, who liked hobnobbing with explorers and scientists, happened to know a local deranged techie called Nadar. Nadar's real name was Felix Tournachon, but everybody called him Nadar, for he was one of those period Gallic swashbucklers who passed through life with great swirlings of scarlet and purple and the scent of attar of roses. Nadar was involved in two breaking high-tech developments of the period: photography and ballooning. (Nadar is perhaps best remembered today as the father of aerial photography.)

Nadar had Big Ideas. Jules' real forte was geography--a date-line or a geodesic sent him into raptures--but he liked Nadar's style and knew good copy when he saw it. Jules helped out behind the scenes when Nadar launched THE GIANT, the largest balloon ever seen at the time, with a gondola the size of a two-story house, lavishly supplied with champagne. Jules never rode the thing--he had a wife and kids now--but he retired into his study with the plot-line of his first book, and drove his wife to distraction. "There are manuscripts everywhere--nothing but manuscripts," she said in a fine burst of wifely confidence. "Let's hope they don't end up under the cooking pot."

Five Weeks In A Balloon was Jules' first hit. The thing was a smash for his publisher, who sold it all over the world in lavish foreign editions for which Jules received pittances. But Jules wasn't complaining--probably because he wasn't paying attention.

With a firm toehold in the public eye, Jules soon hit his stride as a popular author. He announced to the startled stockbrokers: "Mes enfants, I am leaving you. I have had an idea, the sort of idea that should make a man's fortune. I have just written a novel in a new form, one that's entirely my own. If it succeeds, I shall have stumbled upon a gold mine. In that case, I shall go on writing and writing without pause, while you others go on buying shares the day before they drop and selling them the day before they rise. I am leaving the Bourse. Good evening, mes enfants."

Jules Verne had invented hard science fiction. He originated the hard SF metier of off-the-rack plots and characters, combined with vast expository lumps of pop science. His innovation came from literary naivete; he never learned better or felt any reason to. (This despite Apollinaire's sniping remark: "What a style Jules Verne has, nothing but nouns.")

Verne's dialogue, considered quite snappy for the period, was derived from the stage. His characters constantly strike dramatic poses: Ned Land with harpoon upraised, Phileas Fogg reappearing stage-right in his London club at the last possible tick of the clock. The minor characters--comic Scots, Russians, Jews--are all stage dialect and glued-on beards, instantly recognizable to period readers, yet fresh because of cross-genre effects. They brought a proto-cinematic flash to readers used to the gluey, soulful character studies of, say, Stendhal.

The books we remember, the books determined people still occasionally read, are products of Verne in his thirties and forties. (His first novel was written at thirty-five.) In these early books, flashes of young Jules' student radicalism periodically surface for air, much like the Nautilus. The character of Captain Nemo, for instance, is often linked to novelistic conventions of the Byronic hero. Nemo is, in fact, a democratic terrorist of the period of '48, the year when the working class flung up Paris barricades and, during a few weeks of brief civil war, managed to kill off more French army officers than were lost in the entire Napoleonic campaigns. The uprising was squelched, but Jules' generation of Paris '48, like that of May '68, never truly forgot.

Jules did okay by his "new form of the novel." He eventually became quite wealthy, though not through publishing but through the theater. (Nowadays it would be movie rights, but the principle still stands.) Jules, incidentally, did not write the stage versions of his own books; they were done by professional theater hacks. Jules knew the plays stank, and that they travestied his books, but they made him a fortune. The theatrical version of his mainstream smash, Michael Strogoff, included such lavish special effects as a live elephant on stage. It was so successful that the term "Strogoff" became contemporary Paris slang for anything wildly bravissimo.

Fortified with fame and money, Jules lunged against the traces. He travelled to America and Scandinavia, faithfully toting his notebooks. He bought three increasingly lavish yachts, and took to sea for days at a time, where he would lie on his stomach scribbling Twenty Thousand Leagues against the deck.

During the height of his popularity, he collected his family and sailed his yacht to North Africa, where he had a grand time and a thrilling brush with gun-toting Libyans. On the way back, he toured Italy, where the populace turned out to greet him with fireworks and speeches. In Rome, the Pope received him and praised his books because they weren't smutty. His wife, who was terrified of drowning, refused to get on the boat again, and eventually Verne sold it.

At his wife's insistence, Jules moved to the provincial town of Amiens, where she had relatives. Downstairs, Mme. Verne courted local society in drawing rooms crammed with Second Empire bric-a-brac, while Jules isolated himself upstairs in a spartan study worthy of Nemo, its walls lined with wooden cubbyholes full of carefully labeled index-cards. They slept in separate bedrooms, and rumor says Jules had a mistress in Paris, where he often vanished for weeks.

Jules' son Michel grew up to be a holy terror, visiting upon Jules all the accumulated karma of his own lack of filial piety. The teenage Michel was in trouble with cops, was confined in an asylum, was even banished onto a naval voyage. Michel ended up producing silent films, not very successfully. Jules' stepdaughters made middle-class marriages and vanished into straitlaced Catholic domesticity, where they cooked up family feuds against their scapegrace half-brother.

Verne's work is marked by an obsession with desert islands. Mysterious Isles, secret hollow volcanoes in the mid-Atlantic, vast ice-floes that crack off and head for the North Pole. Verne never really made it into the bosom of society. He did his best, and played the part whenever onstage, but one senses that he knew somehow that he was Not Like The Others and might be torn to pieces if his facade cracked. One notes his longing for the freedom of empty seas and skies, for a submarine full of books that can sink below storm level into eternal calm, for the hollow shell fired into the pristine unpeopled emptiness of circumlunar space.

From within his index-card lighthouse, the isolation began to tell on the aging Jules. He had now streamlined the production of novels to industrial assembly-work, so much so that lying gossip claimed he used a troop of ghostwriters. He could field-strip a Verne book blindfolded, with a greased slot for every part--the daffy scientist, the comic muscleman or acrobat, the ordinary Joe who asks all the wide-eyed questions, the woman who scarcely exists and is rescued from suttee or sharks or red Indians.

Sometimes the machine is the hero--the steam-driven elephant, the flying war-machine, the gigantic raft--sometimes the geography: caverns, coal-mines, ice-floes, darkest Africa.

Bored, Jules entered politics, and joined the Amiens City Council, where he was quickly shuffled onto the cultural committee. It was a natural sinecure and he did a fair job, getting electric lights installed, widening a few streets, building a municipal theater that everyone admired and no one attended. His book sales slumped steadily. The woods were full of guys writing scientific romances by now--people who actually knew how to write novels, like Herbert Wells. The folk-myth quotes Verne on Wells' First Men In The Moon: "Where is this gravity-repelling metal? Let him show it to me." If not the earliest, it is certainly the most famous exemplar of the hard-SF writer's eternal plaint against the fantasist.

The last years were painful. A deranged nephew shot Verne in the foot, crippling him; it was at this time that he wrote one of his rare late poems, the "Sonnet to Morphine." He was to have a more than nodding acquaintance with this substance, though in those days of children's teething-laudanum no one thought much of it. He died at seventy-seven in the bosom of his vigorously quarrelling family, shriven by the Church. Everyone who had forgotten about him wrote obits saying what a fine fellow he was. This is the Verne everyone thinks that they remember: the greybearded paterfamilias, the conservative Catholic hardware-nut, the guy who made technical forecasts that Really Came True if you squint real hard and ignore most of his work.

Jules Verne never knew he was "inventing science fiction," in the felicitous phrase of Peter Costello's insightful 1978 biography. He knew he was on to something hot, but he stepped onto a commercial treadmill that he didn't understand, and the money and the fame got to him. The early artistic failures, the romantic rejections, had softened him up, and when the public finally Recognized His Genius he was grateful, and fell into line with their wishes.

Jules had rejected respectability early on, when it was offered to him on a plate. But when he had earned it on his own, everyone around him swore that respectability was dandy, and he didn't dare face them down. Wanting the moon, he ended up with a hatch-battened one-man submarine in an upstairs room. Somewhere along the line his goals were lost, and he fell into a role his father might almost have picked for him: a well-to-do provincial city councilman. The garlands disguised the reins, and the streetcorner radical with a headful of visions became a dusty pillar of society.

This is not what the world calls a tragedy; nor is it any small thing to have books in print after 125 years. But the path Young Jules blazed, and the path Old Jules was gently led down, are still well-trampled streets here in SFville. If you stand by his statue at midnight, you can still see Old Jules limping home, over the cobblestones. Or so they say.

CATSCAN 2

"The Spearhead of Cognition"

You're a kid from some podunk burg in Alabama.

From childhood you've been gnawed by vague numinous sensations and a moody sense of your own potential, but you've never pinned it down.

Then one joyful day you discover the work of a couple of writers. They're pretty well-known (for foreigners), so their books are available even in your little town. Their names are "Tolstoy" and "Dostoevsky." Reading them, you realize: This is it! It's the sign you've been waiting for! This is your destiny-- to become a *Russian Novelist*!

Fired with inspiration, you study the pair of 'em up and down, till you figure you've got a solid grasp of what they're up to. You hear they're pretty well-known back in Russia, but to your confident eye they don't seem like so much. (Luckily, thanks to some stunt of genetics, you happen to be a genius.) For you, following their outline seems simple enough--in a more sophisticated vein, of course, and for a modern audience. So you write a few such books, you publish 'em, and people adore them. The folks in 'Bama are fit to bust with pride, and say you've got Tolstoy beat all hollow.

Then, after years of steadily growing success, strange mail arrives. It's from Russia! They've been reading your stuff in translation, and you've been chosen to join the Soviet Writers' Union! Swell! You think. Of course, living in backwoods Alabama, it's been a little tough finding editions of contemporary Russian novelists. But heck, Tolstoy did his writing years ago! By now those Russians must be writing like nobody's business!

Then a shipment of modern Russian novels arrives, a scattering of various stuff that has managed to elude the redtape. You open 'em up and--ohmiGod! It's ... it's COMMUNISM! All this stupid stereotyped garbage! About Red heroes ten feet tall, and sturdy peasants cheering about their tractors, and mothers giving sons to the Fatherland, and fathers giving sons to the Motherland ... Swallowing bile, you pore through a few more at random--oh God, it's awful.

Then the Literary Gazette calls from Moscow, and asks if you'd like to make a few comments about the work of your new comrades. "Why sure!" you drawl helpfully. "It's clear as beer-piss that y'all have gotten onto the wrong track entirely! This isn't literature--this is just a lot of repetitive agitprop crap, dictated by your stupid oppressive publishers! If Tolstoy was alive today, he'd kick your numb Marxist butts! All this lame bullshit about commie heroes storming Berlin and workers breaking production records--those are stupid power-fantasies that wouldn't fool a ten-year-old! You wanna know the true modern potential of Russian novels? Read some of my stuff, if you can do it without your lips moving! Then call me back."

And sure enough, they do call you back. But gosh--some of the hardliners in the Writers' Union have gone and drummed you out of the regiment. Called you all kinds of names ... said you're stuck-up, a tool of capitalism, a no-talent running-dog egghead. After that, you go right on writing, even criticism, sometimes. Of course, after that you start to get MEAN.

This really happened.

Except that it wasn't Tolstoy and Dostoevsky. It was H.G. Wells and Olaf Stapledon. It wasn't Russian novels, it was science fiction, and the Writers' Union was really the SFWA. And Alabama was Poland.

And you were Stanislaw Lem.

Lem was surgically excised from the bosom of American SF back in 1976. Since then plenty of other writers have quit SFWA, but those flung out for the crime of being a commie rat-bastard have remained remarkably few. Lem, of course, has continued to garner widespread acclaim, much of it from hifalutin' mainstream critics who would not be caught dead in a bookstore's skiffy section. Recently a collection of Lem's critical essays, _Microworlds_, has appeared in paperback. For those of us not privy to the squabble these essays caused in the '70s, it makes some eye-opening reading.

Lem compares himself to Crusoe, stating (accurately) that he had to erect his entire structure of "science fiction" essentially from scratch. He did have the ancient shipwrecked hulls of Wells and Stapledon at hand, but he raided them for tools years ago. (We owe the collected essays to the beachcombing of his Man Friday, Austrian critic Franz Rottensteiner.)

These essays are the work of a lonely man. We can judge the fervor of Lem's attempt to reach out by a piece like "On the Structural Analysis of Science Fiction": a Pole, writing in German, to an Austrian, about French semantic theory. The mind reels. After this superhuman effort to communicate, you'd think the folks would cut Lem some slack--from pure human pity, if nothing else.

But Lem's ideology--both political and literary--is simply too threatening. The stuff Lem calls science fiction looks a bit like American SF--about the way a dolphin looks like a mosasaur. A certain amount of competitive gnawing and thrashing was inevitable. The water roiled ten years ago, and the judgement of evolution is still out. The smart money might be on Lem. The smarter money yet, on some judicious hybridization. In any case we would do well to try to understand him.

Lem shows little interest in "fiction" per se. He's interested in science: the structure of the world. A brief autobiographical piece, "Reflections on My Life," makes it clear that Lem has been this way from the beginning. The sparkplug of his literary career was not fiction, but his father's medical texts: to little Stanislaw, a magic world of skeletons and severed brains and colorful pickled guts. Lem's earliest "writings," in high school, were not "stories," but an elaborate series of imaginary forged documents: "certificates, passports, diplomas . . . coded proofs and cryptograms . . ."

For Lem, science fiction is a documented form of thought-experiment: a spearhead of cognition. All else is secondary, and it is this singleness of aim that gives his work its driving power. This is truly "a literature of ideas," dismissing the heart as trivial, but piercing the skull like an ice-pick.

Given his predilections, Lem would probably never have written "people stories." But his rationale for avoiding this is astounding. The mass slaughters during the Nazi occupation of Poland, Lem says, drove him to the literary depiction of humanity as a species. "Those days have pulverized and exploded all narrative conventions that had previously been used in literature. The unfathomable futility of human life under the sway of mass murder cannot be conveyed by literary techniques in which individuals or small groups of persons form the core of the narrative."

A horrifying statement, and one that people in happier countries would do well to ponder. The implications of this literary conviction are, of course, extreme. Lem's work is marked by unflinching extremities. He fights through ideas with all the convulsive drive of a drowning man fighting for air. Story structure, plot, human values, characterization, dramatic tension, all are ruthlessly trudgeon-kicked aside.

In criticism, however, Lem has his breath, and can examine the trampled flotsam with a cynical eye. American SF, he says, is hopelessly compromised, because its narrative structure is trash: detective stories, pulp thrillers, fairy-tales, bastardized myths. Such outworn and kitschy devices are totally unsuited to the majestic scale of science fiction's natural thematics, and reduce it to the cheap tricks of a vaudeville conjurer.

Lem holds this in contempt, for he is not a man to find entertainment in sideshow magic. Stanislaw Lem is not a good-time guy. Oddly, for a science fiction writer, he seems to have very little interest in the intrinsically weird. He shows no natural appetite for the arcane, the offbeat, the outré. He is colorblind to fantasy. This leads him to dismiss much of the work of Borges, for example. Lem claims that "Borges' best stories are constructed as tightly as mathematical proofs." This is a tautology of taste, for, to Lem, mathematical proofs are the conditions to which the "best" stories must necessarily aspire.

In a footnote to the Borges essay Lem makes the odd claim that "As soon as nobody assents to it, a philosophy becomes automatically fantastic literature." Lem's literature *is* philosophy; to veer from the path of reason for the sake of mere sensation is fraudulent.

American SF, therefore, is a tissue of frauds, and its practitioners fools at best, but mostly snake-oil salesmen. Lem's stern puritanism, however, leaves him at sea when it comes to the work of Philip K. Dick: "A Visionary Among the Charlatans." Lem's mind was clearly blown by reading Dick, and he struggles to find some underlying weltanschauung that would reduce Dick's ontological raving to a coherent floor-plan. It's a doomed effort, full of condescension and confusion, like a ballet-master analyzing James Brown.

Fiction is written to charm, to entertain, to enlighten, to convey cultural values, to analyze life and manners and morals and the nature of the human heart. The stuff Stanislaw Lem writes, however, is created to burn mental holes with pitiless coherent light. How can one do this and still produce a product resembling "literature?" Lem tried novels. Novels, alas, look odd without genuine characters in them. Then he hit on it: a stroke of genius.

The collections A Perfect Vacuum and Imaginary Magnitude are Lem's masterworks. The first contains book reviews, the second, introductions to various learned tomes. The "books" discussed or reviewed do not actually exist, and have archly humorous titles, like "Necrobes" by "Cezary Strzybisz." But here Lem has found literary structures--not "stories"--but assemblages of prose, familiar and comfortable to the reader.

Of course, it takes a certain aridity of taste to read a book composed of "introductions," traditionally a kind of flaky appetizer before the main course. But it's worth it for the author's sense of freedom, his manifest delight in finally ridding himself of that thorny fictive thicket that stands between him and his Grail. These are charming pieces, witty, ingenious, highly thought-provoking, utterly devoid of human interest. People will be reading these for decades to come. Not because they work as fiction, but because their form follows function with the sinister elegance of an automatic rifle.

Here Lem has finessed an irrevocable choice. It is a choice every science fiction writer faces. Is the writer to write Real Novels which "only happen to be" science fiction--or create knobby and irreducible SF artifacts which are not true "stories," but visionary texts? The argument in favor of the first course is that Real Readers, i.e. mainstream ones, refuse to notice the nakedly science-fictional. How Lem must chuckle as he collects his lavish blurbs from Time and Newsweek (not to mention an income ranking as one of poor wretched Poland's best sources of foreign exchange). By disguising his work as the haute-lit exudations of a critic, he has out-conjured the Yankee conjurers, had his cake and eaten it publicly, in the hallowed pages of the NY Review of Books.

It's a good trick, hard to pull off, requiring ideas that burn so brilliantly that their glare is overwhelming. That ability alone is worthy of a certain writhing envy from the local Writers' Union. But it's still a trick, and the central question is still unresolved. What is "science fiction," anyway?

And what's it there for?

CATSCAN 3

"Updike's Version"

John Updike has got to be the epitome of everything that SF readers love to hate. Those slim, clever, etiolated mainstream novels about well-to-do New Yorker subscribers, who sip white wine and contemplate adultery ... Novels stuffed like Christmas geese with hi-falutin' literary values ...

Mention Updike at an SFWA gig, and you get yawns, shudders, shakings of the head ... His work affects science fiction writers like cayenne pepper affects a pack of bloodhounds.

Why? Because John Updike has everything SF writers don't. He is, in some very real sense, everything SF writers aren't.

Certain qualities exist that novelists are popularly supposed to possess. Gifts, abilities, that win An Author respect, that cause folks to back off and gape just a bit if they find one in a grocery line. Qualities like: insight into modern culture. A broad sympathy for the manifold quirks of human nature. A sharp eye for the defining detail. A quick ear for language. A mastery of prose.

John Updike possesses these things. He is erudite. He has, for instance, actually read Isak Dinesen, Wallace Stevens, Céline, Jean Rhys, Günter Grass, Nabokov and Bellow. Not only has he read these obscure and intimidating people, but he has publicly discussed the experience with every sign of genuine enjoyment.

Updike is also enormously clever, clever to a point that approaches genius through the sheer irrepressible business of its dexterity. Updike's paragraphs are so brittle, so neatly nested in their comma'ed clauses, that they seem to burst under the impact of the reader's gaze, like hyper-flaky croissants.

Updike sees how things look, notices how people dress, hears how people talk. His eye for the telling detail can make even golf and birdwatching, the ultimate yawnable whitebread Anglo pastimes, more or less interesting. (Okay--not very interesting, granted. But interesting for the sheer grace of Updike's narrative technique. Like watching Fred Astaire take out the garbage.)

It would be enlightening to compare John Updike to some paragon of science fiction writing. Unfortunately no such paladin offers himself, so we'll have to make do with a composite.

What qualities make a great science fiction writer? Let's look at it objectively, putting aside all that comfortable bullshit about the virtues authors are supposed to have. Let's look at the science fiction writer as he is.

Modern culture, for instance. Our SF paladin is not even sure it exists, except as a vaguely oppressive force he's evaded since childhood. He lives in his own one-man splinter culture, and has ever since that crucial time in childhood--when he was sick in bed for two years, or was held captive in the Japanese prison camp, or lived in the Comoros Islands with monstrous parents who were nuts on anthropology or astronomy or Trotsky or religion.

He's pretty much okay now, though, our science fiction author. He can feed himself and sign checks, and he makes occasional supply trips into the cultural anchorage of SF fandom, where he refreshes his soul by looking at people far worse off than he is. But he dresses funny, and mumbles to himself in the grocery line.

While standing there, he doesn't listen to the other folks and make surreptitious authorly notes about dialogue. Far from it: he's too full of unholy fire to pay much attention to mere human beings. And anyway, his characters generally talk about stuff like neutrinos or Taoism.

His eyes are glazed, cut off at the optic nerve while he watches brain-movies. Too many nights in too many cheap con hotels have blunted his sense of aesthetics; his characters live in geodomes or efficiencies or yurts. They wear one-piece jumpsuits because jumpsuits make people one monotonous color from throat to foot, which allows our attention to return to the neutrinos--of which, incidentally, ninety percent of the universe consists, so that the entire visible world of matter is a mere *froth*, if we only knew.

But he's learned his craft, our science fiction paladin. The real nutcases don't have enough mental horsepower to go where he's gone. He works hard and he thinks hard and he knows what he's doing. He's read Kuttner and Kornbluth and Blish and Knight, and he knows how to Develop an Idea entertainingly and rigorously, and how to keep pages turning meanwhile, and by Christ those are no easy things. So there, Mr. John Updike with your highflown talk of aht and beautieh. That may be okay for you Ivy League pinky-lifters with your sissy bemoaning about the Crisis of Culture ... As if there was going to be a culture after the millennial advent of (Biotech) (Cybernetics) (Space Travel) (Robots) (Atomic Energy) (General Semantics) (Dean Drive) (Dianetics) ...

So--there's the difference. It exists, for better or worse. None of this is lost on John Updike. He knows about science fiction, not a hell of a lot, but probably vastly more than most science fiction writers know about John Updike. He recognizes that it requires specialized expertise to write good SF, and that there are vast rustling crowds of us on the other side of the cultural spacewarp, writing for Ace Books and Amazing Stories. Updike reads Vonnegut and Le Guin and Calvino and Lem and Wells and Borges, and would probably read anybody else whose prose didn't cause him physical pain. And from this reading, he knows that the worldview is different in SFville ... that writers think literature, and that SF writers think SF.

And he knows, too, that it's not T.S. Eliot's world any more, if indeed it ever was T.S. Eliot's world. He knows we live in a world that loves to think SF, and has thought SF ever since Hiroshima, which was the ne plus ultra of Millennial Technological Advents, which really and truly did change the world forever.

So Updike has rolled up his pinstriped sleeves and bent his formidable intelligence in our direction, and lo we have a science fiction novel, Roger's Version by John Updike. Of course it's not *called* a science fiction novel. Updike has seen Le Guin and Lem and Vonnegut crawl through the spacewarp into his world. He's seen them wriggle out, somehow, barely, gasping and stinking of rocket fuel. Updike has no reason to place himself in a position they went to great pains to escape. But _Roger's Version_ does feature a computer on its cover, if not a rocketship or a babe in a bubble helmet, and by heaven it is a science fiction novel--and a very good one.

Roger's Version is Updike's version of what SF should be on about. It deals with SF's native conceptual underpinnings: the impact of technology on society. The book is about technolatry, about millennial visionary thinking. This is SF-think as examined by a classic devotee of lit-think.

It's all there, quite upfront and nakedly science fictional. It puzzles mainstream commentators. "It's as though Updike had challenged himself to convert into the flow of his novel the most resistant stuff he could think of," marvels the Christian Science Monitor, alarmed to find a Real Novel that actually deals straightforwardly with real ideas. "The aggressiveness of Updike's imagination is often a marvel," says People, a mag whose utter lack of imagination is probably its premier selling point.

And look at this list of author's credits: Fred Hoyle, Martin Gardner, Gerald Feinberg, Robert Jastrow. Don't tell me Updike's taken the *science* seriously. But he has--he's not the man to deny the devil his due, especially after writing Witches of Eastwick, which would have been called a fantasy novel if it had been written badly by a nobody.

But enough of this high-flown abstraction--let's get to grips with the book. There's these two guys, see. There's Roger Lambert, a middle-aged professor of theology, a white-wine-sipping adultery-contemplating intellectual New Englander who probably isn't eighty light-years removed from John Updike. Roger's a nasty piece of business, mostly, lecherous, dishonest and petty-minded, and obsessed with a kind of free-floating Hawthornian Protestant guilt that has been passed down for twenty generations up Boston way and hasn't gotten a bit more specific in the meantime.

And then there's Roger Lambert's antagonist, Dale Kohler. Dale's a young computer hacker with pimples and an obnoxious cocksure attitude. If Dale were just a little more hip about it, he'd be a cyberpunk, but for thematic reasons Updike chose to make Dale a born-again Christian. We never really believe this, though, because Dale almost never talks Jesus. He talks AND-OR circuits, and megabytes, and Mandelbrot sets, with all the techspeak fluency Updike can manage, which is considerable. Dale talks God on a microchip, technological transcendence, and he was last seen in Greg Bear's Blood Music where his name was different but his motive and character were identical. Dale is a type. Not just a science fictional type, but the type that *creates* science fiction, who talks God for the same reason Philip K. Dick talked God. Because it comes with the territory.

Oh yeah, and then we've got some women. They don't amount to much. They're not people, exactly. They're temptresses and symbols.

There's Roger Lambert's wife, Esther, for instance. Esther ends up teaching Dale Kohler the nature of sin, which utterly destroys Dale's annoying moral certitude, and high time, too. Esther does this by the simple expedient of adulterously fucking Dale's brains out, repeatedly and in meticulously related detail, until Dale collapses from sheer weight of original sin.

A good trick. But Esther breezes through this inferno of deviate carnality, none the worse for the experience; invigorated, if anything. Updike tells us an old tale in this: that women *are* sexuality, vast unplumbed cisterns of it, creatures of mystery, vamps of the carnal abyss. I just can't bring myself to go for this notion, even if the Bible tells me so. I know that women don't believe this stuff.

Then there's Roger Lambert's niece, Verna. I suspect she represents the Future, or at least the future of America. Verna's a sad case. She lives on welfare with her illegitimate mulatto kid, a little girl who is Futurity even more incarnate. Verna listens to pop music, brain-damaging volumes of it. She's cruel and stupid, and as corrupt as her limited sophistication allows. She's careless of herself and others, exults in her degradation, whores sometimes when she needs the cocaine money. During the book's crisis, she breaks her kid's leg in a reckless fit of temper.

A woman reading this portrayal would be naturally enraged, reacting under the assumption that Updike intends us to believe in Verna as an actual human being. But Verna, being a woman, isn't. Verna is America, instead: dreadfully hurt and spiritually degraded, cheapened, teasing, but full of vitality, and not without some slim hope of redemption, if she works hard and does what's best for her (as defined by Roger Lambert). Also, Verna possesses the magic of fertility, and nourishes the future, the little girl Paula. Paula, interestingly, is every single thing that Roger Lambert isn't, i.e. young, innocent, trusting, beautiful, charming, lively, female and not white.

Roger sleeps with Verna. We've seen it coming for some time. It is, of course, an act of adultery and incest, compounded by Roger's complicity in child abuse, quite a foul thing really, and narrated with a certain gloating precision that fills one with real unease. But it's Updike's symbolic gesture of cultural rapprochement. "It's helped get me ready for death," Roger tells Verna afterward. Then: "Promise me you won't sleep with Dale." And Verna laughs at the idea, and tells him: "Dale's a non-turnon. He's not even evil, like you." And gives Roger the kiss of peace.

So, Roger wins, sort of. He is, of course, aging rapidly, and he knows his cultural values don't cut it any more, that maybe they never cut it, and in any case he is a civilized anachronism surrounded by a popcultural conspiracy of vile and rising noise. But at least *Dale* doesn't win. Dale, who lacks moral complexity and a proper grasp of the true morbidity of the human condition, thinks God can be found in a computer, and is properly nemesized for his hubris. The future may be fucked, but at least Dale won't be doing it.

So it goes, in Roger's Version. It's a good book, a disturbing book. It makes you think. And it's got an edge on it, a certain grimness and virulence of tone that some idiot would probably call "cyberpunk" if Updike were not writing about the midlife crisis of a theology professor.

Roger's Version is one long debate, between Updike's Protestantism and the techno-zeitgeist of the '80s. With great skill, Updike parallels the arcanity of cyberdom and the equally arcane roots of Christian theology. It's good; it's clever and funny; it verges on the profound. The far reaches of modern computer science--chaos theory, fractals, simulationism, statistical physics and so on--are indeed theological in their implications. Some of their spokesmen have a certain evangelical righteousness of tone that could only alarm a cultural arbiter like John Updike. There are indeed heretic gospels inside that machine, just like there were gospels in a tab of LSD, only more so. And it's a legitimate writerly task to inquire about those gospels and wonder if they're any better than the old one.

So John Updike has listened, listened very carefully and learned a great deal, which he parades deftly for his readership, in neatly tended flashes of hard-science exposition. And he says: I've heard it before, and I may not exactly believe in that Old Rugged Cross, but I'm damned if I'll believe these crazy hacker twerps with their jogging shoes.

There's a lot to learn from this book. It deals with the entirety of our zeitgeist with a broad-scale vision that we SF types too often fail to achieve.

It's an interesting debate, though not exactly fair: it's muddied with hatred and smoldering jealousy, and a very real resentment, and a kind of self-loathing that's painful to watch.

And it's a cheat, because Dale's "science" has no real intellectual validity. When you strip away the layers of Updike's cyber-jargon, Dale's efforts are only numerology, the rankest kind of dumb superstition. "Science" it's not. It's not even good theology. It's heretic voodoo, and its pre-arranged failure within this book proves nothing about anything.

Updike is wrong. He clings to a rotting cultural fabric that he knows is based on falsehoods, and rejects challenges to that fabric by declaring "well you're another." But science, true science, does learn from mistakes; theologians like Roger Lambert merely further complicate their own mistaken premises.

I remain unconvinced, though not unmoved, by Updike's object lesson. His book has hit hard at my own thinking, which, like that of most SF writers, is overly enamored of the millennial and transcendent. I know that the twentieth century's efforts to kick Updike's Judaeo-Christian WestCiv values have been grim: Stalin's industrial terror, Cambodia's sickening Luddite madness, the convulsions today in Islam ... it was all "Year Zero" stuff, attempts to sweep the board clean, that merely swept away human sanity, instead. Nor do I claim that the squalid consumerism of today's "secular-Humanist" welfare states is a proper vision for society.

But I can't endure the sheer snobbish falseness of Updike's New England Protestantism. Never mind that it's the legacy of American letters, that it's the grand tradition of Hawthorne and Melville, that it's what made America great. It's a shuck, ladies and gentlemen. It won't wash. It doesn't own the future; it won't even kiss the future goodbye on its way to the graveyard. It doesn't own our minds any more.

We don't live in an age of answers, but an age of ferment. And today that ferment is reflected faithfully in a literature called science fiction.

SF may be crazy, it may be dangerous, it may be shallow and cocksure, and it should learn better. But in some very real way it is truer to itself, truer to the world, than is the writing of John Updike.

This is what has drawn Updike, almost despite himself, into science fiction's cultural territory. For SF writers, his novel is a lesson and a challenge. A lesson that must be learned and a challenge that must be met.

CATSCAN 4

"The Agberg Ideology"

To speak with precision about the fantastic is like loading mercury with a pitchfork. Yet some are driven to confront this challenge. On occasion, a veteran SF writer will seriously and directly discuss the craft of writing science fiction.

A few have risked doing this in cold print. Damon Knight, for instance. James Blish (under a pseudonym). Now Robert Silverberg steps deliberately into their shoes, with Robert Silverberg's Worlds of Wonder: Exploring the Craft of Science Fiction (Warner Books, 1987, $17.95).

Here are thirteen classic SF stories by well-known genre authors. Most first appeared in genre magazines during the 1950s. These are stories which impressed Silverberg mightily as he began his career. They are stories whose values he tried hard to understand and assimilate. Each story is followed by Silverberg's careful, analytical notes.

And this stuff, ladies and gents, is the SF McCoy. It's all shirtsleeve, street-level science fiction; every story in here is thoroughly crash-tested and cruises like a vintage Chevy.

Worlds of Wonder is remarkable for its sober lack of pretension. There's no high-tone guff here about how SF should claim royal descent from Lucian, or Cyrano de Bergerac, or Mary Shelley. Credit is given where credit is due. The genre's real founders were twentieth-century weirdos, whacking away at their manual typewriters, with amazing persistence and energy, for sweatshop pay. They had a definite commonality of interest. Something more than a mere professional fraternity. Kind of like a disease.

In a long, revelatory introduction, Silverberg describes his own first exposure to the vectors of the cultural virus: SF books.

"I think I was eleven, maybe twelve ... [The] impact on me was overwhelming. I can still taste and feel the extraordinary sensations they awakened in me: it was a physiological thing, a distinct excitement, a certain metabolic quickening at the mere thought of handling them, let alone reading them. It must be like that for every new reader--apocalyptic thunderbolts and eerie unfamiliar music accompany you as you lurch and stagger, awed and shaken, into a bewildering new world of ideas and images, which is exactly the place you've been hoping to find all your life."

If this paragraph speaks to your very soul with the tongue of angels, then you need this anthology. Buy it immediately, read it carefully. It's full of home truths you won't find anywhere else.

This book is Silverberg's vicarious gift to his younger self, the teenager described in his autobiographical introduction: an itchy, over-bright kid, filled with the feverish conviction that to become a Science Fiction Writer must surely be the moral pinnacle of the human condition.

And Silverberg knows very well that the kids are still out there, and that the virus still spreads. He can feel their hot little hands reaching out plaintively in the dark. And he's willing, with a very genuine magnanimity, to help these sufferers out. Just as he himself was helped by an earlier SF generation, by Mr. Kornbluth, and Mr. Knight, and Mr. and Mrs. Kuttner, and all those other rad folks with names full of consonants.

Silverberg explains his motives clearly, early on. Then he discusses his qualifications to teach the SF craft. He mentions his many awards, his fine reviews, his length of service in the SF field, and, especially, his success at earning a living. It's a very down-home, pragmatic argument, with an aw-shucks, workin'-guy, just-folks attitude very typical of the American SF milieu. Silverberg doesn't claim superior knowledge of writerly principle (as he might well). He doesn't openly pose as a theorist or ideologue, but as a modest craftsman, offering rules of thumb.

I certainly don't scorn this offer, but I do wonder at it. Such modesty may well seem laudable, but its unspoken implications are unsettling. It seems to show an unwillingness to tackle SF's basic roots, to establish a solid conceptual grounding. SF remains pitchforked mercury, jelly nailed to a tree; there are ways to strain a living out of this ichor, but very few solid islands of theory.

Silverberg's proffered definition of science fiction shows the gooeyness immediately. The definition is rather long, and comes in four points:

1. An underlying speculative concept, systematically developed in a way that amounts to an exploration of the consequences of allowing such a departure from known reality to impinge on the universe as we know it.

2. An awareness by the writer of the structural underpinnings (the "body of scientific knowledge") of our known reality, as it is currently understood, so that the speculative aspects of the story are founded on conscious and thoughtful departures from those underpinnings rather than on blithe ignorance.

3. Imposition by the writer of a sense of limitations somewhere in the assumptions of the story ...

4. A subliminal knowledge of the feel and texture of true science fiction, as defined in a circular and subjective way from long acquaintance with it.

SF is notoriously hard to define, and this attempt seems about as good as anyone else's, so far.

Hard thinking went into it, and it deserves attention. Yet point four is pure tautology. It is the Damon Knight dictum of "SF is what I point at when I say 'SF,'" which is very true indeed. But this can't conceal deep conceptual difficulties.

Here is Silverberg defining a "Story." "A story is a machine that enlightens: a little ticking contrivance ... It is a pocket universe ... It is an exercise in vicarious experience ... It is a ritual of exorcism and purgation. It is a set of patterns and formulas. It is a verbal object, an incantation made up of rhythms and sounds."

Very fluent, very nice. But: "A science fiction story is all those things at once, and something more." Oh? What is this "something more?" And why does it take second billing to the standard functions of a generalized "story?"

How can we be certain that "SF" is not, in fact, something basically alien to "Story-telling?" "Science fiction is a branch of fantasy," Silverberg asserts, finding us a cozy spot under the sheltering tree of Literature. Yet how do we really know that SF is a "branch" at all?

The alternative would be to state that science fiction is not a true kind of "fiction" at all, but something genuinely monstrous. Something that limps and heaves and convulses, without real antecedents, in a conceptual no-man's land. Silverberg would not like to think this; but he never genuinely refutes it.

Yet there is striking evidence of it, even in Worlds of Wonder itself. Silverberg refers to "antediluvian SF magazines, such as Science Wonder Stories from 1929 and Amazing Stories from 1932 ... The primitive technique of many of the authors didn't include such frills as the ability to create characters or write dialogue ... [T]he editors of the early science fiction magazines had found it necessary to rely on hobbyists with humpty-dumpty narrative skills; the true storytellers were off writing for the other pulp magazines, knocking out westerns or adventure tales with half the effort for twice the pay."

A nicely dismissive turn of phrase. But notice how we confront, even in very early genre history, two distinct castes of writer. We have the "real storytellers," pulling down heavy bread writing westerns, and "humpty-dumpty hobbyists" writing this weird-ass stuff that doesn't even have real dialogue in it. A further impudent question suggests itself: if these "storytellers" were so "real," how come they're not still writing successfully today for Argosy and Spicy Stories and Aryan Atrocity Adventure? How come, among the former plethora of pulp fiction magazines, the science fiction zines still survive? Did the "storytellers" somehow ride in off the range to rescue Humpty Dumpty? If so, why couldn't they protect their own herd?

What does "science fiction" really owe to "fiction," anyway? This conceptual difficulty will simply not go away, ladies and gentlemen. It is a cognitive dissonance at the heart of our genre. Here is John Kessel, suffering the ideological itch, Eighties version, in SF Eye #1:

"Plot, character and style are not mere icing ... Any fiction that conceives of itself as a vehicle for something called 'ideas' that can be inserted into and taken out of the story like a passenger in a Toyota is doomed, in my perhaps staid and outmoded opinion, to a very low level of achievement."

A "low level of achievement." Not even Humpty Dumpty really wants this. But what is the "passenger," and what are the "frills?" Is it the "storytelling," or is it the "something more?" Kessel hits a nerve when he demands, "What do you mean by an 'idea' anyway?" What a difficult question this is!

The craft of storytelling has been explored for many centuries, in many cultures. Blish called it "a huge body of available technique," and angrily demanded its full use within SF. And in Worlds of Wonder, Silverberg does his level best to convey the basic mechanics. Definitions fly, helpful hints abound. A story is "the working out of a conflict." A story "has to be built around a pattern of oppositions." Storytelling can be summed up in a three-word formula: "purpose, passion, perception." And on and on.

But where are we to find the craft of the "something more"? What in hell *is* the "something more"? "Ideas" hardly begins to describe it. Is it "wonder"? Is it "transcendence"? Is it "visionary drive," or "conceptual novelty," or even "cosmic fear"? Here is Silverberg, at the very end of his book:

"It was that exhilaration and excitement that drew us to science fiction in the first place, almost invariably when we were very young; it was for the sake of that exhilaration and excitement that we took up the writing of it, and it was to facilitate the expression of our visions and fantasies that we devoted ourselves with such zeal to the study of the art and craft of writing."

Very well put, but the dichotomy lurches up again. The art and craft of writing *what*, exactly? In this paragraph, the "visions and fantasies" briefly seize the driver's seat of the Kessel Toyota. But they soon dissipate into phantoms again. Because they are so ill-defined, so mercurial, so desperately lacking in basic conceptual soundness. They are our stock in trade, our raison d'etre, and we still don't know what to make of them.

Worlds of Wonder may well be the best book ever published about the craft of science fiction. Silverberg works nobly, and he deserves great credit. The unspoken pain that lies beneath the surface of his book is something with which the genre has never successfully come to terms. The argument is as fresh today as it was in the days of Science Wonder Stories.

This conflict goes very deep indeed. It is not a problem confined to the craft of writing SF. It seems to me to be a schism of the modern Western mindset, a basic lack of cultural integration between what we feel and what we know. It is an inability to speak naturally, with conviction from the heart, of the things that Western rationality has taught us. This is a profound problem, and the fact that science fiction deals with it so directly is a sign of science fiction's cultural importance.

We have no guarantee that this conflict will *ever* be resolved. It may not be resolvable. SF writers have begun careers, succeeded greatly, grown old and honored, and died in the shadow of this dissonance. We may forever have SF "stories" whose narrative structure is buboed with expository lumps. We may always have escapist pulp adventures that avoid true imagination, substituting the bogus exoticism that Blish defined as "calling a rabbit a 'smeerp.'"

We may even have beautifully written, deeply moving tales of classic human conflict--with only a reluctant dab of genre flavor. Or we may have the opposite: the legacy of Stapledon, Gernsback, and Lem, those non-stories bereft of emotional impact and human interest, the constructions Silverberg rightly calls "vignettes" and "reports."

I don't see any stories in Worlds of Wonder that resolve this dichotomy. They're swell stories, and they deliver the genre payoff in full. But many of them contradict Silverberg's most basic assertions about "storytelling." "Four in One" by Damon Knight is a political parable whose hero is a rock-ribbed Competent Man whose reactions are utterly nonhuman. "Fondly Fahrenheit" by Alfred Bester is a one-shot tour-de-force dependent on weird grammatical manipulation. "Hothouse" by Brian Aldiss is a visionary picaresque with almost no conventional structure. "The New Prime" by Jack Vance is six jampacked alien vignettes very loosely stitched together. "Day Million" showcases Frederik Pohl bluntly haranguing his readers. It's as if Silverberg picked these stories deliberately to demonstrate a deep distrust of his own advice.

But to learn to tell "good stories" is excellent advice for any kind of writer, isn't it? Well-constructed "stories" will certainly sell in science fiction. They will win awards, and bring whatever fame and wealth is locally available. Silverberg knows this is true. His own career proves it. His work possesses great technical facility. He writes stories with compelling opening hooks, with no extraneous detail, with paragraphs that mesh, with dialogue that advances the plot, with neatly balanced beginnings, middles and ends.

And yet, this ability has not been a total Royal Road to success for him. Tactfully perhaps, but rather surprisingly, Worlds of Wonder does not mention Silverberg's four-year "retirement" from SF during the '70s. For those who missed it, there was a dust-up in 1976, when Silverberg publicly complained that his work in SF was not garnering the critical acclaim that its manifest virtues deserved. These were the days of Dying Inside, The Book of Skulls, Shadrach in the Furnace--sophisticated novels with deep, intense character studies, of unimpeachable literary merit. Silverberg was not alone in his conclusion that these groundbreaking works were pearls cast before swine. Those who shared Silverberg's literary convictions could only regard the tepid response of the SF public as philistinism.

But was it really? Critics still complain about him today; take Geoff Ryman's review of The Conglomeroid Cocktail Party, a recent Silverberg collection, in Foundation 37. "He is determined to write beautifully and does ... He has most of the field beaten by an Olympic mile." And yet: "As practiced by Silverberg, SF is a minor art form, like some kinds of verse, to be admired for its surface polish and adherence to form."

This critical plaint is a symptom of hunger for the "something more." But where are we to find its mercurial secrets? Not in the storytelling alembics of Worlds of Wonder.

Why, then, is Silverberg's book so very valuable to the SF writer of ambition? There are many reasons. Silverberg's candid reminiscences cast vital light on the social history of the genre. The deep structures of our subculture, of our traditions, must be understood by anyone who wants to transcend them. To have no "ideology," no theory of SF and its larger purposes, is to be the unknowing puppet of its unwritten rules. These invisible traditions are actually only older theories, now disguised as common sense.

The same goes for traditional story values. Blatant solecisms are the Achilles heel of the wild-eyed SF visionary. If this collection teaches anything, it's that one can pull the weirdest, wackiest, off-the-wall moves in SF, and still win big. But one must do this deliberately, with a real understanding of the consequences. One must learn to recognize, and avoid, the elementary blunders of bad fiction: the said-bookisms, the point-of-view violations, the careless lapses of logic, the pointless digressions, the idiot plots, the insulting cliches of character. Worlds of Wonder is a handbook for accomplishing that. It's kindly and avuncular and accessible and fun to read.

And some readers are in special luck. You may be one of them. You may be a young Robert Silverberg, a mindblown, too-smart kid, dying to do to the innocent what past SF writers have done to you. You may be boiling over with the Holy Spirit, yet wondering how you will ever find the knack, the discipline, to put your thoughts into a form that compels attention from an audience, a form that will break you into print. If you are this person, Worlds of Wonder is a precious gift. It is your battle plan.

CATSCAN 5

"Slipstream"

In a recent remarkable interview in _New Pathways_ #11, Carter Scholz alludes with pained resignation to the ongoing brain-death of science fiction. In the 60s and 70s, Scholz opines, SF had a chance to become a worthy literature; now that chance has passed. Why? Because other writers have now learned to adapt SF's best techniques to their own ends.

"And," says Scholz, "They make us look sick. When I think of the best `speculative fiction' of the past few years, I sure don't think of any Hugo or Nebula winners. I think of Margaret Atwood's _The Handmaid's Tale_, and of Don DeLillo's _White Noise_, and of Batchelor's _The Birth of the People's Republic of Antarctica_, and of Gaddis' _JR_ and _Carpenter's Gothic_, and of Coetzee's _Life and Times of Michael K_ . . . I have no hope at all that genre science fiction can ever again have any literary significance. But that's okay, because now there are other people doing our job."

It's hard to stop quoting this interview. All interviews should be this good. There's some great campy guff about the agonizing pain it takes to write short stories; and a lecture on the unspeakable horror of writer's block; and some nifty fusillades of forthright personal abuse; and a lot of other stuff that is making _New Pathways_ one of the most interesting zines of the Eighties. Scholz even reveals his use of the Fibonacci Sequence in setting the length and number of the chapters in his novel _Palimpsests_, and wonders how come nobody caught on to this groundbreaking technique of his.
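(A quick gloss for the non-mathematical: in the Fibonacci sequence, each term is the sum of the two before it. Here is a minimal sketch, in Python, of how chapter lengths might be derived that way--the chapter count of eight and the notion of "pages per chapter" are invented for illustration, not taken from _Palimpsests_ itself:

    # Sketch: chapter lengths drawn from the Fibonacci sequence.
    # The count of 8 chapters is a made-up example, not Scholz's actual scheme.
    def fibonacci(n):
        seq, a, b = [], 1, 1
        for _ in range(n):
            seq.append(a)
            a, b = b, a + b
        return seq

    print(fibonacci(8))  # [1, 1, 2, 3, 5, 8, 13, 21] -- e.g., pages per chapter

End of gloss.)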

Maybe some of this peripheral stuff kinda dulls the lucid gleam of his argument. But you don't have to be a medieval Italian mathematician to smell the reek of decay in modern SF. Scholz is right. The job isn't being done here.

"Science Fiction" today is a lot like the contemporary Soviet Union; the sprawling possessor of a dream that failed. Science fiction's official dogma, which almost everybody ignores, is based on attitudes toward science and technology which are bankrupt and increasingly divorced from any kind of reality. "Hard-SF," the genre's ideological core, is a joke today; in terms of the social realities of high-tech post-industrialism, it's about as relevant as hard-Leninism.

Many of the best new SF writers seem openly ashamed of their backward Skiffy nationality. "Ask not what you can do for science fiction--ask how you can edge away from it and still get paid there."

A blithely stateless cosmopolitanism is the order of the day, even for an accredited Clarion grad like Pat Murphy: "I'm not going to bother what camp things fall into," she declares in a recent _Locus_ interview. "I'm going to write the book I want and see what happens . . . If the markets run together, I leave it to the critics." For Murphy, genre is a dead issue, and she serenely wills the trash-mountain to come to Mohammed.

And one has to sympathize. At one time, in its clumsy way, Science Fiction offered some kind of coherent social vision. SF may have been gaudy and naive, and possessed by half-baked fantasies of power and wish-fulfillment, but at least SF spoke a contemporary language. Science Fiction did the job of describing, in some eldritch way, what was actually *happening*, at least in the popular imagination. Maybe it wasn't for everybody, but if you were a bright, unfastidious sort, you could read SF and feel, in some satisfying and deeply unconscious way, that you'd been given a real grip on the chrome-plated handles of the Atomic Age.

But *now* look at it. Consider the repulsive ghastliness of the SF category's Lovecraftian inbreeding. People retched in the 60s when De Camp and Carter skinned the corpse of Robert E. Howard for its hide and tallow, but nowadays necrophilia is run on an industrial basis. Shared-world anthologies. Braided meganovels. Role-playing tie-ins. Sharecropping books written by pip-squeaks under the blazoned name of established authors. Sequels of sequels, trilogy sequels of yet-earlier trilogies, themselves cut-and-pasted from yet-earlier trilogies. What's the common thread here? The belittlement of individual creativity, and the triumph of anonymous product. It's like some Barthesian nightmare of the Death of the Author and his replacement by "text."

Science Fiction--much like that other former Vanguard of Progressive Mankind, the Communist Party--has lost touch with its cultural reasons for being. Instead, SF has become a self-perpetuating commercial power-structure, which happens to be in possession of a traditional national territory: a portion of bookstore rackspace.

Science fiction habitually ignores any challenge from outside. It is protected by the Iron Curtain of category marketing. It does not even have to improve "on its own terms," because its own terms no longer mean anything; they are rarely even seriously discussed. It is enough merely to point at the rackspace and say "SF."

Some people think it's great to have a genre which has no inner identity, merely a locale where it's sold. In theory, this grants vast authorial freedom, but the longterm practical effect has been heavily debilitating. When "anything is possible in SF" then "anything" seems good enough to pass muster. Why innovate? Innovate in what direction? Nothing is moving, the compass is dead. Everything is becalmed; toss a chip overboard to test the current, and it sits there till it sinks without a trace.

It's time to clarify some terms in this essay, terms which I owe to Carter Scholz. "Category" is a marketing term, denoting rackspace. "Genre" is a spectrum of work united by an inner identity, a coherent esthetic, a set of conceptual guidelines, an ideology if you will.

"Category" is commercially useful, but can be ultimately deadening. "Genre," however, is powerful.

Having made this distinction, I want to describe what seems to me to be a new, emergent "genre," which has not yet become a "category."

This genre is not "category" SF; it is not even "genre" SF. Instead, it is a contemporary kind of writing which has set its face against consensus reality. It is fantastic, surreal sometimes, speculative on occasion, but not rigorously so. It does not aim to provoke a "sense of wonder" or to systematically extrapolate in the manner of classic science fiction.

Instead, this is a kind of writing which simply makes you feel very strange; the way that living in the late twentieth century makes you feel, if you are a person of a certain sensibility. We could call this kind of fiction Novels of Postmodern Sensibility, but that looks pretty bad on a category rack, and requires an acronym besides; so for the sake of convenience and argument, we will call these books "slipstream."

"Slipstream" is not all that catchy a term, and if this young genre ever becomes an actual category I doubt it will use that name, which I just coined along with my friend Richard Dorsett. "Slipstream" is a parody of "mainstream," and nobody calls mainstream "mainstream" except for us skiffy trolls.

Nor is it at all likely that slipstream will actually become a full-fledged genre, much less a commercially successful category. The odds against it are stiff. Slipstream authors must work outside the cozy infrastructure of genre magazines, specialized genre criticism, and the authorial esprit-de-corps of a common genre cause.

And vast dim marketing forces militate against the commercial success of slipstream. It is very difficult for these books to reach or build their own native audience, because they are needles in a vast moldering haystack. There is no convenient way for would-be slipstream readers to move naturally from one such work to another of its ilk. These books vanish like drops of ink in a bucket of drool.

Occasional writers will triumph against all these odds, but their success remains limited by the present category structures. They may eke out a fringe following, but they fall between two stools. Their work is too weird for Joe and Jane Normal. And they lose the SF readers, who avoid the mainstream racks because the stuff there ain't half weird enough. (One result of this is that many slipstream books are left-handed works by authors safely established in other genres.)

And it may well be argued that slipstream has no "real" genre identity at all. Slipstream might seem to be an artificial construct, a mere grab-bag of mainstream books that happen to hold some interest for SF readers. I happen to believe that slipstream books have at least as much genre identity as the variegated stock that passes for "science fiction" these days, but I admit the force of the argument. As an SF critic, I may well be blindered by my parochial point-of-view. But I'm far from alone in this situation. Once the notion of slipstream is vaguely explained, almost all SF readers can recite a quick list of books that belong there by right.

These are books which SF readers recommend to friends: "This isn't SF, but it sure ain't mainstream and I think you might like it, okay?" It's every man his own marketer, when it comes to slipstream.

In preparation for this essay, I began collecting these private lists. My master-list soon grew impressively large, and serves as the best pragmatic evidence for the actual existence of slipstream that I can offer at the moment.

I myself don't pretend to be an expert in this kind of writing. I can try to define the zeitgeist of slipstream in greater detail, but my efforts must be halting.

It seems to me that the heart of slipstream is an attitude of peculiar aggression against "reality." These are fantasies of a kind, but not fantasies which are "futuristic" or "beyond the fields we know." These books tend to sarcastically tear at the structure of "everyday life."

Some such books, the most "mainstream" ones, are non-realistic literary fictions which avoid or ignore SF genre conventions. But hard-core slipstream has unique darker elements. Quite commonly these works don't make a lot of common sense, and what's more they often somehow imply that *nothing we know makes* "a lot of sense" and perhaps even that *nothing ever could*.

It's very common for slipstream books to screw around with the representational conventions of fiction, pulling annoying little stunts that suggest that the picture is leaking from the frame and may get all over the reader's feet. A few such techniques are infinite regress, trompe-l'oeil effects, metalepsis, sharp violations of viewpoint limits, bizarrely blase' reactions to horrifically unnatural events ... all the way out to concrete poetry and the deliberate use of gibberish. Think M. C. Escher, and you have a graphic equivalent.

Slipstream is also marked by a cavalier attitude toward "material" which is the polar opposite of the hard-SF writer's "respect for scientific fact."

Frequently, historical figures are used in slipstream fiction in ways which outrageously violate the historical record. History, journalism, official statements, advertising copy ... all of these are grist for the slipstream mill, and are disrespectfully treated not as "real-life facts" but as "stuff," raw material for collage work. Slipstream tends, not to "create" new worlds, but to *quote* them, chop them up out of context, and turn them against themselves.

Some slipstream books are quite conventional in narrative structure, but nevertheless use their fantastic elements in a way that suggests that they are somehow *integral* to the author's worldview; not neat-o ideas to kick around for fun's sake, but something in the nature of an inherent dementia. These are fantastic elements which are not clearcut "departures from known reality" but ontologically *part of the whole mess*; "`real' compared to what?" This is an increasingly difficult question to answer in the videocratic 80s-90s, and is perhaps the most genuinely innovative aspect of slipstream (scary as that might seem).

A "slipstream critic," should such a person ever come to exist, would probably disagree with these statements of mine, or consider them peripheral to what his genre "really" does. I heartily encourage would-be slipstream critics to involve themselves in heady feuding about the "real nature" of their as-yet-nonexistent genre. Bogus self-referentiality is a very slipstreamish pursuit; much like this paragraph itself, actually. See what I mean?

My list is fragmentary. What's worse, many of the books that are present probably don't "belong" there. (I also encourage slipstream critics to weed these books out and give convincing reasons for it.)

Furthermore, many of these books are simply unavailable, without hard work, lucky accidents, massive libraries, or friendly bookstore clerks in a major postindustrial city. In many unhappy cases, I doubt that the authors themselves think that anyone is interested in their work. Many slipstream books fell through the yawning cracks between categories, and were remaindered with frantic haste.

And I don't claim that all these books are "good," or that you will enjoy reading them. Many slipstream books are in fact dreadful, though they are dreadful in a different way than dreadful science fiction is. This list happens to be prejudiced toward work of quality, because these are books which have stuck in people's memory against all odds, and become little tokens of possibility.

I offer this list as a public service to slipstream's authors and readers. I don't count myself in these ranks. I enjoy some slipstream, but much of it is simply not to my taste. This doesn't mean that it is "bad," merely that it is different. In my opinion, this work is definitely not SF, and is essentially alien to what I consider SF's intrinsic virtues.

Slipstream does however have its own virtues, virtues which may be uniquely suited to the perverse, convoluted, and skeptical tenor of the postmodern era. Or then again, maybe not. But to judge this genre by the standards of SF is unfair; I would like to see it free to evolve its own standards.

Unlike the "speculative fiction" of the 60s, slipstream is not an internal attempt to reform SF in the direction of "literature." Many slipstream authors, especially the most prominent ones, know or care little or nothing about SF. Some few are "SF authors" by default, and must struggle to survive in a genre which militates against the peculiar virtues of their own writing.

I wish slipstream well. I wish it were an acknowledged genre and a workable category, because then it could offer some helpful, brisk competition to SF, and force "Science Fiction" to redefine and revitalize its own principles.

But any true discussion of slipstream's genre principles is moot, until it becomes a category as well. For slipstream to develop and flourish, it must become openly and easily available to its own committed readership, in the same way that SF is today. This problem I willingly leave to some inventive bookseller, who is openminded enough to restructure the rackspace and give these oppressed books a breath of freedom.

THE SLIPSTREAM LIST

ACKER, KATHY - Empire of the Senseless

ACKROYD, PETER - Hawksmoor; Chatterton

ALDISS, BRIAN - Life in the West

ALLENDE, ISABEL - Of Love and Shadows; House of Spirits

AMIS, KINGSLEY - The Alteration; The Green Man

AMIS, MARTIN - Other People; Einstein's Monsters

APPLE, MAX - Zip; The Oranging of America

ATWOOD, MARGARET - The Handmaid's Tale

AUSTER, PAUL - City of Glass; In the Country of Last Things

BALLARD, J. G. - Day of Creation; Empire of the Sun

BANKS, IAIN - The Wasp Factory; The Bridge

BANVILLE, JOHN - Kepler; Dr. Copernicus

BARNES, JULIAN - Staring at the Sun

BARTH, JOHN - Giles Goat-Boy; Chimera

BARTHELME, DONALD - The Dead Father

BATCHELOR, JOHN CALVIN - Birth of the People's Republic of Antarctica

BELL, MADISON SMARTT - Waiting for the End of the World

BERGER, THOMAS - Arthur Rex

BONTLY, THOMAS - Celestial Chess

BOYLE, T. CORAGHESSAN - World's End; Water Music

BRANDAO, IGNACIO - And Still the Earth

BURROUGHS, WILLIAM - Place of Dead Roads; Naked Lunch; Soft Machine; etc.

CARROLL, JONATHAN - Bones of the Moon; Land of Laughs

CARTER, ANGELA - Nights at the Circus; Heroes and Villains

CAREY, PETER - Illywhacker; Oscar and Lucinda

CHESBRO, GEORGE M. - An Affair of Sorcerers

COETZEE, J. M. - Life and Times of Michael K.

COOVER, ROBERT - The Public Burning; Pricksongs & Descants

CRACE, JIM - Continent

CROWLEY, JOHN - Little, Big; Aegypt

DAVENPORT, GUY - Da Vinci's Bicycle; The Jules Verne Steam Balloon

DISCH, THOMAS M. - On Wings of Song

DODGE, JIM - Not Fade Away

DURRELL, LAWRENCE - Tunc; Nunquam

ELY, DAVID - Seconds

ERICKSON, STEVE - Days Between Stations; Rubicon Beach

FEDERMAN, RAYMOND - The Twofold Vibration

FOWLES, JOHN - A Maggot

FRANZEN, JONATHAN - The Twenty-Seventh City

FRISCH, MAX - Homo Faber; Man in the Holocene

FUENTES, CARLOS - Terra Nostra

GADDIS, WILLIAM - JR; Carpenter's Gothic

GARDNER, JOHN - Grendel; Freddy's Book

GEARY, PATRICIA - Strange Toys; Living in Ether

GOLDMAN, WILLIAM - The Princess Bride; The Color of Light

GRASS, GUNTER - The Tin Drum

GRAY, ALASDAIR - Lanark

GRIMWOOD, KEN - Replay

HARBINSON, W. A. - Genesis; Revelation; Otherworld

HILL, CAROL - The Eleven Million Mile High Dancer

HJORTSBERG, WILLIAM - Gray Matters; Falling Angel

HOBAN, RUSSELL - Riddley Walker

HOYT, RICHARD - The Manna Enzyme

IRWIN, ROBERT - The Arabian Nightmare

ISKANDER, FAZIL - Sandro of Chegem; The Gospel According to Chegem

JOHNSON, DENIS - Fiskadoro

JONES, ROBERT F. - Blood Sport; The Diamond Bogo

KINSELLA, W. P. - Shoeless Joe

KOSTER, R. M. - The Dissertation; Mandragon

KOTZWINKLE, WILLIAM - Elephant Bangs Train; Doctor Rat; Fata Morgana

KRAMER, KATHRYN - A Handbook for Visitors From Outer Space

LANGE, OLIVER - Vandenberg

LEONARD, ELMORE - Touch

LESSING, DORIS - The Four-Gated City; The Fifth Child

LEVEN, JEREMY - Satan

MAILER, NORMAN - Ancient Evenings

DeMARINIS, RICK - A Lovely Monster

MARQUEZ, GABRIEL GARCIA - Autumn of the Patriarch; One Hundred Years of Solitude

MATHEWS, HARRY - The Sinking of the Odradek Stadium

McEWAN, IAN - The Comfort of Strangers; The Child in Time

McMAHON, THOMAS - Loving Little Egypt

MILLAR, MARTIN - Milk, Sulphate and Alby Starvation

MOONEY, TED - Easy Travel to Other Planets

MOORCOCK, MICHAEL - Laughter of Carthage; Byzantium Endures; Mother London

MOORE, BRIAN - Cold Heaven

MORRELL, DAVID - The Totem

MORRISON, TONI - Beloved; Song of Solomon

NUNN, KEM - Tapping the Source; Unassigned Territory

PERCY, WALKER - Love in the Ruins; The Thanatos Syndrome

PIERCY, MARGE - Woman on the Edge of Time

PORTIS, CHARLES - Masters of Atlantis

PRIEST, CHRISTOPHER - The Glamour; The Affirmation

PROSE, FRANCINE - Bigfoot Dreams; Marie Laveau

PYNCHON, THOMAS - Gravity's Rainbow; V; The Crying of Lot 49

REED, ISHMAEL - Mumbo Jumbo; The Terrible Twos

RICE, ANNE - The Vampire Lestat; Queen of the Damned

ROBBINS, TOM - Jitterbug Perfume; Another Roadside Attraction

ROTH, PHILIP - The Counterlife

RUSHDIE, SALMAN - Midnight's Children; Grimus; The Satanic Verses

SAINT, H. F. - Memoirs of an Invisible Man

SCHOLZ, CARTER & GLENN HARCOURT - Palimpsests

SHEPARD, LUCIUS - Life During Wartime

SIDDONS, ANNE RIVERS - The House Next Door

SPARK, MURIEL - The Hothouse by the East River

SPENCER, SCOTT - Last Night at the Brain Thieves Ball

SUKENICK, RONALD - Up; Down; Out

SUSKIND, PATRICK - Perfume

THEROUX, PAUL - O-Zone

THOMAS, D. M. - The White Hotel

THOMPSON, JOYCE - The Blue Chair; Conscience Place

THOMSON, RUPERT - Dreams of Leaving

THORNBURG, NEWTON - Valhalla

THORNTON, LAWRENCE - Imagining Argentina

UPDIKE, JOHN - Witches of Eastwick; Roger's Version

VLIET, R. G. - Scorpio Rising

VOLLMANN, WILLIAM T. - You Bright and Risen Angels

VONNEGUT, KURT - Galapagos; Slaughterhouse-Five

WALLACE, DAVID FOSTER - The Broom of the System

WEBB, DON - Uncle Ovid's Exercise Book

WHITTEMORE, EDWARD - Nile Shadows; Jerusalem Poker; Sinai Tapestry

WILLARD, NANCY - Things Invisible to See

WOMACK, JACK - Ambient; Terraplane

WOOD, BARI - The Killing Gift

WRIGHT, STEPHEN - M31: A Family Romance

From SCIENCE FICTION EYE #6

CATSCAN 6

"Shinkansen"

Let me tell you what the 21st Century feels like.

Imagine yourself at an international conference of industrial designers in Nagoya, Japan. You're not an industrial designer yourself, and you're not quite sure what you're doing there, but presumably some wealthy civic-minded group of Nagoyans thought you might have entertainment value, so they flew you in. You're in a cavernous laser-lit auditorium with 3,000 assorted Japanese, Finns, Germans, Americans, Yugoslavs, Italians, et al., all wearing identical ID badges, except for a trenchant minority, who have scribbled "Allons Nagoya" on their badges so that everybody will know they're French.

There's a curved foam plug stuck in your ear with a thin gray cord leading to a black plastic gadget the size of a deck of cards. This is an "ICR-6000 Conference Receiver." It's a five-channel short-range radio, with a blurry typed serial number stuck to it with a strip of Scotch Tape. You got the receiver from a table manned by polite young hostesses, who were passing out vast heaps of these items, like party favors. Of the five channels offered, Number 1 is Japanese and Number 2 is, purportedly, English. You get the strong impression that the French would have preferred Number 3 to be French, but the Conference offers only two "official languages" and channels 3, 4 and 5 have static.

Muted festivities begin, in the best of taste. First a brief Kabuki skit is offered, by two expatriate Canadians, dressed in traditional robes. Ardent students of the Kabuki form, the two Canadians execute ritual moves of exacting precision, accompanied by bizarre and highly stylized verbal bellowing. They are, however, speaking not Japanese but English. After some confusion you realize that this piece, "The Inherited Cramp," is meant to be a comic performance. Weak culture-shocked chuckles arise here and there from the more adventurous members of the audience. Toward the end you feel that you might get used to this kind of thing if you saw enough of it.

The performance ends to the warm applause of general relief. Assorted bigwigs take the stage: a master of ceremonies, the keynote speaker, the Mayor of Nagoya, the Speaker of the City Council, the Governor of the Prefecture. And then, accompanied by a silverhaired retainer of impressive stolid dignity, comes the Crown Prince of Japan.

Opening ceremonies of this kind are among the many obligations of this patient and graceful young aristocrat. The Crown Prince wears a truly immaculate suit which, at an impolite guess, probably costs as much as a small car. As a political entity, this symbolic personage is surrounded by twin bureaucracies of publicity and security. The security is not immediately evident. Only later will you discover that the entire building has been carefully sealed by unobtrusive teams of police. On another day, you will witness the passage of the Prince's motorcade, his spotless armored black limousine sporting the national flag, accompanied by three other limos of courtier-bodyguards, two large squads of motorcycle policemen, half-a-dozen police black-and-whites, and a chuttering surveillance helicopter. As you stand gawking on the sidewalk you will be questioned briefly, in a friendly fashion, by a plainclothes policeman who eyes the suspicious bag you carry with a professional interest.

At the moment, however, you are listening to the speeches of the Nagoya politicians. The Prince, his posture impeccable, is also listening, or at least pretending to, with a perfect replica of attention. You listen to the hesitant English on Channel Two with growing amazement. Never have you heard political speeches of such utter and consummate vacuity. They consist entirely of benevolent cliche'. Not a ripple of partisan fervor, not a hint of ideological intent, colors the translated oratory. Even the most vapid American, or even Russian, politician cannot resist a dig at a rival, or an in-crowd reference to some partisan bit of political-correctness--but this is a ritual of a different order. It dawns on you that nothing will be said. These political worthies, sponsors and financiers of the event, are there to color the air with harmless verbal perfume. "You're here, we're here"--everything that actually needs to be said has already been communicated nonverbally.

The Prince rises to deliver a brief invocation of even more elevated and poetic meaninglessness. As he steps to the podium, a torrent of flashbulbs drenches the stage in stinging electrical white. The Prince, surely blinded, studies a line of his text. He lifts his chin, recites it, and is blinded again by the flashes. He looks back to the speech, recites a paragraph in a firm voice with his head lowered, then looks up again, stoically. Again that staccato blast of glare. It dawns on you that this is the daily nature of this young gentleman's existence. He dwells within a triple bell-jar of hypermediated publicity, aristocratic decorum, and paramilitary paranoia. You reflect with a mingled respect and pity on the numerous rare personages around the planet who share his unenviable predicament. Later you will be offered a chance to meet the Prince in a formal reception line, and will go out of your way to spare him the minor burden of your presence. It seems the least you can do.

Back in your hotel room, the vapid and low-key Japanese TV is interrupted by news of a severe California earthquake. By morning swarms of well-equipped Japanese media journalists will be doing stand-ups before cracked bridges in San Furansisko and Okran. Distressed Californian natives are interviewed with an unmistakable human warmth and sympathy. Japanese banks offer relief money. Medical supplies are flown in. No particular big deal is made of these acts of charitable solidarity. It's an earthquake; it's what one does.

You leave Nagoya and take the Shinkansen bullet-train back to Tokyo. It's a very nice train, the Shinkansen, but it's not from Mars or anything. There's been a lot of press about the Shinkansen, but it looks harmless enough, rather quaint actually, somewhat Art Deco with lots of brushed aircraft aluminum and stereo ads featuring American popstars. It's very clean, but like all trains it gets too cold inside and then it gets too hot. You've heard that bullet-trains can do 200 miles an hour but there's no way the thing tops 130 or so, while you're aboard it. You drink a ten percent carbonated peach soda and listen to your Walkman. The people inside this purported technical marvel demonstrate the absolute indifference of long habit.

A friend meets you in Tokyo. You board a commuter subway at rush-hour. It is like an extremely crowded rolling elevator. Everyone hangs limply from straps with inert expressions suggesting deep meditation or light hypnosis. Impetus rolls through the tightly-packed bodies like currents through a thick stand of kelp. It occurs to you that this is the first time you have been in Japan without attracting vaguely curious glances as a foreigner. Nobody is looking at anybody. Were any physical threat or commotion offered on this subway, the situation would swiftly be nightmarish. But since nobody stirs, the experience is actually oddly soothing.

You have a dinner appointment with a Japanese rock band. You meet in a restaurant in a section of Tokyo somewhat akin to, say, Greenwich Village in 1955. Its narrow, crooked streets are full of students, courting couples, coffee-shops. There's a bit of graffiti here and there--not the lashing, crazed graffiti of American urban areas, but enough to convey a certain heightened sense of dissidence.

You and your friend meet the two rock stars, their A&R man, and their manager. The manager drifts off when he realizes that there is no threat of any actual business transpiring. You're just a fan. With some translation help from your friend you eagerly question the musicians. You long to know what's cooking in the Tokyo pop-music scene. It transpires that these particular rockers listen mostly to electronic European dance music. Their biggest Japanese hit was a song about Paris sung in English.

One of the rockers asks you if you have ever tried electronic brain stimulation. No, you say--have you? Yes, but it wasn't much good, really. You recall that, except for occasional problems with junior yakuza bikers high on cheap Korean speed, Japan hasn't much of a "drug-problem." Everyone sighs wistfully and lights more cigarettes.

The restaurant you're in offers an indeterminate nonethnic globalized cuisine whose remote ancestry may have been French. The table is laid like, say, London in 1880, with butterballs in crystal glass dishes, filigreed forks as heavy as lead, fish-knives, and arcanely folded cloth napkins. You ask the musicians if this restaurant is one of their favorite dives. Actually, no. It's 'way too expensive. Eating in posh restaurants is one of those things that one just doesn't do much of in Japan, like buying gift melons or getting one's suit pressed. A simple ham and egg breakfast can cost thirty bucks easy--thirty-five with orange juice. Sane people eat noodles for breakfast for about a buck and a half.

Wanting to press this queer situation to the limit, you order the squid. It arrives and it's pretty good. In fact, the squid is great. Munching a tentacle in wine-sauce you suddenly realize that you are having a *really good time*. Having dinner with a Japanese rock band in Tokyo is, by any objective standard, just about the coolest thing you've ever done!

The 21st Century is here all around you, it's happening, and it's craziness, but it's not bad craziness, it's an *adventure*. It's a total gas. You are seized by a fierce sense of existential delight.

Everybody grins. And the A&R man picks up the tab.

Shinkansen Part Two:

The Increasingly Unstrange Case of

Lafcadio Hearn and Rick Kennedy

I was in Japan twice in 1989--two weeks in all. Big deal. This jaunting hardly makes me an "Old Japan Hand."

But I really wanted to mimic one in this installment of CATSCAN. So I strongly considered beginning with the traditional Westerner's declaration that I Understand Nothing About Japan or the Japanese: boy are they ever mystical, spiritual and inscrutable; why I've been a-livin' here nigh twenty year with my Japanese wife, Japanese job, Japanese kids and I'm just now a-scratchin' the surface of the baffling Yamato kokutai ...

These ritual declarations by career Nipponologists date 'way back to the archetypal Old Japan Hand, Lafcadio Hearn (aka Yakumo Koizumi), 1850-1904. Not coincidentally, this kind of rhetoric is very useful in making *yourself* seem impressively mystic, spiritual and inscrutable. A facade of inscrutable mysticism is especially handy if you're anxious to hide certain truths about yourself. Lafcadio Hearn, for instance--I love this guy Hearn, I've been his devotee for years, and could go on about him all day--Hearn was your basic congenital SF saint-perv, but in a nineteenth century environment. Hearn was, in brief, a rootless oddball with severe personality problems and a pronounced gloating taste for the horrific and bizarre. Born of a misalliance between a British officer and a young Greek girl, Hearn passed a classically miserable childhood, until fleeing to America at nineteen. As a free-lance journalist and part-time translator, penniless, shabby, declasse' and half-blind, Hearn knocked around all over for years--Cincinnati, New Orleans, the Caribbean--until ending up in Japan in 1890.

There Hearn made the gratifying discovery that the Japanese could not tell that he was a weirdo. At home Hearn was alien; in Japan, he was merely foreign. The Meiji-era Japanese respectfully regarded the junketing Hearn as an influential man of letters, an intellectual, a poet and philosopher, and they gave him a University position teaching literature to the rising new generation. Hearn (a man of very genuine talent, treated decently for perhaps the first time in his life) responded by becoming one of Japan's first and foremost Western popularizers, emitting reams about Shintoism and ghosts and soul-transference and the ineffableness of everythinghood.

Hearn had always been pretty big on ineffableness, but Japan seemed to fertilize the guy's eccentricities, and he became one of the truly great fantasy writers of all time. If you don't know Hearn's work, you owe it to yourself to discover it: Kokoro, Gleanings in Buddha-Fields, Shadowings, Kwaidan, Kotto, all marvelous books (thoughtfully kept in print by Tuttle Books, that paragon of crosscultural publishers). Hearn's dark fantasies rival Dunsany and Lovecraft in their intense, brooding idiosyncrasy; and as a bonus, his journalistic work contains long sustained passages of close observation and penetrating insight, as well as charming period flavor.

What did the Japanese make of all this? Well, after many years, the authorities finally caught on and fired Hearn--and they had one of the first Tokyo University riots on their hands. Hearn was impossible to deal with, he was a paranoiac with a mean streak a mile wide, but his students genuinely loved the guy. Hearn really spoke to that generation--the generation of Japanese youth who found themselves in universities, with their minds permanently and painfully expanded with queer foreign ideas. Here was one sensei who truly knew their paradoxical sorrows, and shared them. Hearn's appeal to the new Japan was powerful, for he was simultaneously ultramodern and sentimentally antiquarian--an exotic patriot--a Western Orientalist--a scientific mystic.

Lafcadio Hearn loved Japan. He married a Japanese woman, had Japanese children, took a Japanese name, and was one of the bare handful of foreigners ever granted Japanese citizenship. And yet he was always a loner, a congenital outsider, viewing everyone around him through ever-thickening lenses of his peculiar personal philosophy. Paradoxically, I believe that Lafcadio Hearn chose to stay in Japan because Japan was the place that allowed him to become most himself. He reached some very personal apotheosis there.

But now let's compare the nineteenth-century Hearn to a contemporary "Old Japan Hand," Rick Kennedy, author of Home, Sweet Tokyo (published, rather tellingly, by Kodansha Books of Tokyo and New York). Rick Kennedy, an employee of the globe-spanning Sony Corporation, writes a weekly column for the English-language "Japan Times." Home, Sweet Tokyo is a collection of Kennedy's columns. The apt subtitle is "Life in a Weird and Wonderful City."

Compared to Hearn, Kennedy has very little in the way of philosophical spine. This is a magpie collection. Kennedy has an eye for the peculiar that rivals Hearn's, but no taste at all for the dark and horrific. Home, Sweet Tokyo is in fact "sweet" and rather cute, with all the boisterous charm of the upwardly mobile bourgeoisie. There are satires, parodies, in-jokes, vignettes of daily life in the great metropolis.

And there are interviews, profiles, of the people of Tokyo. Folks of all sorts: professional pachinko-players, the white-gloved guys who scrub the subway trains, the dignified chefs of top Tokyo restaurants, office-girls gamely searching for a rung on a very male corporate ladder.

Hearn did a similar sort of exploratory prying in Japan's nooks and byways, but the flavor of his reportage is entirely different. Hearn's Japanese subjects tend to be elfin, evasive personages, alluding to grave personal tragedies with a flicker of an eyelid and a few stoic verses. Hearn's subjects are not fully individuated men and women, but incarnated principles, abstractions, a source for social insights that can degenerate at a careless touch into racist or jingoistic cliche'.

Kennedy, in stark contrast, treats people as people, hail fellows well met. As a consequence, his Japan comes across rather like a very crowded but well-heeled Kiwanis Club. He lacks a morbid interest in life's extremities; but at least he never lashes his subjects to the Procrustean bed of stereotype. He looks clear-eyed at postmodern Japan in all its individual variety: eldritch rural grannies and megalopolitan two-year-olds, uptight accountants and purple-haired metal kids, Shinto antiquarians and red-hot techno-visionaries, rarefied literati and dumb-ass TV stars.

This is a Japan which can no longer be tidily filed away under "I" for "Inscrutable" by a WestCiv Establishment with the self-appointed task of ordering the world. Japan today is an intensely globalized society with sky-high literacy, very low crime, excellent life-expectancy, tremendous fashion-sense, and a staggering amount of the electronic substance we used to call cash. After centuries of horrific vicissitudes and heartbreaking personal sacrifice, the Japanese are fat, rich, turbo-charged, and ready to party down. They are jazzing into the 21st-Century global limelight in their velcro'd sneakers, their jeans stuffed with spare film-packs and gold-plated VISA cards. Rick Kennedy's book makes it absolutely clear why the Japanese *fully deserve* to do this, and why all those Japan-bashing sourpuss spoilsports ought to lighten up and give 'em room to shine.

Like Hearn, Kennedy has a Japanese wife, Japanese children, an intense commitment to his adopted home. What has happened in the meantime (i.e., during the 20th century) is a slow process of "un-strange-ing," of deromanticism, de-exoticism, a change from watery dream-colors to the sharp gleam of flashbulbs and neon. It is a process that science fiction people, as romantics, are likely to regard with deep ambiguity. We are much cozier with the Hearns of the world than the brisk and workaday Kennedys.

And yet I must return to Hearn's Paradox: that his attempt to "woo the Muse of the Odd," as he put it, was not a true marriage, but a search for self-realization. Kennedy, unlike Hearn, can embrace Otherness without seeking moral lessons and mystic archetypes. Kennedy, unlike Hearn, can imagine himself Japanese. He goes farther yet, for Kennedy knows that if he *were* Japanese, he would not live in Tokyo. A Japanese Rick Kennedy, he says, would head at once for Los Angeles, that weird and wonderful city, with its exotic Yankee luxuries of crowd-free tennis courts and private swimming pools.

And this, it seems to me, is a very worthy insight. This is a true, postmodern, global cosmopolitanism, rather than Hearn's romantic quest for Asian grails and unicorns. Cosmopolitanism offers little in the way of spine-chilling visionary transcendence. Instead, the glamour of Otherness is internalized, made part of the fabric of daily life. To the global cosmopolite--an eternal expatriate, no matter what his place of birth--there are no certainties, no mystic revelations; there are only fluctuating standards of comparison. The sense-of-wonder is not confined to some distant realm of Zen or Faerie, safely idealized and outside oneself; instead, *normality itself* seems more or less disjointed and disquieting, itchy with a numinous glow of the surreal, "weird and wonderful," as Kennedy says--with the advantage/drawback that this feeling *never goes away*.

I would urge on every science fiction person the rich experience of reading Lafcadio Hearn. I share his fascination with the culture of historical Japan, the world before the black ships; like Hearn I can mourn its loss. But it's dead, even if its relics are tended in museums with a nervous care. SF people need to dote a little less on the long-ago and far-away, and pay more robust attention to the living: to the elaborate weirdness at work in our own time. Writers of serious science fiction need to plunge out there into the bustle and do some basic legwork and come up with some futures people can believe in. We need to address a new audience: not just the usual SF faithful, but the real no-kidding folks out there, the global populace, who can see an old world order disintegrating every time they turn on the TV, but have no idea what to make of it, what to think about it, what to do. We need to go beyond using exotic foreigners as templates for our own fantasies; we need to find the common ground of common global issues. At the very first and least, we need to demand more translation-work within our own genre. We need to leap the Berlin Walls of national marketing and publishing. We need to get in touch.

The walls are going down all over the world, and soon we'll all be in each other's laps. Japan's just one country, it's not the be-all and end-all. But Japan is very crowded, with strictly limited resources; because of that, Japan today is a dry run under 21st-century conditions. It's not the only such model; Lebanon and El Salvador are small and crowded too. These places model possible futures; they are choices we can make. It's the choice between a sake bash in the Tokyo Disneyland and a hostage-seizure in a bombed-out embassy. We must learn from these successes and mistakes; learn about other people, learn from other people, learn to *be* other people.

We can do it. It's not all that hard. It's fun, even. Everybody can help. It doesn't take transcendent effort or coaching by cultural pundits. Do one six-billionth of the work of global understanding, and you have every right to feel proud of yourself.

The subworld of SF has the advantage of (limited) international appeal, and can do good work here. If we don't do something, some earnest attempt to understand and explicate and shape the future--the *real* future, everybody's future, starting *now*--then in all honesty we should abandon "Science Fiction" as a genre. We shouldn't keep the rags and tatters of the thing, while abandoning its birthright and its best native claim to intellectual legitimacy. There are many worthy ways to write fiction, and escapist genres aplenty for people who want to write amusing nonsense; but this genre ought to stand for something.

SF can rise to this challenge. It ain't so tough. SF has risen from the humblest of origins to beat worse odds in the past. We may be crazy but we ain't stupid. It's a little-known fact (in which I take intense satisfaction) that there are as many subscribers to *SF Eye* in Japan as there are in the US and Canada. It's a step. I hope to see us take many more. Let's blunder on out there, let's take big risks and make real mistakes, let's utter prophecies and make public fools of ourselves; we're science fiction writers, that's our goddamn job. At least we can plead the limpid purity of our intentions. Yoroshiku onegai itashimasu.

CATSCAN 7

"My Rihla"

Abu 'Abdallah ibn Battuta, gentleman and scholar, late of Tangier, Morocco, has been dead for six hundred and thirty years. To be remembered under such circumstances is a feat to compel respect.

Ibn Battuta is known today because he happened to write a book--or rather, he dictated one, in his retirement, to a Granadian scribe--called _A Gift to the Observers, Concerning the Curiosities of Cities and the Marvels Encountered in Travels_. It's more often known as "The Rihla of Ibn Battuta," rihla being an Arabic literary term denoting a pious work concerned with holy pilgrimage and foreign travel.

Sometimes known as "the Marco Polo of Islam," Ibn Battuta claimed to have traveled some seventy thousand miles during the years 1325-1354, visiting China, Arabia, India, Ghana, Constantinople, the Maldive Islands, Indonesia, Anatolia, Persia, Iraq, Sicily, Zanzibar ... on foot, mind you, or in camel caravans, or in flimsy medieval Arab dhows, sailing the monsoon trade winds.

Ibn Battuta travelled for the sake of knowledge and spiritual advancement, to meet holy men, and to listen to the wisdom of kings, emirs, and atabegs. On occasion, he worked as a judge or a courtier, but mostly he dealt in information--the gossip of the road, tales of his travels, second-hand homilies garnered from famous Sufi mystics. He covered a great deal of territory, but mere exploration was not the source of his pride.

Mere distance mattered little to Ibn Battuta--in any case, he had a rather foggy notion of geography. But his Moslem universe was cosmopolitan to an extent unrivalled till the modern era. Every pious Moslem, from China to Chad, was expected to make the holy pilgrimage to Mecca--and they did so, in vast hordes. It was a world on the move. In his twenty-year peregrinations, Ibn Battuta met the same people again and again. An Arab merchant, for instance, selling silk in Qanjanfu, China, whose brother sold tangerines in Fez (or fezzes in Tangier, presumably, when he got the chance). "How far apart they are," Ibn Battuta commented mildly. It was not remarkable.

Travel was hazardous, and, of course, very slow. But the trade routes were open, the caravanserais--giant government-supported hotels, sometimes capable of housing thousands--were doing a brisk trade from Cairo to Delhi to Samarkand. The locals were generally friendly, and respectful of learned men--sometimes so delighted to see foreigners that they fell upon them with sobs of delight and fought for the prestige of entertaining them.

Professor Ross Dunn's narrative of _The Adventures of Ibn Battuta_ made excellent, and perhaps weirdly apt, reading last April, as I was traveling some thirty thousand feet above the North Atlantic in the boozy tin-can comfort of a KLM 747.

"God made the world, but the Dutch made Holland." This gross impiety would have shocked the sufi turban off the valorous Ibn Battuta, but we live

today, to paraphrase Greg Bear, in a world of things so monstrous that they have gone past sin and become necessity. Large and prosperous sections of the Netherlands exist well below sea level. God forbid the rest of us should have to learn to copy this trick, but when I read the greenhouse-warming statistics I get a shuddery precognitive notion of myself as an elderly civil-defense draftee, heaving sandbags at the angry rising foam ...

That's not a problem for the Dutch at the moment. They do, however, currently find themselves confronting another rising tide. "The manure surplus." The Dutch are setting up a large government agro-bureaucracy to monitor, transport, and recycle, er, well, cowshit. They're very big on cheese, the Dutch, but every time you slice yourself a tasty yellow wedge of Gouda, there is somewhere, by definition, a steaming heap of manure. A completely natural substance, manure: nitrogen, carbon, and phosphorus, the very stuff of life--unless *there's too much of it in one place at the same time*, when it becomes a poisonous stinking burden. What goes around, comes around--an ecological truism as painful as constipation. We can speculate today about our own six hundred year legacy: not the airy palaces of the Moorish Alhambra, I'm afraid, or the graceful spires of the Taj Mahal, but billions of plastic-wrapped disposable diapers, mashed into shallow graves ...

So I'm practicing my Arab calligraphy in my scholarly cell at the Austin madrassa, when a phone call comes from The Hague. Over the stellar hiss of satellite transmission, somebody wants me and my collaborator to talk about cyberspace, artificial reality, and fractals. Fair enough. A month later I'm sipping Coke and puffing Dunhills in tourist class, with a bag full of computer videotapes crammed in the overhead bin, outdistancing Ibn Battuta with no effort more strenuous than switching batteries in a Walkman.

Aboard the plane, I strike up a discussion with a young Italian woman--half-Italian, maybe, as her father is an Iranian emigre'. She calls herself a "Green," though her politics seem rather strange--she sympathizes openly with the persecuted and misunderstood white Afrikaners, for instance, and she insists that the Ayatollah Khomeini was an agent of British Intelligence. I have a hard time following these arguments, but when it comes to the relations of the US and Europe, her sentiments are clear enough. "After '92, we're going to kick your ass!" she tells me.

Unheard of. Europeans used to marvel humbly over our astonishing American highway system and the fact that our phones work (or used to). That particular load of manure is now history. The Europeans are happening now, and they know it. 1989 was a pivotal year for them, maybe the most momentous popular upheaval since 1789.

This century has not been a good one for Europe. Since 1914, the European body-politic has been wheezing along on one lung, a mass of fresh scar tissue when it wasn't hemorrhaging blood and bile. But this century, "The American Century," as we used to call it in 1920 when there was a lot of it still before us, is almost gone now. A lot can happen in a century. Dynasties rise and fall. Philosophies flourish and crumble. Cities rise, thrive, and are sacked by Mongols and turned to dust and ghosts.

But in Europe today, the caravanserais are open. National borders in Europe, which provoked the brutal slaughter of entire generations in '14 and '44, have faded to mere tissues, vaporous films, riddled through-and-through with sturdy money-lined conduits of trade, tourism, telecommunications. Soon the twelve nations of the European Community will have one passport, perhaps one currency. They look to the future today with an optimism they have not had since "the lamps went out all over Europe" in World War One.

(Except perhaps for one country, which still remains mired in the Cold War and a stubborn official provincialism: Britain. The Dutch feel sorry for Britain: declining, dirty, brutalized, violent and full of homeless--far too much, in short, like their too-close friends, the Americans.)

My Italian acquaintance introduces me to her mother, who is a passionate devotee of Shirley MacLaine. Mom wears an Iranian gold bracelet the size of rappers' jewelry, a diamond-studded knuckleduster. Her husband, the Iranian emigre', is an architect. His family was close to the Shah, and is now a scattered Moslem hejira in a dozen Western capitals, plotting vengeance in desultory fashion, like so many White Russians in 1929. They may have a long wait. Father looks rather tired.

Off the plane, jet-lagged to hell and gone, in Amsterdam. A volunteer for the Image and Sound Festival drives me to The Hague in a very small car on a very large autobahn. Windmills here and there. Days later I inspect a windmill closely, a multistory preindustrial power-station of sailwork, levers, gears and thatch. An incarnation of a late-medieval tech that America simply never possessed. A somehow monstrous presence fit to scare the hauberk off Don Quixote.

The Hague is a nineteenth-century government town of close-packed four-story townhouses. The pavements, built on sand, ripple and warp like the sagging crust of an old pie. Advertisements in the bus-stops brutally abolish any air of the antique, though: "Mag ik u iets persoonlijks faxen? De Personal Fax van Canon. CANON--Meteen al een Voorsprong!" Dutch is close enough to English to nag at the ear, but it's landmined with liquid vowels and odd gutturals. The streets--"straats"--are awash with aging Euro baby-boomers, leavened with a Dutch-born populace of imperial emigres--Dutch-Indonesian, Dutch-Surinamese, Dutch-Chinese.

On Wednesday, Moluccan separatists bombarded the Indonesian embassy, near my hotel, with Molotov cocktails. A dozen zealots were injured. Nobody outside Holland and Indonesia knows much about the Moluccans, an Asian Moslem ethnic group with a nationalistic grievance. They'd love to raise hell at home in Indonesia, but when they do they're shot out of hand by fascist police with teeth like Dobermans, so they raise a stink in the old Mother Country instead, despite the fact that Holland can do almost nothing for them. Europe is full of exiles--and full of its own micro-nations: the Flemish, the Magyars, Gypsies, Corsicans and Bretons, Irish who remember Cromwell, Jews who remember Nebuchadnezzar, Basques who remember Hannibal, all like yesterday.

Ibn Battuta's world was similarly polyglot, and divided into "nations," too, run by mamelukes and moghuls who doted on tossing dissidents to packs of ravenous man-eating dogs. Muhammed Tughlug, the radiant Sultan of Delhi, punished rebels (very loosely defined) by having them cut in half, skinned alive, and/or tossed aloft by trained elephants with swords strapped to their tusks. It was bad news to cross these worthies, and yet their borders meant little, and ethnicity even less. A believer was a citizen anywhere in Islam, his loyalties devoted to Civilization--the sacred law of the Prophet--and then to his native city. Ibn Battuta was not a "Moroccan" countryman or a "Berber" ethnic, but first a learned Islamic scholar, and second a man of Tangier.

It may soon be much the same in Europe--a vague attachment to "Western democratic ideals," while one's sense of patriotism is devoted, not to one's so-called country, but to Barcelona or Amsterdam, Marseille or Berlin. (Cities, mind you, with populations every bit as large as entire nations of the medieval world.) At this period in history, the aging institution of the nation-state is being torn from above and below--below by ethnic separatists, above by the insistent demands of multinational commerce and the global environment.

Is there a solution for the micronations--besides, that is, the dark horrific example of the "Final Solution?" Maybe. Let the Lithuanians "go"--give them "freedom"--but with no local currency, no local army, no border tariffs or traffic control, no control over emigration, and with the phones and faxes open 24 hours a day. What is left? City-level government, in a loose ecumenicum.

A good trick, if anyone could pull it off. It's contrary to our recent political traditions, so it seems far-fetched and dangerous. But it's been done before. Six hundred years ago, in another world ... The fourteenth century, what Barbara Tuchman called A DISTANT MIRROR.

In Alanya, a city of medieval Anatolia, Ibn Battuta had his first introduction to the interesting organization known as the fityan. He was invited to dinner by a remarkably shabby man in an odd felt hat. Ibn Battuta accepted politely, but doubted that the young fellow had enough money to manage a proper feast.

His interpreter laughed, for the shabby young man was a powerful sheik of the fityan. "The fityans were corporations of unmarried young men representing generally the artisan classes of Anatolian towns ... The code of conduct and initiation ceremonies were founded on a set of standards and values that went by the name of futuwwa ... referring in concept to the Muslim ideal of the `youth' (fata) as the exemplary expression of the qualities of nobility, honesty, loyalty and courage. The brothers of the fityan were expected to lead lives approaching these ideal qualities, including demonstrations of generous hospitality to visiting strangers ... By the thirteenth or fourteenth centuries, the fityan associations existed in probably every Anatolian town of any size. In an era of political upheaval and fragmentation ... the fityan were filling a crucial civic function of helping to maintain urban cohesiveness ..."

Far from humble poverty, Ibn Battuta found his medieval youth-culture hosts occupying a fine downtown lodge crammed with pricey Byzantine rugs and Iraqi glassware. The lads were dressed to the nines in long cloaks, boots, knife-decked cummerbunds and snazzy white bonnets with pointed white peaks two feet high. "They brought in a great banquet, with fruits and sweetmeats, after which they began their singing and dancing." He was "greatly astonished at their generosity and innate nobility."

No more so, perhaps, than myself and my Canadian caravan companion when we found ourselves in a retrofitted nineteenth-century stove factory downtown in The Hague. Now a filmhouse, it was crammed with young Dutch media-devotees in the current multinational fityan get-up of black jeans and funny haircuts. Their code of conduct was founded in a set of standards and values that goes by the name of "cool." Six hundred years from now, the names of Mark Pauline, Laurie Anderson and Jean Baudrillard may mean little, but at the moment they are the stuff of a Sufi-like mystical bond.

We gave them a few names and second-hand homilies: Mandelbrot, ART-MATRIX, Amygdala, Jaron Lanier, Ryoichiro Debuchi--with addresses and fax numbers. We are pagans, of course, and we have video screens; but basically little happened that would have surprised the lads of the fityan--except for the shocking anomaly that many of us were women.

In his travels through Anatolia, Ibn Battuta stayed with no fewer than 25 separate fityans. But then, he was a professional.

In my time off, I tramped the streets seeking the curiosities of cities and the marvels encountered in travels. Would the hashish have surprised Ibn Battuta? I rather doubt it. You can buy hashish in The Hague in little plastic bags, for about six bucks a pop, quite openly. A hole-in-the-wall place called The Jukebox offers a varied menu: Senegalese marijuana, Swazie, Colombian, Sensemilla ... and various global subspecies of hash: Chocolata, Ketama, Kabul, Sputnik, Zero-Zero ... It's a teenage thing, bubblegum. They're not allowed in bars, Dutch teenagers. They have to smoke this harmless hashish stuff instead. They seem rather moody and somber about it, for they don't kick up their heels, scream, giggle, or frighten the horses. They just get red-eyed and a bit sluggish, and listen to old Motown records while sipping orange soda and playing, of all things, backgammon. They huff hash like monsters and nobody thinks a damn thing of it. Shocking.

In the Maldive Islands, Ibn Battuta was appointed a judge. The lax and easy life of the tropical Indian seas offended his sense of propriety. Once he sentenced a thief to have his right hand severed, a standard punishment by the sacred law, and several sissy Maldivians in the council hall fainted dead away at the sight of it. The women were worse yet. Most Maldivian women, he related, "wear only a waist-wrapper which covers them from the waist to the lowest part, but the remainder of their body remains uncovered. Thus they walk about in the bazaars and elsewhere. When I was appointed judge there, I strove to put an end to this practice and commanded the women to wear clothes; but I could not get it done. I would not let a woman enter my court to make a plaint unless her body were covered; beyond this, however, I was unable to do anything."

Poor fellow. Later in his career Ibn Battuta had the good luck to accompany a slave-train of six hundred African women as they were force-marched across the blistering Sahara. There was a great deal of money in the slave-trade; its master-traders were very well-respected. Ibn Battuta owned several slaves in his career, but he was an unlucky master; they could not keep up with his restless migrations, and drowned, or froze, or fell ill, or were sold. He does not keep count of the number of children he sired, but there were many, mostly by slave-women.

What atrocities are we committing today, that we too take in stride?

History lives in the Mauritshuis, shelter to a horde of Rembrandts and Vermeers. Portraits--with that pre-photographic intensity that an image had when it was one-of-a-kind, likely the only visual record of the sitter that would ever be made. The portraits are formalized, flattering, very studied, and they lie a lot. The children of the rich pick garlands of flowers in unlikely getups of velvet and chiffon, expensive fabrics that a grass-stain would ruin forever. This kind of portraiture is a dead visual language now, and when the language no longer works, the lies become evident, like someone else's old propaganda.

It was a rich and earthy life. Leather, wood, wool, bloody still-life heaps of slaughtered game. A woman in satin rides side-saddle with a boar-spear in one dainty gauntlet. Huntsmen let fly with flintlock muskets at a foam-snorting pig. The sky has never known an airplane; these are clouds that have never been seen from above, fleecy and untainted by smog.

But there is honesty, too. Vermeer's famous Girl in a Blue Turban is not posed, but caught in an instant in the mind's eye. She is plainly dressed, and her sweet frail face strikes the viewer in a sudden rush, the very opposite of all those formal images of Dutch aristos with unearned power and too much jewelry.

Here are Rembrandt's self-portraits--a big-nosed kid of twenty-two or so, striking a pose in fake-looking armor, the detail excellent, but perhaps a bit forced. Transmuted by time and experience, he becomes a big-nosed saggy-eyed veteran, a gold pendant in one earlobe. Less youth--but more gold. And lightning-quick brushwork that catches the play of light with an almost frightening ease.

Flattery was their stock in trade. They knew it was a shuck, a stunt, a trick. Ever notice how good artists can make each other look? With their palettes hooked over their thumbs they resemble philosopher-kings. The big money was in flattery, but they were restless. Here and there real life boils out in a rush. J. V. D. Heyde (1637-1712) paints the Jesuit Church of Dusseldorf. A couple of black-clad Jesuits tramp the street talking, very likely up to no good. A beggar-woman nurses a baby, with an older kid taking alms in the gutter. Who is the father? Ibn Battuta? Some working-stiff and his wife push a monster wheelbarrow up the hill, putting their backs into it. Dogs piss and tussle, and loungers bowl ninepins in the public square.

F. van Mieris (1635-1681) clearly spent a lot of time in bars. Here, taken from low-life, is a wasted blonde barmaid in a white dress, pouring wine for a redheaded captain-at-arms. In the doorway, a red dog fucks a white bitch, a symbol as stone-obvious as being hit in the head with a bung-starter.

A block away from the Mauritshuis is a shopping district, the streets bent and skinny and pre-automotive, an open-air mall. MEGA WORLD COMPUTERWINKELS, reads the sign outside the software shop. Soon all Europe will be mega world computerwinkels, cool nets of data, a cybernetic Mecca. Our Mecca will be electronic, and you'll be a nobody till you've made that sacred pilgrimage.

We look to the future. Extrapolation is powerful, but so is analogy, and history's lessons must be repeated helplessly, until they are seen and understood and deliberately broken. At 54 Javastraat, the Ambassade van Iran has telecameras trained on its entrances. A wounded Islam is alive and convulsing in fevered spasms.

65 Zeestraat contains the Panorama Mesdag, the nineteenth century's answer to cyberspace. Tricks of light are harnessed to present a vast expanse of intricately painted, cunningly curved canvas, 360 degrees in the round. It presents, to the stunned eye, the seaside resort of Scheveningen, 1881 A.D. You stand at the center of a round wooden platform, a kind of faux-beachhouse, fenced in by railings; at your feet stretches an expanse of 100% real sand, studded with torn nets, rusting anchors, washed-up wooden shoes, fading cunningly into the canvas. This must surely be Reality--there's trash in it, it has to be real. The Panorama's false horizon will not sit still for the eye, warping in and out like a mescaline trip. Coal smoke hangs black and static from a dozen painted stacks, the bold ancestry of our current crimes against the atmosphere.

There used to be dozens of these monster Panoramas, in Paris, Hamburg, London. The Panorama is a dead medium, as dead as the stereograph, whose ungainly eye-gripping tin hood is now reborn as the head-wrapping Sony Watchmans of Jaron Lanier's Virtual Reality.

It all returns. The merchants and pilgrims of Ibn Battuta's flourishing Islam push their trade-routes farther, farther. Trade expands, populations swarm, laws and libraries grow larger and more refined. At length trade opens to an obscure corner of Siberia, where a certain species of rodent harbors a certain flea.

Ibn Battuta witnesses the result, without ever understanding it. June, 1348: travellers tell him of a virulent unknown disease raging in Gaza, a thousand people dying every day. Swellings appear in groin and neck and armpits, with nausea and delirium. If it takes to the lungs, the victim spits blood and dies within hours. In the town of Homs, in Syria, Ibn Battuta is engulfed by the wave of Black Death. Three hundred die on the day of his arrival there.

In Damascus, two thousand are dying each day and the great polyglot metropolis has shuddered to a halt. The amirs, the sharifs, the judges, and all the classes of the Moslem people, have assembled in the Great Mosque to supplicate God. After a night of prayer, they march out at dawn, barefoot, the Holy Koran in their hands. And then:

"The entire population of the city joined in the exodus, male and female, small and large, the Jews went out with their book of the law and the Christians with their Gospel, their women and children with them; the whole concourse of them in tears and humble supplications, imploring the favor of God through His Books and His Prophets."

As the pestilence lurches from city to city, from mosque to caravanserai, the afflicted scatter in terror, carrying their fleas like pearls throughout the vast linked network of the civilized world. From China to the Atlantic coast, Ibn Battuta's world is one, and therefore terribly vulnerable. The Great Wall of China is no defense; and Europe's foremost traders, the cosmopolitan Genoans and Venetians, will ship a cargo of death throughout the Mediterranean. Paris, Barcelona, Valencia, Tunis, Cairo, Damascus, Aleppo and Bordeaux will all suffer equal calamity in the dreadful spring and summer of 1348. Their scientific experts, those doctors who survive, will soberly advise their patients to apply egg yolks to the buboes, wear magical amulets, and have their sickbeds strewn with fresh flowers.

Ibn Battuta is now forty-five. Perhaps unnerved by the plague, he decides to return home to Tangier after twenty-five years on the road. For a while, the seasoned traveller outruns the horror, but it soon catches up with him again. When he reaches Tangier at last, the Death has come and gone. His father has been dead for fifteen years. But the plague has killed his aged mother. He misses her by mere months.

The havoc is unspeakable, beyond imagination. The Plague will return again in the next generation, and the next again, emptying cities and annihilating dynasties. The very landscape will change: irrigation canals will silt up, grass will grow over the trade-roads, forests will grow in old villages. It is Apocalypse.

Life will, nevertheless, go on. Civilization, pruned back bloodily and scourged by God Himself, refuses to collapse. History lurches under the blow, changes course--and moves on. A century of horror will fade, and, unguessed by anyone, a Renaissance beckons . . .

Ibn Battuta will meet a young Muslim poet from Spain, named Ibn Juzayy. Together they will compose a formal rihla of his travels. He works from memory--a vivid and well-trained memory, for Ibn Battuta, as a scholar of repute, can recite the entire Koran unaided, as well as many canons of the sacred law. Nevertheless, some poetic license will be taken, some episodes distorted, mis-remembered, or confused, some outright lies told. The great traveller will be regarded by many as a charlatan, or as a mere entertainer, a spinner of fantastic tales.

His Rihla will be little known until the nineteenth century, when European scholars discover it with astonishment and wonder.

CATSCAN #9

"Digital Dolphins in the Dance of Biz"

"It's the crystallization of a community!" the organizer exulted. He was a skinny, manic, handwaving guy, with a glittering eye and a sly toothy grin. He wore slacks, a zippered shirt of a color not found in nature, and a two-foot-tall novelty cowboy-hat, of bright purple felt, with a polka-dot hatband.

The "community" in question were computer game designers, swarming in a big roadside hotel in Silicon Valley, for four days in March 1991. There were close to four hundred of them. Time once again for "Computer Game Developers' Conference." This was the Fifth Annual gig, and the biggest one yet for "gaming professionals," and the best yet, maybe even the richest yet -- but, according to what I heard over the wine and cheese, it was somewhat less weird than the earlier ones. Almost dignified by contrast, almost professional. Some side-effect of all that "crystallization," presumably....

Five brief years ago, the very first such game-design conference had been convened in Chris Crawford's living room, and with room to spare. Mr. Crawford was the gentleman in the purple twenty-gallon hat.

I recognized the funny-hat syndrome. Made me feel right at home.

When I first met Damon Knight, at Clarion, this legendary SF critic, editor and organizer had shown up with a big white bushel-basket beard, half-a-dozen hollow plastic baseball bats, and a great bounding bag full of rubber superballs, which he proceeded to fling into the hallways and whack with vim. Damon Knight, as a turbo-weirdo, a veritable ne plus ultra of cracked genre loon, does not even have to try to pass for normal. And neither does Chris Crawford. This is pretty much what genuine "power" and "influence" look like, in a milieu of creative lunatics.

Chris Crawford is founder of the gaming conference, author of three books and thirteen computer games, and the premier critic, theorist, and analyst for THE JOURNAL OF COMPUTER GAME DESIGN: "The finest periodical dedicated to computer game design -- the longest-running periodical dedicated to computer game design -- the ONLY periodical dedicated to computer game design!"

Computer gaming, like science fiction, has old roots; they even share a common ancestor in H.G. Wells, a great player of simulation war-games. But as a conscious profession, "computer game design" is only five years old.

Science fiction writing as a conscious profession dates back to Knight's founding of the Milford Conference in 1956, followed, almost ten leisurely years later, by his establishment of the SFWA. The metabolism of computer gaming is very swift. Science fiction writers are to computer game designers as mosasaurs are to dolphins.

So, I had arrived in San Jose at the functional equivalent of a SFWA gig. A neatly desktop-published programme announced, on page one, "Our Goals for the Conference:

* to foster information exchange among professionals in the computer game development industry,

* to strengthen the network of personal relationships in the computer game development community,

* to increase artistic and financial recognition for computer game developers, and

* to enhance the quality of entertainment software."

Instantly recognizable SFWA committeespeak -- people trying hard to sound like serious professionals. Let's hear those goals again, in actual English:

* to hang out and gossip;

* to meet old friends again;

* to try to figure out some way to make more money and fame from obstreperous publishers, crooked distributors, and other powerful sons-of-bitches; and, (last and conspicuously least)

* to kind of try and do a better job artistically.

Pretty much the same priorities as any Nebula gig.

The attendees were younger -- a different demographic than the SFWA's -- but then their pursuit is younger, too. They looked a little different: still mostly white, still mostly male, still mostly myopic, but much more of that weird computer-person vibe: the fuzzy Herman Melville beards, the middle-aged desk-spread that comes from punching deck sixty hours a week, whilst swilling endless Mountain Dews and Jolt Colas, in open console-cowboy contempt of mere human flesh and its metabolic need for exercise and nutrition... There were a few more bent engineers, more techies gone seriously dingo, than you'd see at any SFWA gig. And a faint but definite flavor of Hollywood: here and there, a few genuinely charismatic operators, hustlers, guys in sharp designer suits, and career gals who jog, and send faxes, and have carphones.

As a group, they're busily recapitulating arguments that SF had decades ago. The number one ideological struggle of CGDC '91 -- an actual panel debate, the best-attended and the liveliest -- concerned "depth of play versus presentation." Which is more important -- the fun of a game, its inherent qualities of play -- or, the grooviness of its graphics and sound, its production values? This debate is the local evolutionary equivalent of "Sense of Wonder" versus "Literary Excellence" and is just about as likely to be resolved.

And then there's the ever-popular struggle over terminology and definition. ("What Is Science Fiction?") What is a "computer-game?" Not just "videogames" certainly -- that's kid stuff ("sci-fi"). Even "Computer Games" is starting to sound rather musty and declasse', especially as the scope of our artistic effort is widening, so that games look less and less like "games," and more and more like rock videos or digitized short films. Maybe the industry would be better off if we forgot all about "games," and suavely referred to our efforts as "computer entertainment" ("speculative fiction").

And then there are the slogans and the artistic rules-of-thumb. "Simple, Hot, and Deep." A game should be "simple": easy to learn, without excess moving parts and irrelevant furbelows to burden the player's comprehension. It should be "hot" -- things should happen, the pace should not lag, it should avoid dead spots, and maintain interest of all players at all times. And it should be "deep" -- it should be able to absorb as much strategic ingenuity as the player is willing to invest; there should be layer after layer of subtlety; it should repay serious adult attention. "An hour to learn, a lifetime to master."

And: "Throw the first one away." Game design is an iterative process. Games should be hammered into shape, tested, hammered again, tested again. The final product may bear as little relation to the original "idea" as the average Hollywood film does to the shooting script. Good game-testers can be as vital and useful as good editors in fiction; probably more so. There are other issues of artistic expression. There is, for instance, censorship, both external, and self-imposed. Young kids like computer games; even quite sophisticated games end up in the hands of little kids, and are designed accordingly. The game "Maniac Mansion" was pulled from the shelves of the Toys-R-Us chain because (horror) it had the word "lust" on the box!

"Hidden Agenda" is a very innovative and highly politicized simulation game, in which the player must take the role of President of a small and turbulent Central American country, menaced by internal violence and Cold War geopolitics. "Hidden Agenda" is universally admired, but had a hard time finding a publisher.

There was an earnest panel on ethics in graphic violence. When a villain is shot in a game, should the designer incorporate digitized blood and guts in the scene? Some game designers feel quite disturbed about "the Nintendo War" in the Gulf, in much the way that some SF writers felt, some years back, about the advent of Reagan's "Star Wars." "Space exploration" had seemed a noble thing, until the prospective advent of orbital x-ray laser fortresses. Was this what all our shiny rocket ships were supposed to be about, in the end? Now game designers feel a similar sneaking guilt and a similar sense of betrayal, suspecting that videogames have in fact cheapened violence, and made inflicting death-by-computer seem a fine occupation for American youth. It seems perfectly fine to kill "enemies" with cybernetic air-strikes, as long as their blood doesn't actually splatter the VDT screen...

And then there's pornography, already present in the burgeoning CD-ROM market. If you're playing strip-poker with a virtual digitized Playboy-model, is that harmless fun-for-guys stuff, with nobody exploited, nobody hurt? Or is it some kind of (gulp) hideously oppressive dehumanized computer-assisted sex-objectification?

And then, of course, there's business. Biz. Brass tacks. Your average game designer makes rather more than your average SFWA member. It's still not a living wage. The gamers have to work harder, they have more specialized skills, they have less creative control, and the pace is murderous. Sixty-hour-weeks are standard in the industry, and there's no such thing as a "no-layoffs" policy in the software biz. Everybody wants to hire a hard-working, technically talented enthusiast; having found such a person, it is standard to put him on the "burnout track" and work him to collapse in five years flat, leaving the staggering husk to limp away from "entertainment" to try and find a straight job someplace, maintaining C code.

As "professionalism" spreads its pinstriped tentacles, the pioneers and the lone wolves are going to the wall. There is "consolidation" in the industry, that same sinister development that has led written SF deeper and deeper into the toils of gigantic multinational publishing cartels and malignant bookstore chains. "Software chains" have sprung up: Babbage's, Electronic Boutique, Walden Software, Soft Warehouse, Egghead. The big game publishers are getting bigger, the modes of publishing and distribution are calcifying and walling-out the individual entrepreneur.

"Sequelism" is incredibly common; computer gaming builds off established hits even more shamelessly than SF's nine-part trilogy-trilogies. And "games" in general are becoming more elaborate: larger teams of specialized workers tackling pixel animation, soundtrack, box design; more and more man-hours invested into the product, by companies that now look less like young Walt Disney drawing in a tabletop in Kansas, and much more like old Walt Disney smoking dollar cigars in Hollywood. It's harder and harder for a single creative individual, coming from outside, to impose his vision on the medium.

Some regard this development as a healthy step up the ladder to the Real Money: Lucasfilm Games, for instance, naturally wants to be more like its parent Lucasfilm, and the same goes for Walt Disney Computer.

But others suspect that computer-gaming may suffer artistically (and eventually financially) by trying to do too much for too many. Betty Boop cartoons were simple and cheap, but were tremendously popular at the time of their creation, and are still cherished today. Fleischer Studios came a cropper when they tried to go for full-animation feature films, releasing bloated, overproduced bombs like GULLIVER that tried and failed to appeal to a mass audience.

And then there is The Beast Men Call 'Prodigy.' Prodigy is a national computer network that has already absorbed nine hundred million dollars of start-up money from IBM and Sears. Prodigy is, in short, a Major Player. In the world of computer gaming, $900,000,000 is the functional equivalent of nuclear superpower status. And Prodigy is interested in serious big-time "computer entertainment." Prodigy must win major big-time participation by straight people, by computer illiterates. To survive, it must win an entirely new and unprecedentedly large popular audience.

And Prodigy was at the gaming conference to get the word out. Prodigy subscribers play twelve thousand games of "Chief Executive Officer" every day! What Prodigy wants is, well, the patronage of Normal People. Nothing offensive, nothing too wacky, nothing too weird. They want to be the Disney Channel of Cyberspace. They want entirely new kinds of computer games. Games that look and smell like primetime TV, basically. A crisply dressed Prodigy representative strongly urged game-designers present to "lose the Halloween costumes." Forget "the space stuff" and "the knights in armor." Prodigy wants games normal folks will play, something that reflects general American experience. Something like... hmmm... "a high school popularity contest."

The audience seemed stunned. Scarcely a human being among them, of either sex, could have ever won a high school popularity contest. If they'd ever been "popular," they would never have spent so much time in front of computers. They would have been out playing halfback, or getting laid, or doing other cool high-school things -- doing anything but learning how to program. Not only were they stunned, but they rather resented the suggestion; the notion that, after years of trying to be Frank Frazetta, they were suddenly to become Norman Rockwell. I heard sullen mutterings later about "Ozzie and Harriet Prodigy droids."

And yet -- this may well be The Future for "computer entertainment." Why the hell does prime-time TV look as bad and stupid as it does? There are very good reasons for this; it's not any kind of accident. And Prodigy understands those reasons as well or better than any wacko gamedesigner in a big purple hat.

Bleak as this future prospect may seem, there was no lack of optimism, the usual ecstatic vaporware common to any business meeting of "computer people." Computer game designers have their faces turned resolutely to the future; they have little in the way of "classics." Their grails are all to come, on the vast resistless wings of technological advance.

At the moment, "interactive characters" in games, characters that behave realistically, without scripts, and challenge or befriend the player, are primitive and scarcely workable constructs. But wait till we get Artificial Intelligence! Then we'll build characters who can carry out dramas all by themselves!!

And games are becoming fatter and more elaborate; so much so that the standard money-making target machine, the cheap IBM-PC clone with the 16-bit 8088 chip running at five megahertz, is almost unable to hold them. Origin's state-of-the-art "Wing Commander" game can take up half a hard disk. But bigger machines are coming soon. Faster, with much better graphics. Digital sound as good as stereos, and screens better than TV! Cheap, too!

And then there's CD-ROM. Software, recorded on a shiny compact disk, instead of bloated floppies and clunking hard disks. You can put fifteen hundred (1500!) Nintendo cartridge games onto one compact disk -- and it costs only a dollar to make! Holy Cow!

The industry is tough and hardened now. It survived the Great Crash of 1984, which had once seemed the end of everything. It's crewed by hardy veterans. And just look at that history! Why, twenty years ago there was nothing here at all; now computer entertainment's worth millions! Kids with computers don't do anything much with them at all, except play games -- and their parents would admit the same thing, if they told the truth. And in the future -- huge games, involving thousands of people, on vast modem-linked networks! Of course, those networks may look much like, well, Prodigy....

But even without networks, the next generation of PCs will be a thing of dazzlement. Of course, most everything written for the old PC's, and for Macs and Amigas and such, will be unceremoniously junked, along with the old PC's themselves. Thousands of games... thousands of man-hours of labor and design... erased from human memory, a kind of cultural apocalypse...

Everything simply gone, flung out in huge beige plastic heaps like junked cars. Dead tech.

But perhaps "cultural apocalypse" is overstating matters. Who cares if people throw away a bunch of obsolete computers? After all, they're obsolete. So what if you lose all the software, too? After all, it's just outdated software. They're just games. It's not like they're real art.

And there's the sting.

A sting one should remember, and mull upon, when one hears of proposals to digitize the novel. The Sony reader, for instance. A little hand-held jobby, much like its kissing cousin the Nintendo Game Boy, but with a print-legible screen.

Truck down to the local Walden Software, and you buy the local sword-and-planet trilogy right on a disk! Probably has a game tie-in, too: read the book; play the game!

And why stop there? After all, you've got all this digital processing power going to waste.... Have it be an illustrated book! Illustrated with animated sequences! And wait -- this book has a soundtrack! What genius! Now even the stupidest kid on the block is gonna want to learn to read. It's a technical fix for the problem of withering literature!

And think -- you could put a hundred SF books on a compact disk for a buck! If they're public domain books.... Still, if there's enough money in it, you can probably change the old-fashioned literary copyright laws in your favor. Failing that, do it in Taiwan or Thailand or Hong Kong, where software piracy is already deeply embedded in the structure of business. (Hong Kong pirates can steal a computer game, crack the software protection, and photocopy the rules and counters, and sell it all back to the US in a ziplock baggie, in a week flat. Someday soon books will be treated like this!) Digital Books for the Information Age -- books that aspire to the exalted condition of software! In the, well, "cultural logic of postmodern capitalism," all our art wants to be digital now. First, so you can have it. Replicate it. Reproduce it, without loss of fidelity. And, second -- and this is the hidden agenda -- so you can throw it away. And never have to look at it again.

How long will the first generation of "reading-machines" last? As long as the now utterly moribund Atari 400 game machine? Possibly. Probably not. If you write a "book" for any game machine -- if you write a book that is software -- you had better be prepared to live as game software people live, and think as game software people think, and survive as game software people survive.

And they're pretty smart people really. Good fun to hang out with. Those who work for companies are being pitilessly worked to death. Those who work for themselves are working themselves to death, and, without exception, they all have six or seven different ways of eking out a living in the crannies of silicon culture. Those who own successful companies, and those who write major hits, are millionaires. This doesn't slow down their workaholic drive though; it only means they get bigger and nicer toys. They're very bright, unbelievably hard-working, very put-upon; fast on their feet, enamored of gambling... and with a sadly short artistic lifespan. And they're different. Very different. Digital dolphins in their dance of biz -- not like us print-era mosasaurs.

Want a look at what it would be like? Read THE JOURNAL OF COMPUTER GAME DESIGN (5251 Sierra Road, San Jose, CA 95132 -- $30/six issues per year). It's worth a good long look. It repays close attention.

And don't say I didn't warn you.

CATSCAN #10

From SCIENCE FICTION EYE #10

A Statement Of Principle

I just wrote my first nonfiction book. It's called THE HACKER CRACKDOWN: LAW AND DISORDER ON THE ELECTRONIC FRONTIER. Writing this book has required me to spend much of the past year and a half in the company of hackers, cops, and civil libertarians.

I've spent much time listening to arguments over what's legal, what's illegal, what's right and wrong, what's decent and what's despicable, what's moral and immoral, in the world of computers and civil liberties. My various informants were knowledgeable people who cared passionately about these issues, and most of them seemed well-intentioned. Considered as a whole, however, their opinions were a baffling mess of contradictions.

When I started this project, my ignorance of the issues involved was genuine and profound. I'd never knowingly met anyone from the computer underground. I'd never logged on to an underground bulletin-board or read a semilegal hacker magazine. Although I did care a great deal about the issue of freedom of expression, I knew sadly little about the history of civil rights in America or the legal doctrines that surround freedom of the press, freedom of speech, and freedom of association. My relations with the police were firmly based on the stratagem of avoiding personal contact with police to the greatest extent possible.

I didn't go looking for this project. This project came looking for me. I became inextricably involved when agents of the United States Secret Service, acting under the guidance of federal attorneys from Chicago, came to my home town of Austin on March 1, 1990, and confiscated the computers of a local science fiction gaming publisher. Steve Jackson Games, Inc., of Austin, was about to publish a gaming-book called GURPS Cyberpunk.

When the federal law-enforcement agents discovered the electronic manuscript of CYBERPUNK on the computers they had seized from Mr. Jackson's offices, they expressed grave shock and alarm. They declared that CYBERPUNK was "a manual for computer crime."

It's not my intention to reprise the story of the Jackson case in this column. I've done that to the best of my ability in THE HACKER CRACKDOWN; and in any case the ramifications of March 1 are far from over. Mr. Jackson was never charged with any crime. His civil suit against the raiders is still in federal court as I write this.

I don't want to repeat here what some cops believe, what some hackers believe, or what some civil libertarians believe. Instead, I want to discuss my own moral beliefs as a science fiction writer -- such as they are. As an SF writer, I want to attempt a personal statement of principle.

It has not escaped my attention that there are many people who believe that anyone called a "cyberpunk" must be, almost by definition, entirely devoid of principle. I offer as evidence an excerpt from Buck BloomBecker's 1990 book, SPECTACULAR COMPUTER CRIMES. On page 53, in a chapter titled "Who Are The Computer Criminals?", Mr. BloomBecker introduces the formal classification of "cyberpunk" criminality.

"In the last few years, a new genre of science fiction has arisen under the evocative name of 'cyberpunk.' Introduced in the work of William Gibson, particularly in his prize-winning novel NEUROMANCER, cyberpunk takes an apocalyptic view of the technological future. In NEUROMANCER, the protagonist is a futuristic hacker who must use the most sophisticated computer strategies to commit crimes for people who offer him enough money to buy the biological creations he needs to survive. His life is one of cynical despair, fueled by the desire to avoid death. Though none of the virus cases actually seen so far have been so devastating, this book certainly represents an attitude that should be watched for when we find new cases of computer virus and try to understand the motivations behind them.

"The New York Times's John Markoff, one of the more perceptive and accomplished writers in the field, has written than a number of computer criminals demonstrate new levels of meanness. He characterizes them, as do I, as cyberpunks."

Those of us who have read Gibson's NEUROMANCER closely will be aware of certain factual inaccuracies in Mr. BloomBecker's brief review. NEUROMANCER is not "apocalyptic." The chief conspirator in NEUROMANCER forces Case's loyalty, not by buying his services, but by planting poison-sacs in his brain. Case is "fueled" not by his greed for money or "biological creations," or even by the cynical "desire to avoid death," but rather by his burning desire to hack cyberspace. And so forth.

However, I don't think this misreading of NEUROMANCER is based on carelessness or malice. The rest of Mr. BloomBecker's book is generally informative, well-organized, and thoughtful. Instead, I feel that Mr. BloomBecker manfully absorbed as much of NEUROMANCER as he could without suffering a mental toxic reaction. This report of his is what he actually *saw* when reading the novel.

NEUROMANCER has won quite a following in the world of computer crime investigation. A prominent law enforcement official once told me that police unfailingly conclude the worst when they find a teenager with a computer and a copy of NEUROMANCER. When I declared that I too was a "cyberpunk" writer, she asked me if I would print the recipe for a pipe-bomb in my works. I was astonished by this question, which struck me as bizarre rhetorical excess at the time. That was before I had actually examined bulletin-boards in the computer underground, which I found to be chock-a-block with recipes for pipe-bombs, and worse. (I didn't have the heart to tell her that my friend and colleague Walter Jon Williams had once written and published an SF story closely describing explosives derived from simple household chemicals.)

Cyberpunk SF (along with SF in general) has, in fact, permeated the computer underground. I have met young underground hackers who use the aliases "Neuromancer," "Wintermute" and "Count Zero." The Legion of Doom, the absolute bete noire of computer law-enforcement, used to congregate on a bulletin-board called "Black Ice."

In the past, I didn't know much about anyone in the underground, but they certainly knew about me. Since that time, I've had people express sincere admiration for my novels, and then, in almost the same breath, brag to me about breaking into hospital computers to chortle over confidential medical reports about herpes victims.

The single most stinging example of this syndrome is "Pengo," a member of the German hacker-group that broke into Internet computers while in the pay of the KGB. He told German police, and the judge at the trial of his co-conspirators, that he was inspired by NEUROMANCER and John Brunner's SHOCKWAVE RIDER.

I didn't write NEUROMANCER. I did, however, read it in manuscript and offered many purportedly helpful comments. I praised the book publicly and repeatedly and at length. I've done everything I can to get people to read this book.

I don't recall cautioning Gibson that his novel might lead to anarchist hackers selling their expertise to the ferocious and repulsive apparat that gave the world the Lubyanka and the Gulag Archipelago. I don't think I could have issued any such caution, even if I'd felt the danger of such a possibility, which I didn't. I still don't know in what fashion Gibson might have changed his book to avoid inciting evildoers, while still retaining the integrity of his vision -- the very quality about the book that makes it compelling and worthwhile.

This leads me to my first statements of moral principle.

As a "cyberpunk" SF writer, I am not responsible for every act committed by a Bohemian with a computer. I don't own the word "cyberpunk" and cannot help where it is bestowed, or who uses it, or to what ends.

As a science fiction writer, it is not my business to make people behave. It is my business to make people imagine. I cannot control other people's imaginations -- any more than I would allow them to control mine.

I am, however, morally obliged to speak out when acts of evil are committed that use my ideas or my rhetoric, however distantly, as a justification.

Pengo and his friends committed a grave crime that was worthy of condemnation and punishment. They were clever, but treacherously clever. They were imaginative, but it was imagination in a bad cause. They were technically accomplished, but they abused their expertise for illicit profit and to feed their egos. They may be "cyberpunks" -- according to many, they may deserve that title far more than I do -- but they're no friends of mine.

What is "crime"? What is a moral offense? What actions are evil and dishonorable? I find these extraordinarily difficult questions. I have no special status that should allow me to speak with authority on such subjects. Quite the contrary. As a writer in a scorned popular literature and a self-professed eccentric Bohemian, I have next to no authority of any kind. I'm not a moralist, philosopher, or prophet. I've always considered my "moral role," such as it is, to be that of a court jester -- a person sometimes allowed to speak the unspeakable, to explore ideas and issues in a format where they can be treated as games, thought-experiments, or metaphors, not as prescriptions, laws, or sermons.

I have no religion, no sacred scripture to guide my actions and provide an infallible moral bedrock. I'm not seeking political responsibilities or the power of public office. I habitually question any pronouncement of authority, and entertain the liveliest skepticism about the processes of law and justice. I feel no urge to conform to the behavior of the majority of my fellow citizens. I'm a pain in the neck.

My behavior is far from flawless. I lived and thrived in Austin, Texas in the 1970s and 1980s, in a festering milieu of arty crypto-intellectual hippies. I've committed countless "crimes," like millions of other people in my generation. These crimes were of the glamorous "victimless" variety, but they would surely have served to put me in prison had I done them, say, in front of the State Legislature.

Had I lived a hundred years ago as I live today, I would probably have been lynched by outraged fellow Texans as a moral abomination. If I lived in Iran today and wrote and thought as I do, I would probably be tried and executed.

As far as I can tell, moral relativism is a fact of life. I think it might be possible to outwardly conform to every jot and tittle of the taboos of one's society, while feeling no emotional or intellectual commitment to them. I understand that certain philosophers have argued that this is morally proper behavior for a good citizen. But I can't live that life. I feel, sincerely, that my society is engaged in many actions which are foolish and shortsighted and likely to lead to our destruction. I feel that our society must change, and change radically, in a process that will cause great damage to our present system of values. This doesn't excuse my own failings, which I regret, but it does explain, I hope, why my lifestyle and my actions are not likely to make authority feel entirely comfortable.

Knowledge is power. The rise of computer networking, of the Information Society, is doing strange and disruptive things to the processes by which power and knowledge are currently distributed. Knowledge and information, supplied through these new conduits, are highly corrosive to the status quo. People living in the midst of technological revolution are living outside the law: not necessarily because they mean to break laws, but because the laws are vague, obsolete, overbroad, draconian, or unenforceable. Hackers break laws as a matter of course, and some have been punished unduly for relatively minor infractions not motivated by malice. Even computer police, seeking earnestly to apprehend and punish wrongdoers, have been accused of abuse of their offices, and of violation of the Constitution and the civil statutes. These police may indeed have committed these "crimes." Some officials have already suffered grave damage to their reputations and careers -- all the time convinced that they were morally in the right; and, like the hackers they pursued, never feeling any genuine sense of shame, remorse, or guilt.

I have lived, and still live, in a counterculture, with its own system of values. Counterculture -- Bohemia -- is never far from criminality. "To live outside the law you must be honest" was Bob Dylan's classic hippie motto. A Bohemian finds romance in the notion that "his clothes are dirty but his hands are clean." But there's danger in setting aside the strictures of the law to pin one's honor on one's personal integrity. If you throw away the rulebook to rely on your individual conscience you will be put in the way of temptation.

And temptation is a burden. It hurts. It is grotesquely easy to justify, to rationalize, an action of which one should properly be ashamed. In investigating the milieu of computer-crime I have come into contact with a world of temptation formerly closed to me. Nowadays, it would take no great effort on my part to break into computers, to steal long-distance telephone service, to ingratiate myself with people who would merrily supply me with huge amounts of illicitly copied software. I could even build pipe-bombs. I haven't done these things, and disapprove of them; in fact, having come to know these practices better than I cared to, I feel sincere revulsion for them now. But this knowledge is a kind of power, and power is tempting. Journalistic objectivity, or the urge to play with ideas, cannot entirely protect you. Temptation clings to the mind like a series of small but nagging weights. Carrying these weights may make you stronger. Or they may drag you down.

"His clothes are dirty but his hands are clean." It's a fine ideal, when you can live up to it. Like a lot of Bohemians, I've gazed with a fine disdain on certain people in power whose clothes were clean but their hands conspicuously dirty. But I've also met a few people eager to pat me on the back, whose clothes were dirty and their hands as well. They're not pleasant company.

Somehow one must draw a line. I'm not very good at drawing lines. When other people have drawn me a line, I've generally been quite anxious to have a good long contemplative look at the other side. I don't feel much confidence in my ability to draw these lines. But I feel that I should. The world won't wait. It only took a few guys with poolcues and switchblades to turn Woodstock Nation into Altamont. Haight-Ashbury was once full of people who could trust anyone they'd smoked grass with and love anyone they'd dropped acid with -- for about six months. Soon the place was aswarm with speed-freaks and junkies, and heaven help us if they didn't look just like the love-bead dudes from the League of Spiritual Discovery. Corruption exists, temptation exists. Some people fall. And the temptation is there for all of us, all the time.

I've come to draw a line at money. It's not a good line, but it's something. There are certain activities that are unorthodox, dubious, illegal or quasi-legal, but they might perhaps be justified by an honest person with unconventional standards. But in my opinion, when you're making a commercial living from breaking the law, you're beyond the pale. I find it hard to accept your countercultural sincerity when you're grinning and pocketing the cash, compadre.

I can understand a kid swiping phone service when he's broke, powerless, and dying to explore the new world of the networks. I don't approve of this, but I can understand it. I scorn to do this myself, and I never have; but I don't find it so heinous that it deserves pitiless repression. But if you're stealing phone service and selling it -- if you've made yourself a miniature phone company and you're pimping off the energy of others just to line your own pockets -- you're a thief. When the heat comes to put you away, don't come crying "brother" to me.

If you're creating software and giving it away, you're a fine human being. If you're writing software and letting other people copy it and try it out as shareware, I appreciate your sense of trust, and if I like your work, I'll pay you. If you're copying other people's software and giving it away, you're damaging other people's interests, and should be ashamed, even if you're posing as a glamorous info-liberating subversive. But if you're copying other people's software and selling it, you're a crook and I despise you.

Writing and spreading viruses is a vile, hurtful, and shameful activity that I unreservedly condemn.

There's something wrong with the Information Society. There's something wrong with the idea that "information" is a commodity like a desk or a chair. There's something wrong with patenting software algorithms. There's something direly meanspirited and ungenerous about inventing a language and then renting it out to other people to speak. There's something unprecedented and sinister in this process of creeping commodification of data and knowledge. A computer is something too close to the human brain for me to rest entirely content with someone patenting or copyrighting the process of its thought. There's something sick and unworkable about an economic system which has already spewed forth such a vast black market. I don't think democracy will thrive in a milieu where vast empires of data are encrypted, restricted, proprietary, confidential, top secret, and sensitive. I fear for the stability of a society that builds sandcastles out of databits and tries to stop a real-world tide with royal commands.

Whole societies can fall. In Eastern Europe we have seen whole nations collapse in a slough of corruption. In pursuit of their unworkable economic doctrine, the Marxists doubled and redoubled their efforts at social control, while losing all sight of the values that make life worth living. At last the entire power structure was so discredited that the last remaining shred of moral integrity could only be found in Bohemia: in dissidents and dramatists and their illegal samizdat underground fanzines. Their clothes were dirty but their hands were clean. The only agitprop poster Vaclav Havel needed was a sign saying *Vaclav Havel Guarantees Free Elections.* He'd never held power, but people believed him, and they believed his Velvet Revolution friends.

I wish there were people in the Computer Revolution who could inspire, and deserved to inspire, that level of trust. I wish there were people in the Electronic Frontier whose moral integrity unquestionably matched the unleashed power of those digital machines. A society is in dire straits when it puts its Bohemia in power. I tremble for my country when I contemplate this prospect. And yet it's possible. If dire straits come, it can even be the last best hope.

The issues that enmeshed me in 1990 are not going to go away. I became involved as a writer and journalist, because I felt it was right. Having made that decision, I intend to stand by my commitment. I expect to stay involved in these issues, in this debate, for the rest of my life. These are timeless issues: civil rights, knowledge, power, freedom and privacy, the necessary steps that a civilized society must take to protect itself from criminals. There is no finality in politics; it creates itself anew, it must be dealt with every day.

The future is a dark road and our speed is headlong. I didn't ask for power or responsibility. I'm a science fiction writer, I only wanted to play with Big Ideas in my cheerfully lunatic sandbox. What little benefit I myself can contribute to society would likely be best employed in writing better SF novels. I intend to write those better novels, if I can. But in the meantime I seem to have accumulated a few odd shreds of influence. It's a very minor kind of power, and doubtless more than I deserve; but power without responsibility is a monstrous thing.

In writing HACKER CRACKDOWN, I tried to describe the truth as other people saw it. I see it too, with my own eyes, but I can't yet pretend to understand what I'm seeing. The best I can do, it seems to me, is to try to approach the situation as an open-minded person of goodwill. I therefore offer the following final set of principles, which I hope will guide me in the days to come.

I'll listen to anybody, and I'll try to imagine myself in their situation.

I'll assume goodwill on the part of others until they fully earn my distrust.

I won't cherish grudges. I'll forgive those who change their minds and actions, just as I reserve the right to change my own mind and actions.

I'll look hard for the disadvantages to others, in the things that give me advantage. I won't assume that the way I live today is the natural order of the universe, just because I happen to be benefiting from it at the moment.

And while I don't plan to give up making money from my ethically dubious cyberpunk activities, I hope to temper my impropriety by giving more work away for no money at all.

Sixth INTERZONE column

"Cyberpunk in the Nineties"

This is my sixth and last column for INTERZONE, as I promised a

year ago when I began this series. I've enjoyed doing these pieces,

and would like to thank the energetic editor and indulgent readership

of INTERZONE. A special thanks to those who contributed terms and

comments for "The SF Workshop Lexicon," which remains an ongoing

project, and will show up again someday, probably in embarrassing

company. Those readers who had enough smarts and gumption to buy

the SIGNAL catalog (see column one in issue 37) have been well

rewarded, I trust.

In this final column, I would like to talk frankly about

"cyberpunk" -- not cyberpunk the synonym for computer criminal, but

Cyberpunk the literary movement.

Years ago, in the chilly winter of 1985 -- (we used to have chilly

winters then, back before the ozone gave out) -- an article appeared in

INTERZONE #14, called "The New Science Fiction." "The New Science

Fiction" was the first manifesto of "the cyberpunk movement." The

article was an analysis of the SF genre's history and principles; the

word "cyberpunk" did not appear in it at all. "The New SF" appeared

pseudonymously in a British SF quarterly whose tiny circulation did

not restrain its vaulting ambitions. To the joy of dozens, it had

recently graduated to full-colour covers. A lovely spot for a

manifesto.

Let's compare this humble advent to a recent article,

"Confessions of an Ex-Cyberpunk," by my friend and colleague Mr.

Lewis Shiner. This piece is yet another honest attempt by Someone

Who Was There to declare cyberpunk dead. Shiner's article appeared

on Jan 7, 1991, on the editorial page of THE NEW YORK TIMES.

Again an apt venue, one supposes, but illustrative of the

paradoxical hazards of "movements." An avalanche, started with a

shout and a shove somewhere up at the timberline, cannot be stopped

again with one's hands, even with an audience of millions of mundanes.

"Cyberpunk," before it acquired its handy label and its sinister

rep, was a generous, open-handed effort, very street-level and

anarchic, with a do-it-yourself attitude, an ethos it shared with garage-

band 70s punk music. Cyberpunk's one-page propaganda organ,

"CHEAP TRUTH," was given away free to anyone who asked for it.

CHEAP TRUTH was never copyrighted; photocopy "piracy" was actively

encouraged.

CHEAP TRUTH's contributors were always pseudonymous, an

earnest egalitarian attempt to avoid any personality-cultism or

cliquishness. CHEAP TRUTH deliberately mocked established "genre

gurus" and urged every soul within earshot to boot up a word-

processor and join the cause. CT's ingenuous standards for SF were

simply that SF should be "good" and "alive" and "readable." But when

put in practice, these supposed qualities were something else again.

The fog of battle obscured a great deal at the time.

CHEAP TRUTH had rather mixed success. We had a laudable

grasp of the basics: for instance, that SF writers ought to *work a lot

harder* and *knock it off with the worn-out bullshit* if they expected

to earn any real respect. Most folks agreed that this was a fine

prescription -- for somebody else. In SF it has always been fatally

easy to shrug off such truisms to dwell on the trivialities of SF as a

career: the daily grind in the Old Baloney Factory. Snappy

cyberpunk slogans like "imaginative concentration" and "technological

literacy" were met with much the same indifference. Alas, if

preaching gospel was enough to reform the genre, the earth would

surely have quaked when Aldiss and Knight espoused much the same

ideals in 1956.

SF's struggle for quality was indeed old news, except to CHEAP

TRUTH, whose writers were simply too young and parochial to have

caught on. But the cultural terrain had changed, and that made a lot

of difference. Honest "technological literacy" in the 50s was

exhilarating but disquieting; in the high-tech 80s, "technological

literacy" meant outright *ecstasy and dread.* Cyberpunk was *weird,*

which obscured the basic simplicity of its theory-and-practice.

When "cyberpunk writers" began to attract real notoriety, the

idea of cyberpunk principles, open and available to anyone, was lost

in the murk. Cyberpunk was an instant cult, probably the very

definition of a cult in modern SF. Even generational contemporaries,

who sympathized with much CHEAP TRUTH rhetoric, came to distrust

the cult itself -- simply because the Cyberpunks had become "genre

gurus" themselves.

It takes shockingly little, really, to become a genre guru.

Basically, it's as easy as turning over in bed. It's questionable whether

one gains much by the effort. Preach your fool head off, but who

trusts gurus, anyway? CHEAP TRUTH never did! All in all, it took

about three years to thoroughly hoist the Movement on its own petard.

CHEAP TRUTH was killed off in 1986.

I would like to think that this should be a lesson to somebody

out there. I very much doubt it, though.

Rucker, Shiner, Sterling, Shirley and Gibson -- the Movement's

most fearsome "gurus," ear-tagged yet again in Shiner's worthy article,

in front of the N. Y. TIMES' bemused millions -- are "cyberpunks" for

good and all. Other cyberpunks, such as the six other worthy

contributors to MIRRORSHADES: THE CYBERPUNK ANTHOLOGY, may be

able to come to their own terms with the beast, more or less. But the

dreaded C-Word will surely be chiselled into our five tombstones.

Public disavowals are useless, very likely *worse* than useless. Even

the most sweeping changes in our philosophy of writing, perhaps weird

mid-life-crisis conversions to Islam or Santeria, could not erase the

tattoo.

Seen from this perspective, "cyberpunk" simply means "anything

cyberpunks write." And that covers a lot of ground. I've always had a

weakness for historical fantasies, myself, and Shiner writes

mainstream novels and mysteries. Shirley writes horror. Rucker was

last seen somewhere inside the Hollow Earth. William Gibson,

shockingly, has been known to write funny short stories. All this

means nothing. "Cyberpunk" will not be conclusively "dead" until the

last of us is shovelled under. Demographics suggest that this is likely

to take some time.

CHEAP TRUTH's promulgation of open principles was of dubious

use -- even when backed by the might of INTERZONE. Perhaps

"principles" were simply too foggy and abstract, too arcane and

unapproachable, as opposed to easy C-word recognition symbols, like

cranial jacks, black leather jeans and amphetamine addiction. But

even now, it may not be too late to offer a concrete example of the

genuine cyberpunk *weltanschauung* at work.

Consider FRANKENSTEIN by Mary Shelley, a wellspring of

science fiction as a genre. In a cyberpunk analysis, FRANKENSTEIN is

"Humanist" SF. FRANKENSTEIN promotes the romantic dictum that

there are Some Things Man Was Not Meant to Know. There are no

mere physical mechanisms for this higher moral law -- its workings

transcend mortal understanding, it is something akin to divine will.

Hubris must meet nemesis; this is simply the nature of our universe.

Dr. Frankenstein commits a spine-chilling transgression, an affront

against the human soul, and with memorable poetic justice, he is direly

punished by his own creation, the Monster.

Now imagine a cyberpunk version of FRANKENSTEIN. In this

imaginary work, the Monster would likely be the well-funded R&D

team-project of some global corporation. The Monster might well

wreak bloody havoc, most likely on random passers-by. But having

done so, he would never have been allowed to wander to the North

Pole, uttering Byronic profundities. The Monsters of cyberpunk never

vanish so conveniently. They are already loose on the streets. They

are next to us. Quite likely *WE* are them. The Monster would have

been copyrighted through the new genetics laws, and manufactured

worldwide in many thousands. Soon the Monsters would all have

lousy night jobs mopping up at fast-food restaurants.

In the moral universe of cyberpunk, we *already* know Things

We Were Not Meant To Know. Our *grandparents* knew these things;

Robert Oppenheimer at Los Alamos became the Destroyer of Worlds

long before we arrived on the scene. In cyberpunk, the idea that there

are sacred limits to human action is simply a delusion. There are no

sacred boundaries to protect us from ourselves.

Our place in the universe is basically accidental. We are weak

and mortal, but it's not the holy will of the gods; it's just the way

things happen to be at the moment. And this is radically

unsatisfactory; not because we direly miss the shelter of the Deity, but

because, looked at objectively, the vale of human suffering is basically

a dump. The human condition can be changed, and it will be changed,

and is changing; the only real questions are how, and to what end.

This "anti-humanist" conviction in cyberpunk is not simply

some literary stunt to outrage the bourgeoisie; this is an objective fact

about culture in the late twentieth century. Cyberpunk didn't invent

this situation; it just reflects it.

Today it is quite common to see tenured scientists espousing

horrifically radical ideas: nanotechnology, artificial intelligence, cryonic

suspension of the dead, downloading the contents of the brain...

Hubristic mania is loose in the halls of academe, where everybody and

his sister seems to have a plan to set the cosmos on its ear. Stern

moral indignation at the prospect is the weakest of reeds; if there were

a devilish drug around that could extend our sacred God-given

lifespans by a hundred years, the Pope would be the first in line.

We already live, every day, through the means of outrageous

actions with unforeseeable consequences to the whole world. The

world population has doubled since the 1950s; the natural world, which

used to surround humankind with its vast Gothic silences, is now

something that has to be catalogued and cherished.

We're just not much good any more at refusing things because

they don't seem proper. As a society, we can't even manage to turn

our backs on abysmal threats like heroin and the hydrogen bomb. As

a culture, we love to play with fire, just for the sake of its allure; and if

there happens to be money in it, there are no holds barred.

Jumpstarting Mary Shelley's corpses is the least of our problems;

something much along that line happens in intensive-care wards every

day.

Human thought itself, in its unprecedented guise as computer

software, is becoming something to be crystallized, replicated, made a

commodity. Even the insides of our brains aren't sacred; on the

contrary, the human brain is a primary target of increasingly

successful research, ontological and spiritual questions be damned.

The idea that, under these circumstances, Human Nature is somehow

destined to prevail against the Great Machine, is simply silly; it seems

weirdly beside the point. It's as if a rodent philosopher in a lab-cage,

about to have his brain bored and wired for the edification of Big

Science, were to piously declare that in the end Rodent Nature must

triumph.

Anything that can be done to a rat can be done to a human

being. And we can do most anything to rats. This is a hard thing to

think about, but it's the truth. It won't go away because we cover our

eyes.

*This* is cyberpunk.

This explains, I hope, why standard sci-fi adventure yarns

tarted up in black leather fail to qualify. Lewis Shiner has simply lost

patience with writers who offer dopey shoot-em-up rack-fodder in sci-

fiberpunk drag. "Other writers had turned the form into formula," he

complains in THE NEW YORK TIMES, "the same dead-end thrills we get

from video games and blockbuster movies." Shiner's early convictions

have scarcely budged so much as a micron -- but the stuff most folks

call "cyberpunk" no longer reflects his ideals.

In my opinion the derivative piffle is a minor issue. So is the

word "cyberpunk." I'm pleased to see that it's increasingly difficult to

write a dirt-stupid book, put the word "cyberpunk" on it, and expect it

to sell. With the c-word discredited through half-witted overkill,

anyone called a "cyberpunk" will have to pull their own weight now.

But for those willing to pull weight, it's no big deal. Labels cannot

defend their own integrity; but writers can, and good ones do.

There is another general point to make, which I believe is

important to any real understanding of the Movement. Cyberpunk,

like New Wave before it, was a voice of Bohemia. It came from the

underground, from the outside, from the young and energetic and

disenfranchised. It came from people who didn't know their own

limits, and refused the limits offered them by mere custom and habit.

Not much SF is really Bohemian, and most of Bohemia has little

to do with SF, but there was, and is, much to be gained from the

meeting of the two. SF as a genre, even at its most "conventional," is

very much a cultural underground. SF's influence on the greater

society outside, like the dubious influence of beatniks, hippies, and

punks, is carefully limited. Science fiction, like Bohemia, is a useful

place to put a wide variety of people, where their ideas and actions can

be examined, without the risk of putting those ideas and actions

directly into wider practice. Bohemia has served this function since its

start in the early Industrial Revolution, and the wisdom of this scheme

should be admitted. Most weird ideas are simply weird ideas, and

Bohemia in power has rarely been a pretty sight. Jules Verne as a

writer of adventure novels is one thing; President Verne, General

Verne, or Pope Jules is a much dicier proposition.

Cyberpunk was a voice of Bohemia -- Bohemia in the 1980s.

The technosocial changes loose in contemporary society were bound to

affect its counterculture. Cyberpunk was the literary incarnation of

this phenomenon. And the phenomenon is still growing.

Communication technologies in particular are becoming much less

respectable, much more volatile, and increasingly in the hands of

people you might not introduce to your grandma.

But today, it must be admitted that the cyberpunks -- SF

veterans in or near their forties, patiently refining their craft and

cashing their royalty checks -- are no longer a Bohemian underground.

This too is an old story in Bohemia; it is the standard punishment for

success. An underground in the light of day is a contradiction in terms.

Respectability does not merely beckon; it actively envelops. And in

this sense, "cyberpunk" is even deader than Shiner admits.

Time and chance have been kind to the cyberpunks, but they

themselves have changed with the years. A core doctrine in

Movement theory was "visionary intensity." But it has been some time

since any cyberpunk wrote a truly mind-blowing story, something that

writhed, heaved, howled, hallucinated and shattered the furniture. In

the latest work of these veterans, we see tighter plotting, better

characters, finer prose, much "serious and insightful futurism." But we

also see much less in the way of spontaneous back-flips and crazed

dancing on tables. The settings come closer and closer to the present

day, losing the baroque curlicues of unleashed fantasy: the issues at

stake become something horribly akin to the standard concerns of

middle-aged responsibility. And this may be splendid, but it is not

war. This vital aspect of science fiction has been abdicated, and is open

for the taking. Cyberpunk is simply not there any more.

But science fiction is still alive, still open and developing. And

Bohemia will not go away. Bohemia, like SF, is not a passing fad,

although it breeds fads; like SF, Bohemia is old; as old as industrial

society, of which both SF and Bohemia are integral parts. Cybernetic

Bohemia is not some bizarre advent; when cybernetic Bohemians

proclaim that what they are doing is completely new, they innocently

delude themselves, merely because they are young.

Cyberpunks write about the ecstasy and hazard of flying

cyberspace and Verne wrote about the ecstasy and hazard of FIVE

WEEKS IN A BALLOON, but if you take even half a step outside the

mire of historical circumstance, you can see that these both serve the

same basic social function.

Of course, Verne, a great master, is still in print, while the

verdict is out on cyberpunk. And, of course, Verne got the future all

wrong, except for a few lucky guesses; but so will cyberpunk. Jules

Verne ended up as some kind of beloved rich crank celebrity in the

city government of Amiens. Worse things have happened, I suppose.

As cyberpunk's practitioners bask in unsought legitimacy, it

becomes harder to pretend that cyberpunk was something freakish or

aberrant; it's easier today to see where it came from, and how it got

where it is. Still, it might be thought that allegiance to Jules Verne is a

bizarre declaration for a cyberpunk. It might, for instance, be argued

that Jules Verne was a nice guy who loved his Mom, while the brutish

antihuman cyberpunks advocate drugs, anarchy, brain-plugs and the

destruction of everything sacred.

This objection is bogus. Captain Nemo was a technical anarcho-

terrorist. Jules Verne passed out radical pamphlets in 1848 when the

streets of Paris were strewn with dead. And yet Jules Verne is

considered a Victorian optimist (those who have read him must doubt

this) while the cyberpunks are often declared nihilists (by those who

pick and choose in the canon). Why? It is the tenor of the times, I

think.

There is much bleakness in cyberpunk, but it is an honest

bleakness. There is ecstasy, but there is also dread. As I sit here, one

ear tuned to TV news, I hear the US Senate debating war. And behind

those words are cities aflame and crowds lacerated with airborne

shrapnel, soldiers convulsed with mustard-gas and Sarin.

This generation will have to watch a century of manic waste and

carelessness hit home, and we know it. We will be lucky not to suffer

greatly from ecological blunders already committed; we will be

extremely lucky not to see tens of millions of fellow human beings

dying horribly on television as we Westerners sit in our living rooms

munching our cheeseburgers. And this is not some wacky Bohemian

jeremiad; this is an objective statement about the condition of the

world, easily confirmed by anyone with the courage to look at the

facts.

These prospects must and should affect our thoughts and

expressions and, yes, our actions; and if writers close their eyes to this,

they may be entertainers, but they are not fit to call themselves

science fiction writers. And cyberpunks are science fiction writers --

not a "subgenre" or a "cult," but the thing itself. We deserve this title

and we should not be deprived of it.

But the Nineties will not belong to the cyberpunks. We will be

there working, but we are not the Movement, we are not even "us" any

more. The Nineties will belong to the coming generation, those who

grew up in the Eighties. All power, and the best of luck to the Nineties

underground. I don't know you, but I do know you're out there. Get

on your feet, seize the day. Dance on tables. Make it happen, it can be

done. I know. I've been there.

GURPS' LABOUR LOST

Some months ago, I wrote an article about the raid on Steve

Jackson Games, which appeared in my "Comment" column in the

British science fiction monthly, INTERZONE (#44, Feb 1991).

This updated version, specially re-written for dissemination by

EFF, reflects the somewhat greater knowledge I've gained to

date, in the course of research on an upcoming nonfiction book,

THE HACKER CRACKDOWN: Law and Disorder on the Electronic

Frontier.

The bizarre events suffered by Mr. Jackson and his co-workers,

in my own home town of Austin, Texas, were directly responsible

for my decision to put science fiction aside and to tackle the

purportedly real world of computer crime and electronic

free-expression.

The national crackdown on computer hackers in 1990 was the

largest and best-coordinated attack on computer mischief in

American history. There was Arizona's "Operation Sundevil,"

the sweeping May 8 nationwide raid against outlaw bulletin

boards. The BellSouth E911 case (of which the Jackson raid was

a small and particularly egregious part) was coordinated out of

Chicago. The New York State Police were also very active in the 1990 crackdown.

All this vigorous law enforcement activity meant very little to

the narrow and intensely clannish world of science fiction.

All we knew -- and this misperception persisted, uncorrected,

for months -- was that Mr. Jackson had been raided because of

his intention to publish a gaming book about "cyberpunk"

science fiction. The Jackson raid received extensive coverage

in science fiction news magazines (yes, we have these) and

became notorious in the world of SF as "the Cyberpunk Bust."

My INTERZONE article attempted to make the Jackson case

intelligible to the British SF audience.

What possible reason could lead an American federal law

enforcement agency to raid the headquarters of a science-fiction

gaming company? Why did armed teams of city police, corporate

security men, and federal agents roust two Texan

computer-hackers from their beds at dawn, and then deliberately

confiscate thousands of dollars' worth of computer equipment,

including the hackers' common household telephones? Why was an

unpublished book called G.U.R.P.S. Cyberpunk seized by the US

Secret Service and declared "a manual for computer crime?"

These weird events were not parodies or fantasies; no, this was

real.

The first order of business in untangling this bizarre drama is

to understand the players -- who come in entire teams.

Dramatis Personae

PLAYER ONE: The Law Enforcement Agencies.

America's defense against the threat of computer crime is a

confusing hodgepodge of state, municipal, and federal agencies.

Ranked first, by size and power, are the Central Intelligence

Agency (CIA), the National Security Agency (NSA), and the

Federal Bureau of Investigation (FBI), large, potent and

secretive organizations who, luckily, play almost no role in the

Jackson story.

The second rank of such agencies includes the Internal Revenue

Service (IRS), the National Aeronautics and Space Administration

(NASA), the Justice Department, the Department of Labor, and

various branches of the defense establishment, especially the

Air Force Office of Special Investigations (AFOSI). Premier

among these groups, however, is the highly-motivated US Secret

Service (USSS), best-known to Britons as the suited,

mirrorshades-toting, heavily-armed bodyguards of the President

of the United States.

Guarding high-ranking federal officials and foreign dignitaries

is a hazardous, challenging and eminently necessary task, which

has won USSS a high public profile. But Abraham Lincoln created

this agency, one of the oldest in federal law enforcement, to foil

counterfeiting. Due to the historical tribulations of the

Treasury Department (of which USSS is a part), the Secret

Service also guards historical documents, analyzes forgeries,

combats wire fraud, and battles "computer fraud and abuse."

These may seem unrelated assignments, but the Secret Service is

fiercely aware of its duties. It is also jealous of its

bureaucratic turf, especially in computer-crime, where it

formally shares jurisdiction with its traditional rival, the

johnny-come-lately FBI.

As the use of plastic money has spread, and their

long-established role as protectors of the currency has faded in

importance, the Secret Service has moved aggressively into the

realm of electronic crime. Unlike the lordly NSA, CIA, and FBI,

which generally can't be bothered with domestic computer

mischief, the Secret Service is noted for its street-level

enthusiasm.

The third rank of law enforcement consists of the local "dedicated

computer crime units." There are very few such groups,

pitifully undermanned. They struggle hard for their funding and

the vital light of publicity. It's difficult to make

white-collar computer crimes seem pressing, to an American

public that lives in terror of armed and violent street-crime.

These local groups are small -- often, one or two officers,

computer hobbyists, who have drifted into electronic

crimebusting because they alone are game to devote time and

effort to bringing law to the electronic frontier. California's

Silicon Valley has three computer-crime units. There are

others in Florida, Illinois, Ohio, Maryland, Texas, Colorado,

and a formerly very active one in Arizona -- all told, though,

perhaps only fifty people nationwide.

The locals do have one great advantage, though. They all know

one another. Though scattered across the country, they are

linked by both public-sector and private-sector professional

societies, and have a commendable subcultural esprit-de-corps.

And in the well-manned Secret Service, they have willing

national-level assistance.

PLAYER TWO: The Telephone Companies.

In the early 80s, after years of bitter federal court battle,

America's telephone monopoly was pulverized. "Ma Bell," the

national phone company, became AT&T, AT&T Technologies, and the

regional "Baby Bells," all purportedly independent companies,

who compete with new communications companies and other

long-distance providers. As a class, however, they are all

sorely harassed by fraudsters, phone phreaks, and computer

hackers, and they all maintain computer-security experts. In a

lot of cases these "corporate security divisions" consist of

just one or two guys, who drifted into the work from backgrounds

in traditional security or law enforcement. But, linked by

specialized security trade journals and private sector trade

groups, they all know one another.

PLAYER THREE: The Computer Hackers.

The American "hacker" elite consists of about a hundred people,

who all know one another. These are the people who know enough

about computer intrusion to baffle corporate security and alarm

police (and who, furthermore, are willing to put their intrusion

skills into actual practice). The somewhat older

subculture of "phone-phreaking," once native only to the phone

system, has blended into hackerdom as phones have become digital

and computers have been netted-together by telephones. "Phone

phreaks," always tarred with the stigma of rip-off artists, are

nowadays increasingly hacking PBX systems and cellular phones.

These practices, unlike computer-intrusion, offer direct and

easy profit to fraudsters.

There are legions of minor "hackers," such as the "kodez kidz,"

who purloin telephone access codes to make free (i.e., stolen)

phone calls. Code theft can be done with home computers, and

almost looks like real "hacking," though "kodez kidz" are

regarded with lordly contempt by the elite. "Warez d00dz," who

copy and pirate computer games and software, are a thriving

subspecies of "hacker," but they played no real role in the

crackdown of 1990 or the Jackson case. As for the dire minority

who create computer viruses, the less said the better.

The princes of hackerdom skate the phone-lines, and computer

networks, as a lifestyle. They hang out in loose,

modem-connected gangs like the "Legion of Doom" and the "Masters

of Destruction." The craft of hacking is taught through

"bulletin board systems," personal computers that carry

electronic mail and can be accessed by phone. Hacker bulletin

boards generally sport grim, scary, sci-fi heavy metal names

like BLACK ICE -- PRIVATE or SPEED DEMON ELITE. Hackers

themselves often adopt romantic and highly suspicious tough-guy

monickers like "Necron 99," "Prime Suspect," "Erik Bloodaxe,"

"Malefactor" and "Phase Jitter." This can be seen as a kind of

cyberpunk folk-poetry -- after all, baseball players also have

colorful nicknames. But so do the Mafia and the Medellin

Cartel.

PLAYER FOUR: The Simulation Gamers.

Wargames and role-playing adventures are an old and honored

pastime, much favored by professional military strategists and

H.G. Wells, and now played by hundreds of thousands of

enthusiasts throughout North America, Europe and Japan. In

today's market, many simulation games are computerized, making

simulation gaming a favorite pastime of hackers, who dote on

arcane intellectual challenges and the thrill of doing simulated

mischief.

Modern simulation games frequently have a heavily

science-fictional cast. Over the past decade or so, fueled by

very respectable royalties, the world of simulation gaming has

increasingly permeated the world of science-fiction publishing.

TSR, Inc., proprietors of the best-known role-playing game,

"Dungeons and Dragons," own the venerable science-fiction

magazine "Amazing." Gaming-books, once restricted to hobby

outlets, now commonly appear in chain-stores like B. Dalton's

and Waldenbooks, and sell vigorously.

Steve Jackson Games, Inc., of Austin, Texas, is a games company

of the middle rank. In early 1990, it employed fifteen people.

In 1989, SJG grossed about half a million dollars. SJG's Austin

headquarters is a modest two-story brick office-suite, cluttered

with phones, photocopiers, fax machines and computers. A

publisher's digs, it bustles with semi-organized activity and is

littered with glossy promotional brochures and dog-eared SF

novels. Attached to the offices is a large tin-roofed warehouse

piled twenty feet high with cardboard boxes of games and books.

This building was the site of the "Cyberpunk Bust."

A look at the company's wares, neatly stacked on endless rows of

cheap shelving, quickly shows SJG's long involvement with the

Science Fiction community. SJG's main product, the Generic

Universal Role-Playing System or G.U.R.P.S., features licensed

and adapted works from many genre writers. There is GURPS Witch

World, GURPS Conan, GURPS Riverworld, GURPS Horseclans, many

names eminently familiar to SF fans. (GURPS Difference Engine

is currently in the works.) GURPS Cyberpunk, however, was to

be another story entirely.

PLAYER FIVE: The Science Fiction Writers.

The "cyberpunk" SF writers are a small group of mostly

college-educated white litterateurs, without conspicuous

criminal records, scattered through the US and Canada. Only

one, Rudy Rucker, a professor of computer science in Silicon

Valley, would rank with even the humblest computer hacker.

However, these writers all own computers and take an intense,

public, and somewhat morbid interest in the social ramifications

of the information industry. Despite their small numbers, they

all know one another, and are linked by antique print-medium

publications with unlikely names like SCIENCE FICTION EYE, ISAAC

ASIMOV'S SCIENCE FICTION MAGAZINE, OMNI and INTERZONE.

PLAYER SIX: The Civil Libertarians.

This small but rapidly growing group consists of heavily

politicized computer enthusiasts and heavily cyberneticized

political activists: a mix of wealthy high-tech entrepreneurs,

veteran West Coast troublemaking hippies, touchy journalists,

and toney East Coast civil rights lawyers. They are all getting

to know one another.

We now return to our story. By 1988, law enforcement

officials, led by contrite teenage informants, had thoroughly

permeated the world of underground bulletin boards, and were

alertly prowling the nets compiling dossiers on wrongdoers.

While most bulletin board systems are utterly harmless, some few

had matured into alarming reservoirs of forbidden knowledge.

One such was BLACK ICE -- PRIVATE, located "somewhere in the 607

area code," frequented by members of the "Legion of Doom" and

notorious even among hackers for the violence of its rhetoric,

which discussed sabotage of phone-lines, drug-manufacturing

techniques, and the assembly of home-made bombs, as well as a

plethora of rules-of-thumb for penetrating computer security.

Of course, the mere discussion of these notions is not illegal

-- many cyberpunk SF stories positively dote on such ideas, as

do hundreds of spy epics, techno-thrillers and adventure novels.

It was no coincidence that "ICE," or "Intrusion Countermeasures

Electronics," was a term invented by cyberpunk writer Tom

Maddox, and "BLACK ICE," or a computer-defense that fries the

brain of the unwary trespasser, was a coinage of William Gibson.

A reference manual from the US National Institute of Justice,

"Dedicated Computer Crime Units" by J. Thomas McEwen, suggests

that federal attitudes toward bulletin-board systems are

ambivalent at best:

"There are several examples of how bulletin boards have been

used in support of criminal activities.... (B)ulletin boards

were used to relay illegally obtained access codes into computer

service companies. Pedophiles have been known to leave

suggestive messages on bulletin boards, and other sexually

oriented messages have been found on bulletin boards. Members

of cults and sects have also communicated through bulletin

boards. While the storing of information on bulletin boards may

not be illegal, the use of bulletin boards has certainly

advanced many illegal activities."

Here is a troubling concept indeed: invisible electronic

pornography, to be printed out at home and read by sects and

cults. It makes a mockery of the traditional law-enforcement

techniques concerning the publication and prosecution of smut.

In fact, the prospect of large numbers of antisocial

conspirators, congregating in the limbo of cyberspace without

official oversight of any kind, is enough to trouble the sleep

of anyone charged with maintaining public order.

Even the sternest free-speech advocate will likely do some

headscratching at the prospect of digitized "anarchy files"

teaching lock-picking, pipe-bombing, martial arts techniques,

and highly unorthodox uses for shotgun shells, especially when

these neat-o temptations are distributed freely to any teen (or

pre-teen) with a modem.

These may be largely conjectural problems at present, but the

use of bulletin boards to foment hacker mischief is real. Worse

yet, the bulletin boards themselves are linked, sharing their

audience and spreading the wicked knowledge of security flaws in

the phone network, and in a wide variety of academic, corporate

and governmental computer systems.

This strength of the hackers is also a weakness, however. If

the boards are monitored by alert informants and/or officers,

the whole wicked tangle can be seized all along its extended

electronic vine, rather like harvesting pumpkins.

The war against hackers, including the "Cyberpunk Bust," was

primarily a war against hacker bulletin boards. It was, first

and foremost, an attack against the enemy's means of

information.

This basic strategic insight supplied the tactics for the

crackdown of 1990. The variant groups in the national

subculture of cyber-law would be kept apprised, persuaded to

action, and diplomatically marshalled into effective strike

position. Then, in a burst of energy and a glorious blaze of

publicity, the whole nest of scofflaws would be wrenched up root

and branch. Hopefully, the damage would be permanent; if not,

the swarming wretches would at least keep their heads down.

"Operation Sundevil," the Phoenix-inspired crackdown of May

8, 1990, concentrated on telephone code-fraud and credit-card

abuse, and followed this seizure plan with some success. Boards

went down all over America, terrifying the underground and

swiftly depriving them of at least some of their criminal

instruments. It also saddled analysts with some 24,000 floppy

disks, and confronted harried Justice Department prosecutors

with the daunting challenge of a gigantic nationwide hacker

show-trial involving highly technical issues in dozens of

jurisdictions. As of July 1991, it must be questioned whether

the climate is right for an action of this sort, especially

since several of the most promising prosecutees have already

been jailed on other charges.

"Sundevil" aroused many dicey legal and constitutional

questions, but at least its organizers were spared the spectacle

of seizure victims loudly proclaiming their innocence -- (if one

excepts Bruce Esquibel, sysop of "Dr. Ripco," an anarchist board

in Chicago).

The activities of March 1, 1990, however, including the Jackson

case, were the inspiration of the Chicago-based Computer Fraud

and Abuse Task Force. At telco urging, the Chicago group were

pursuing the purportedly vital "E911 document" with headlong

energy. As legal evidence, this proprietary BellSouth

document was to prove a very weak reed in the Craig Neidorf

trial, which ended in a humiliating dismissal and a triumph for

Neidorf. As of March 1990, however, this purloined data-file

seemed a red-hot chunk of contraband, and the decision was made

to track it down wherever it might have gone, and to shut down

any board that had touched it -- or even come close to it.

In the meantime, however -- early 1990 -- Mr. Loyd Blankenship,

an employee of Steve Jackson Games, an accomplished hacker, and

a sometime member and file-writer for the Legion of Doom, was

contemplating a "cyberpunk" simulation-module for the

flourishing GURPS gaming-system.

The time seemed ripe for such a product, which had already been

proven in the marketplace. The first games-company out of the

gate, with a product boldly called "Cyberpunk" in defiance of

possible infringement-of-copyright suits, had been an upstart

group called R. Talsorian. Talsorian's "Cyberpunk" was a fairly

decent game, but the mechanics of the simulation system sucked,

and the nerds who wrote the manual were the kind of half-hip

twits who wrote their own fake rock lyrics and, worse yet,

published them. The game sold like crazy, though.

The next "cyberpunk" game had been the even more successful

"Shadowrun" by FASA Corporation. The mechanics of this game

were fine, but the scenario was rendered moronic by lame

fantasy elements like orcs, dwarves, trolls, magicians, and

dragons -- all highly ideologically-incorrect, according to the

hard-edged, high-tech standards of cyberpunk science fiction.

No true cyberpunk fan could play this game without vomiting,

despite FASA's nifty T-shirts and street-samurai lead figurines.

Lured by the scent of money, other game companies were champing

at the bit. Blankenship reasoned that the time had come for a

real "Cyberpunk" gaming-book -- one that the princes of

computer-mischief in the Legion of Doom could play without

laughing themselves sick. This book, GURPS Cyberpunk, would

reek of culturally on-line authenticity.

Hot discussion soon raged on the Steve Jackson Games electronic

bulletin board, the "Illuminati BBS." This board was named

after a bestselling SJG card-game, involving antisocial sects

and cults who war covertly for the domination of the world.

Gamers and hackers alike loved this board, with its meticulously

detailed discussions of pastimes like SJG's "Car Wars," in which

souped-up armored hot-rods with rocket-launchers and heavy

machine-guns do battle on the American highways of the future.

While working, with considerable creative success, for SJG,

Blankenship himself was running his own computer bulletin board,

"The Phoenix Project," from his house. It had been ages --

months, anyway -- since Blankenship, an increasingly sedate

husband and author, had last entered a public phone-booth

without a supply of pocket-change. However, his intellectual

interest in computer-security remained intense. He was pleased

to notice the presence on "Phoenix" of Henry Kluepfel, a

phone-company security professional for Bellcore. Such

contacts were risky for telco employees; at least one such

gentleman who reached out to the hacker underground had been

accused of divided loyalties and summarily fired. Kluepfel, on

the other hand, was bravely engaging in friendly banter with

heavy-dude hackers and eager telephone-wannabes. Blankenship

did nothing to spook him away, and Kluepfel, for his part,

passed dark warnings about "Phoenix Project" to the Chicago

group. "Phoenix Project" glowed with the radioactive presence

of the E911 document, passed there in a copy of Craig Neidorf's

electronic hacker fan-magazine, Phrack.

"Illuminati" was prominently mentioned on the Phoenix Project.

Phoenix users were urged to visit Illuminati, to discuss the

upcoming "cyberpunk" game and possibly lend their expertise.

It was also frankly hoped that they would spend some money on

SJG games.

Illuminati and Phoenix had become two ripe pumpkins on the

criminal vine.

Hacker busts were nothing new. They had always been somewhat

problematic for the authorities. The offenders were generally

high-IQ white juveniles with no criminal record. Public

sympathy for the phone companies was limited at best. Trials

often ended in puzzled dismissals or a slap on the wrist. But

the harassment suffered by "the business community" -- always

the best friend of law enforcement -- was real, and highly

annoying both financially and in its sheer irritation to the

target corporation.

Through long experience, law enforcement had come up with an

unorthodox but workable tactic. This was to avoid any trial at

all, or even an arrest. Instead, somber teams of grim police

would swoop upon the teenage suspect's home and box up his

computer as "evidence." If he was a good boy, and promised

contritely to stay out of trouble forthwith, the highly

expensive equipment might be returned to him in short order. If

he was a hard-case, though, too bad. His toys could stay

boxed-up and locked away for a couple of years.

The busts in Austin were an intensification of this

tried-and-true technique. There were adults involved in this

case, though, reeking of a hardened bad-attitude. The supposed

threat to the 911 system, apparently posed by the E911 document,

had nerved law enforcement to extraordinary effort. The 911

system is, of course, the emergency dialling system used by the

police themselves. Any threat to it was a direct and insolent

hacker menace to the electronic home-turf of American law

enforcement.

Had Steve Jackson been arrested and directly accused of a plot

to destroy the 911 system, the resultant embarrassment would

likely have been sharp, but brief. The Chicago group, instead,

chose total operational security. They may have suspected that

their search for E911, once publicized, would cause that

"dangerous" document to spread like wildfire throughout the

underground. Instead, they allowed the misapprehension to

spread that they had raided Steve Jackson to stop the

publication of a book: GURPS Cyberpunk. This was a grave

public-relations blunder which caused the darkest fears and

suspicions to spread -- not in the hacker underground, but

among the general public.

On March 1, 1990, 21-year-old hacker Chris Goggans (aka "Erik

Bloodaxe") was wakened by a police revolver levelled at his

head. He watched, jittery, as Secret Service agents

appropriated his 300 baud terminal and, rifling his files,

discovered his treasured source-code for the notorious Internet

Worm. Goggans, a co-sysop of "Phoenix Project" and a wily

operator, had suspected that something of the like might be

coming. All his best equipment had been hidden away elsewhere.

They took his phone, though, and considered hauling away his

hefty arcade-style Pac-Man game, before deciding that it was

simply too heavy. Goggans was not arrested. To date, he has

never been charged with a crime. The police still have what

they took, though.

Blankenship was less wary. He had shut down "Phoenix" as rumors

reached him of a crackdown coming. Still, a dawn raid rousted

him and his wife from bed in their underwear, and six Secret

Service agents, accompanied by a bemused Austin cop and a

corporate security agent from Bellcore, made a rich haul. Off

went the works, into the agents' white Chevrolet minivan: an

IBM PC-AT clone with 4 meg of RAM and a 120-meg hard disk; a

Hewlett-Packard LaserJet II printer; a completely legitimate and

highly expensive SCO-Xenix 286 operating system; Pagemaker disks

and documentation; the Microsoft Word word-processing program;

Mrs. Blankenship's incomplete academic thesis stored on disk;

and the couple's telephone. All this property remains in police

custody today.

The agents then bundled Blankenship into a car and it was off

to Steve Jackson Games in the bleak light of dawn. The fact

that this was a business headquarters, and not a private

residence, did not deter the agents. It was still early; no one

was at work yet. The agents prepared to break down the door,

until Blankenship offered his key.

The exact details of the next events are unclear. The agents

would not let anyone else into the building. Their search

warrant, when produced, was unsigned. Apparently they

breakfasted from the local "Whataburger," as the litter from

hamburgers was later found inside. They also extensively

sampled a bag of jellybeans kept by an SJG employee. Someone

tore a "Dukakis for President" sticker from the wall.

SJG employees, diligently showing up for the day's work, were

met at the door. They watched in astonishment as agents

wielding crowbars and screwdrivers emerged with captive

machines. The agents wore blue nylon windbreakers with "SECRET

SERVICE" stencilled across the back, with running-shoes and

jeans. Confiscating computers can be heavy physical work.

No one at Steve Jackson Games was arrested. No one was accused

of any crime. There were no charges filed. Everything

appropriated was officially kept as "evidence" of crimes never

specified. Steve Jackson will not face a conspiracy trial over

the contents of his science-fiction gaming book. On the

contrary, the raid's organizers have been accused of grave

misdeeds in a civil suit filed by EFF, and if there is any trial

over GURPS Cyberpunk it seems likely to be theirs.

The day after the raid, Steve Jackson visited the local Secret

Service headquarters with a lawyer in tow. There was trouble

over GURPS Cyberpunk, which had been discovered on the

hard-disk of a seized machine. GURPS Cyberpunk, alleged a

Secret Service agent to astonished businessman Steve Jackson,

was "a manual for computer crime."

"It's science fiction," Jackson said.

"No, this is real." This statement was repeated several times,

by several agents. This is not a fantasy, no, this is real.

Jackson's ominously accurate game had passed from pure, obscure,

small-scale fantasy into the impure, highly publicized,

large-scale fantasy of the hacker crackdown. No mention was

made of the real reason for the search, the E911 document.

Indeed, this fact was not discovered until the Jackson

search-warrant was unsealed by his EFF lawyers, months later.

Jackson was left to believe that his board had been seized

because he intended to publish a science fiction book that law

enforcement considered too dangerous to see print. This

misconception was repeated again and again, for months, to an

ever-widening audience. The effect of this statement on the

science fiction community was, to say the least, striking.

GURPS Cyberpunk, now published and available from Steve Jackson

Games (Box 18957, Austin, Texas 78760), does discuss some of the

commonplaces of computer-hacking, such as searching through

trash for useful clues, or snitching passwords by boldly lying

to gullible users. Reading it won't make you a hacker, any

more than reading Spycatcher will make you an agent of MI5.

Still, this bold insistence by the Secret Service on its

authenticity has made GURPS Cyberpunk the Satanic Verses of

simulation gaming, and has made Steve Jackson the first

martyr-to-the-cause for the computer world's civil libertarians.

From the beginning, Steve Jackson declared that he had committed

no crime, and had nothing to hide. Few believed him, for it

seemed incredible that such a tremendous effort by the

government would be spent on someone entirely innocent.

Surely there were a few stolen long-distance codes in

"Illuminati," a swiped credit-card number or two -- something.

Those who rallied to the defense of Jackson were publicly warned

that they would be caught with egg on their face when the real

truth came out, "later." But "later" came and went. The fact

is that Jackson was innocent of any crime. There was no case

against him; his activities were entirely legal. He had simply

been consorting with the wrong sort of people.

In fact he was the wrong sort of people. His attitude stank.

He showed no contrition; he scoffed at authority; he gave aid

and comfort to the enemy; he was trouble. Steve Jackson comes

from subcultures -- gaming, science fiction -- that have always

smelled to high heaven of troubling weirdness and deep-dyed

unorthodoxy. He was important enough to attract repression,

but not important enough, apparently, to deserve a straight

answer from those who had raided his property and destroyed his

livelihood.

The American law-enforcement community lacks the manpower and

resources to prosecute hackers successfully, one by one, on the

merits of the cases against them. The cyber-police to date

have settled instead for a cheap "hack" of the legal system: a

quasi-legal tactic of seizure and "deterrence." Humiliate and

harass a few ringleaders, the philosophy goes, and the rest will

fall into line. After all, most hackers are just kids. The few

grown-ups among them are sociopathic geeks, not real players in

the political and legal game. And in the final analysis, a

small company like Jackson's lacks the resources to make any

real trouble for the Secret Service.

But Jackson, with his conspiracy-soaked bulletin board and his

seedy SF-fan computer-freak employees, is not "just a kid." He

is a publisher, and he was battered by the police in the full

light of national publicity, under the shocked gaze of

journalists, gaming fans, libertarian activists and millionaire

computer entrepreneurs, many of whom were not "deterred," but

genuinely aghast.

"What," reasons the author, "is to prevent the Secret Service

from carting off my word-processor as 'evidence' of some

non-existent crime?"

"What would I do," thinks the small-press owner, "if someone

took my laser-printer?"

Even the computer magnate in his private jet remembers his

heroic days in Silicon Valley when he was soldering semi-legal

circuit boards in a small garage.

Hence the establishment of the Electronic Frontier Foundation.

The sheriff had shown up in Tombstone to clean up that outlaw

town, but the response of the citizens was swift and

well-financed.

Steve Jackson was provided with a high-powered lawyer

specializing in Constitutional freedom-of-the-press issues.

Faced with this, a markedly un-contrite Secret Service returned

Jackson's machinery, after months of delay -- some of it broken,

with valuable data lost. Jackson sustained many thousands of

dollars in business losses, from failure to meet deadlines and

loss of computer-assisted production.

Half the employees of Steve Jackson Games were sorrowfully

laid-off. Some had been with the company for years -- not

statistics, these people, not "hackers" of any stripe, but

bystanders, citizens, deprived of their livelihoods by the

zealousness of the March 1 seizure. Some have since been

re-hired -- perhaps all will be, if Jackson can pull his company

out of its persistent financial hole. Devastated by the raid,

the company would surely have collapsed in short order -- but

SJG's distributors, touched by the company's plight and feeling

some natural subcultural solidarity, advanced him money to

scrape along.

In retrospect, it is hard to see much good for anyone at all in

the activities of March 1. Perhaps the Jackson case has served

as a warning light for trouble in our legal system; but that's

not much recompense for Jackson himself. His own unsought fame

may be helpful, but it doesn't do much for his unemployed

co-workers. In the meantime, "hackers" have been vilified and

demonized as a national threat. "Cyberpunk," a literary term,

has become a synonym for computer criminal. The cyber-police

have leapt where angels fear to tread. And the phone companies

have badly overstated their case and deeply embarrassed their

protectors.

But sixteen months later, Steve Jackson suspects he may yet pull

through. Illuminati is still on-line. GURPS Cyberpunk, while

it failed to match Satanic Verses, sold fairly briskly. And

SJG headquarters, the site of the raid, will soon be the site of

a Cyberspace Weenie Roast to start an Austin chapter of the

Electronic Frontier Foundation. Bring your own beer.

Game conference speech: "The Wonderful Power of Storytelling"

From the Computer Game Developers Conference, March 1991, San Jose CA

Thank you very much for that introduction. I'd like to thank the

conference committee for their hospitality and kindness -- all

the cola you can drink -- and mind you those were genuine

twinkies too, none of those newfangled "Twinkies Lite" we've

been seeing too much of lately.

So anyway my name is Bruce Sterling and I'm a science fiction

writer from Austin Texas, and I'm here to deliver my speech now,

which I like to call "The Wonderful Power of Storytelling." I

like to call it that, because I plan to make brutal fun of that

whole idea... In fact I plan to flame on just any moment now, I

plan to cut loose, I plan to wound and scald tonight.... Because

why not, right? I mean, we're all adults, we're all

professionals here... I mean, professionals in totally different

arts, but you know, I can sense a certain simpatico vibe....

Actually I feel kind of like a mosasaur talking to dolphins

here.... We have a lot in common, we both swim, we both have big

sharp teeth, we both eat fish... But you look like a broadminded

crowd, so I'm sure you won't mind that I'm basically, like,

*reptilian*....

So anyway, you're probably wondering why I'm here tonight, some

hopeless dipshit literary author... and when am I going to get

started on the virtues and merits of the prose medium and its

goddamned wonderful storytelling. I mean, what else can I talk

about? What the hell do I know about game design? I don't even

know that the most lucrative target machine today is an IBM PC

clone with a 16-bit 8088 running at 5 MHz. If you start talking

about depth of play versus presentation, I'm just gonna stare

at you with blank incomprehension....

I'll tell you straight out why I'm here tonight. Why should I

even try to hide the sordid truth from a crowd this

perspicacious.... You see, six months ago I was in Austria at

this Electronic Arts Festival, which was a situation almost as

unlikely as this one, and my wife Nancy and I are sitting there

with William Gibson and Deb Gibson feeling very cool and rather

jetlagged and crispy around the edges, and in walks this

*woman.* Out of nowhere. Like J. Random Attractive Redhead,

right. And she sits down with her coffeecup right at our table.

And we peer at each other's namebadges, right, like, *who is

this person.* And her name is Brenda Laurel.

So what do I say? I say to this total stranger, I say, "Hey. Are

you the Brenda Laurel who did that book on *the art of the

computer-human interface*? You *are*? Wow, I loved that book."

And yes -- that's why I'm here as your guest speaker tonight,

ladies and gentleman. It's because I can think fast on my feet.

It's because I'm the kind of author who likes to hang out in

Adolf Hitler's home town with the High Priestess of Weird.

So ladies and gentlemen unfortunately I can't successfully

pretend that I know much about your profession. I mean actually

I do know a *few* things about your profession.... For instance,

I was on the far side of the Great Crash of 1984. I was one of

the civilian crashees, meaning that was about when I gave up

twitch games. That was when I gave up my Atari 800. As to why my

Atari 800 became a boat-anchor I'm still not sure.... It was

quite mysterious when it happened, it was inexplicable, kind of

like the passing of a pestilence or the waning of the moon. If I

understood this phenomenon I think I would really have my teeth

set into something profound and vitally interesting... Like, my

Atari still works today, I still own it. Why don't I get it out

of its box and fire up a few cartridges? Nothing physical

preventing me. Just some subtle but intense sense of revulsion.

Almost like a Sartrean nausea. Why this should be attached to a

piece of computer hardware is difficult to say.

My favorite games nowadays are Sim City, Sim Earth and Hidden

Agenda... I had Balance of the Planet on my hard disk, but I was

so stricken with guilt by the digitized photo of the author and

his spouse that I deleted the game, long before I could figure

out how to keep everybody on the Earth from starving....

Including myself and the author....

I'm especially fond of SimEarth. SimEarth is like a goldfish

bowl. I also have the actual goldfish bowl in the *After Dark*

Macintosh screen saver, but its charms waned for me, possibly

because the fish don't drive one another into extinction. I

theorize that this has something to do with a breakdown of the

old dichotomy of twitch games versus adventure, you know, arcade

zombie versus Mensa pinhead...

I can dimly see a kind of transcendence in electronic

entertainment coming with things like SimEarth, they seem like a

foreshadowing of what Alvin Toffler called the "intelligent

environment"... Not "games" in a classic sense, but things that

are just going on in the background somewhere, in an attractive

and elegant fashion, kind of like a pet cat... I think this kind

of digital toy might really go somewhere interesting.

What computer entertainment lacks most I think is a sense of

mystery. It's too left-brain.... I think there might be real

promise in game designs that offer less of a sense of nitpicking

mastery and control, and more of a sense of sleaziness and

bluesiness and smokiness. Not neat tinkertoy puzzles to be

decoded, not "treasure-hunts for assets," but creations with

some deeper sense of genuine artistic mystery.

I don't know if you've seen the work of a guy called William

Latham.... I got his work on a demo reel from Media Magic. I

never buy movies on video, but I really live for raw

computer-graphic demo reels. This William Latham is a heavy

dude... His tech isn't that impressive, he's got some kind of

fairly crude IBM mainframe CAD-CAM program in Winchester,

England.... The thing that's most immediately striking about

Latham's computer artworks -- *ghost sculptures* he calls them

-- is that the guy really possesses a sense of taste. Fractal

art tends to be quite garish. Latham's stuff is very fractally

and organic, it's utterly weird, but at the same time it's very

accomplished and subtle. There's a quality of ecstasy and dread

to it... there's a sense of genuine enchantment there. A lot of

computer games are stuffed to the gunwales with enchanters and

wizards and so-called magic, but that kind of sci-fi cod

mysticism seems like dime-store stuff by comparison with Latham.

I like to imagine the future of computer games as being

something like the Steve Jackson Games bust by the Secret

Service, only in this case what they were busting wouldn't have

been a mistake, it would have been something actually quite

seriously inexplicable and possibly even a genuine cultural

threat.... Something of the sort may come from virtual reality.

I rather imagine something like an LSD backlash occurring there;

something along the lines of: "Hey we have something here that

can really seriously boost your imagination!" "Well, Mr.

Developer, I'm afraid we here in the Food, Drug and Software

Administration don't really approve of that." That could happen.

I think there are some visionary computer police around who are

seriously interested in that prospect, they see it as a very

promising growing market for law enforcement, it's kind of their

version of a golden vaporware.

I now want to talk some about the differences between your art

and my art. My art, science fiction writing, is pretty new as

literary arts go, but it labors under the curse of three

thousand years of literacy. In some weird sense I'm in direct

competition with Homer and Euripides. I mean, these guys aren't

in the SFWA, but their product is still taking up valuable

rack-space. You guys on the other hand get to reinvent

everything every time a new platform takes over the field. This

is your advantage and your glory. This is also your curse. It's

a terrible kind of curse really.

This is a lesson about cultural expression nowadays that has

applications to everybody. This is part of living in the

Information Society. Here we are in the 90s, we have these

tremendous information-handling, information-producing

technologies. We think it's really great that we can have groovy

unleashed access to all these different kinds of data, we can

own books, we can own movies on tape, we can access databanks,

we can buy computer-games, records, music, art.... A lot of our

art aspires to the condition of software, our art today wants to

be digital... But our riches of information are in some deep and

perverse sense a terrible burden to us. They're like a cognitive

load. As a digitized information-rich culture nowadays, we have

to artificially invent ways to forget stuff. I think this is the

real explanation for the triumph of compact disks.

Compact disks aren't really all that much better than vinyl

records. What they gain in fidelity they lose in groovy cover

art. What they gain in playability they lose in presentation.

The real advantage of CDs is that they allow you to forget all

your vinyl records. You think you love this record collection

that you've amassed over the years. But really the sheer choice,

the volume, the load of memory there is secretly weighing you

down. You're never going to play those Alice Cooper albums

again, but you can't just throw them away, because you're a

culture nut.

But if you buy a CD player you can bundle up all those records

and put them in attic boxes without so much guilt. You can

pretend that you've stepped up a level, that now you're even

more intensely into music than you ever were; but on a practical

level what you're really doing is weeding this junk out of your

life. By dumping the platform you dump everything attached to

the platform and my god what a blessed secret relief. What a

relief not to remember it, not to think about it, not to have it

take up disk-space in your head.

Computer games are especially vulnerable to this because they

live and breathe through the platform. But something rather

similar is happening today to fiction as well.... What you see

in science fiction nowadays is an amazing tonnage of product

that is shuffled through the racks faster and faster.... If a

science fiction paperback stays available for six weeks, it's a

miracle. Gross sales are up, but individual sales are off...

Science fiction didn't even use to be *published* in book form;

when a science fiction *book* came out, it would be in an edition

of maybe five hundred copies and these weirdo Golden Age SF fans

would cling on to every copy as if it were made of platinum....

But now they come out and they are made to vanish as soon as

possible. In fact to a great extent they're designed by their

lame hack authors to vanish as soon as possible. They're cliches

because cliches are less of a cognitive load. You can write a

whole trilogy instead, bet you can't eat just one...

Nevertheless they're still objects in the medium of print. They

still have the cultural properties of print.

Culturally speaking they're capable of lasting a long time

because they can be replicated faithfully in new editions that

have all the same properties as the old ones. Books are

independent of the machineries of book production, the platforms

of publishing. Books don't lose anything by being reprinted by a

new machine, books are stubborn, they remain the same work of

art, they carry the same cultural aura. Books are hard to kill.

MOBY DICK, for instance, bombed when it came out; it wasn't until

the 1920s that MOBY DICK was proclaimed a masterpiece, and then

it got printed in millions. Emily Dickinson didn't even publish

books, she just wrote these demented little poems with a quill

pen and hid them in her desk, but they still fought their way

into the world, and lasted on and on and on. It's damned hard to

get rid of Emily Dickinson, she hangs on like a tick in a dog's

ear. And everybody who writes from then on in some sense has to

measure up to this woman. In the art of book-writing the

classics are still living competition, they tend to elevate the

entire art-form by their persistent presence.

I've noticed though that computer game designers don't look much

to the past. All their idealized classics tend to be in reverse,

they're projected into the future. When you're a game designer

and you're waxing very creative and arty, you tend to measure

your work by stuff that doesn't exist yet. Like now we only have

floppies, but wait till we get CD-ROM. Like now we can't have

compelling lifelike artificial characters in the game, but wait

till we get AI. Like now we waste time porting games between

platforms, but wait till there's just one standard. Like now

we're just starting with huge multiplayer games, but wait till

the modem networks are a happening thing. And I -- as a game

designer artiste -- it's my solemn duty to carry us that much

farther forward toward the beckoning grail....

For a novelist like myself this is a completely alien paradigm.

I can see that it's very seductive, but at the same time I can't

help but see that the ground is crumbling under your feet. Every

time a platform vanishes it's like a little cultural apocalypse.

And I can imagine a time when all the current platforms might

vanish, and then what the hell becomes of your entire mode of

expression? Alan Kay -- he's a heavy guy, Alan Kay -- he says

that computers may tend to shrink and vanish into the

environment, into the walls and into clothing.... Sounds pretty

good.... But this also means that all the joysticks vanish, all

the keyboards, all the repetitive strain injuries.

I'm sure you could play some kind of computer game with very

intelligent, very small, invisible computers.... You could have

some entertaining way to play with them, or more likely they

would have some entertaining way to play with you. But then

imagine yourself growing up in that world, being born in that

world. You could even be a computer game designer in that world,

but how would you study the work of your predecessors? How would

you physically *access* and *experience* the work of your

predecessors? There's a razor-sharp cutting edge in this

art-form, but what happened to all the stuff that got sculpted?

As I was saying, I don't think it's any accident that this is

happening.... I don't think that as a culture today we're very

interested in tradition or continuity. No, we're a lot more

interested in being a New Age and a revolutionary epoch, we long

to reinvent ourselves every morning before breakfast and never

grow old. We have to run really fast to stay in the same place.

We've become used to running, if we sit still for a while it

makes us feel rather stale and panicky. We'd miss those

sixty-hour work weeks.

And much the same thing is happening to books today too.... Not

just technically, but ideologically. I don't know if you're

familiar at all with literary theory nowadays, with terms like

deconstructionism, postmodernism.... Don't worry, I won't talk

very long about this.... It can make you go nuts, that stuff,

and I don't really recommend it, it's one of those fields of

study where it's sometimes wise to treasure your ignorance....

But the thing about the new literary theory that's remarkable,

is that it makes a really violent break with the past.... These

guys don't take the books of the past on their own cultural

terms. When you're deconstructing a book it's like you're

psychoanalyzing it, you're not studying it for what it says,

you're studying it for the assumptions it makes and the cultural

reasons for its assemblage.... What this essentially means is

that you're not letting it touch you, you're very careful not to

let it get its message through or affect you deeply or

emotionally in any way. You're in a position of complete

psychological and technical superiority to the book and its

author... This is a way for modern litterateurs to handle this

vast legacy of the past without actually getting any of the

sticky stuff on you. It's like it's dead. It's like the next

best thing to not having literature at all. For some reason this

feels really good to people nowadays.

But even that isn't enough, you know.... There's talk nowadays

in publishing circles about a new device for books, called a

ReadMan. Like a Walkman, only you carry it in your hands like

this.... Has a very nice little graphics screen, theoretically,

a high-definition thing, very legible.... And you play your

books on it.... You buy the book as a floppy and you stick it

in... And just think, wow you can even have graphics with your

book... you can have music, you can have a soundtrack....

Narration.... Animated illustrations... Multimedia... it can

even be interactive.... It's the New Hollywood for Publishers'

Row, and at last books can aspire to the exalted condition of

movies and cartoons and TV and computer games.... And just think

when the ReadMan goes obsolete, all the product that was written

for it will be blessedly gone forever!!! Erased from the memory

of mankind!

Now I'm the farthest thing from a Luddite, ladies and gentlemen,

but when I contemplate this particular technical marvel my

author's blood runs cold... It's really hard for books to

compete with other multisensory media, with modern electronic

media, and this is supposed to be the panacea for withering

literature, but from the marrow of my bones I say get that

fucking little sarcophagus away from me. For God's sake don't

put my books into the Thomas Edison kinetoscope. Don't put me

into the stereograph, don't write me on the wax cylinder, don't

tie my words and my thoughts to the fate of a piece of hardware,

because hardware is even more mortal than I am, and I'm a hell

of a lot more mortal than I care to be. Mortality is one good

reason why I'm writing books in the first place. For God's sake

don't make me keep pace with the hardware, because I'm not

really in the business of keeping pace, I'm really in the

business of marking place.

Okay.... Now I've sometimes heard it asked why computer game

designers are deprived of the full artistic respect they

deserve. God knows they work hard enough. They're really

talented too, and by any objective measure of intelligence they

rank in the top percentiles... I've heard it said that maybe

this problem has something to do with the size of the author's

name on the front of the game-box. Or it's lone wolves versus

teams, and somehow the proper allotment of fame gets lost in the

muddle. One factor I don't see mentioned much is the sheer lack

of stability in your medium. A modern movie-maker could probably

make a pretty good film with D.W. Griffith's equipment, but you

folks are dwelling in the very maelstrom of Permanent

Technological Revolution. And that's a really cool place, but

man, it's just not a good place to build monuments.

Okay. Now I live in the same world you live in, I hope I've

demonstrated that I face a lot of the same problems you face...

Believe me there are few things deader or more obsolescent than

a science fiction novel that predicts the future when the future

has passed it by. Science fiction is a pop medium and a very

obsolescent medium. The fact that written science fiction is a

prose medium gives us some advantages, but even science fiction

has a hard time wrapping itself in the traditional mantle of

literary excellence... we try to do this sometimes, but

generally we have to be really drunk first. Still, if you want

your work to survive (and some science fiction *does* survive,

very successfully) then your work has to capture some quality

that lasts. You have to capture something that people will

search out over time, even though they have to fight their way

upstream against the whole rushing current of obsolescence and

innovation.

And I've come up with a strategy for attempting this. Maybe

it'll work -- probably it won't -- but I wouldn't be complaining

so loudly if I didn't have some kind of strategy, right? And I

think that my strategy may have some relevance to game designers

so I presume to offer it tonight.

This is the point at which your normal J. Random Author trots

out the doctrine of the Wonderful Power of Storytelling. Yes,

storytelling, the old myth around the campfire, blind Homer,

universal Shakespeare, this is the art ladies and gentlemen that

strikes to the eternal core of the human condition... This is

high art and if you don't have it you are dust in the wind.... I

can't tell you how many times I have heard this bullshit... This

is known in my field as the "Me and My Pal Bill Shakespeare"

argument. Since 1982 I have been at open war with people who

promulgate this doctrine in science fiction and this is the

primary reason why my colleagues in SF speak of me in fear and

trembling as a big bad cyberpunk... This is the classic doctrine

of Humanist SF.

This is what it sounds like when it's translated into your

jargon. Listen closely:

"Movies and plays get much of their power from the resonances

between the structural layers. The congruence between the theme,

plot, setting and character layouts generates emotional power.

Computer games will never have a significant theme level because

the outcome is variable. The lack of theme alone will limit the

storytelling power of computer games."

Hard to refute. Impossible to refute. Ladies and gentlemen, to

hell with the marvellous power of storytelling. If the audience

for science fiction wanted *storytelling*, they wouldn't read

goddamned *science fiction,* they'd read Harper's and Redbook and

Argosy. The pulp magazine (which is our genre's primary example

of a dead platform) used to carry all kinds of storytelling.

Western stories. Sailor stories. Prizefighting stories. G-8 and

His Battle Aces. Spicy Garage Tales. Aryan Atrocity Adventures.

These things are dead. Stories didn't save them. Stories won't

save us. Stories won't save *you.*

This is not the route to follow. We're not into science fiction

because it's *good literature,* we're into it because it's

*weird*. Follow your weird, ladies and gentlemen. Forget trying

to pass for normal. Follow your geekdom. Embrace your nerditude.

In the immortal words of Lafcadio Hearn, a geek of incredible

obscurity whose work is still in print after a hundred years,

"woo the muse of the odd." A good science fiction story is not a

"good story" with a polite whiff of rocket fuel in it. A good

science fiction story is something that knows it is science

fiction and plunges through that and comes roaring out of the

other side. Computer entertainment should not be more like

movies, it shouldn't be more like books, it should be more like

computer entertainment, SO MUCH MORE LIKE COMPUTER ENTERTAINMENT

THAT IT RIPS THROUGH THE LIMITS AND IS SIMPLY IMPOSSIBLE TO

IGNORE!

I don't think you can last by meeting the contemporary public

taste, the taste from the last quarterly report. I don't think

you can last by following demographics and carefully meeting

expectations. I don't know many works of art that last that are

condescending. I don't know many works of art that last that are

deliberately stupid. You may be a geek, you may have geek

written all over you; you should aim to be one geek they'll

never forget. Don't aim to be civilized. Don't hope that

straight people will keep you on as some kind of pet. To hell

with them; they put you here. You should fully realize what

society has made of you and take a terrible revenge. Get weird.

Get way weird. Get dangerously weird. Get sophisticatedly,

thoroughly weird and don't do it halfway, put every ounce of

horsepower you have behind it. Have the artistic *courage* to

recognize your own significance in culture!

Okay. Those of you into SF may recognize the classic rhetoric of

cyberpunk here. Alienated punks, picking up computers, menacing

society.... That's the cliched press story, but they miss the

best half. Punk into cyber is interesting, but cyber into punk

is way dread. I'm into technical people who attack pop culture.

I'm into techies gone dingo, techies gone rogue -- not street

punks picking up any glittery junk that happens to be within

their reach -- but disciplined people, intelligent people,

people with some technical skills and some rational thought, who

can break out of the arid prison that this society sets for its

engineers. People who are, and I quote, "dismayed by nearly

every aspect of the world situation and aware on some nightmare

level that the solutions to our problems will not come from the

breed of dimwitted ad-men that we know as politicians." Thanks,

Brenda!

That still smells like hope to me....

You don't get there by acculturating. Don't become a

well-rounded person. Well-rounded people are smooth and dull.

Become a thoroughly spiky person. Grow spikes from every angle.

Stick in their throats like a pufferfish. If you want to woo the

muse of the odd, don't read Shakespeare. Read Webster's revenge

plays. Don't read Homer and Aristotle. Read Herodotus where he's

off talking about Egyptian women having public sex with goats.

If you want to read about myth don't read Joseph Campbell, read

about convulsive religion, read about voodoo and the Millerites

and the Munster Anabaptists. There are hundreds of years of

extremities, there are vast legacies of mutants. There have

always been geeks. There will always be geeks. Become the

apotheosis of geek. Learn who your spiritual ancestors were. You

didn't come here from nowhere. There are reasons why you're

here. Learn those reasons. Learn about the stuff that was buried

because it was too experimental or embarrassing or inexplicable

or uncomfortable or dangerous.

And when it comes to studying art, well, study it, but study it

to your own purposes. If you're obsessively weird enough to be a

good weird artist, you generally face a basic problem. The basic

problem with weird art is not the height of the ceiling above

it, it's the pitfalls under its feet. The worst problem is the

blundering, the solecisms, the naivete of the poorly socialized,

the rotten spots that you skid over because you're too freaked

out and not paying proper attention. You may not need much

characterization in computer entertainment. Delineating

character may not be the point of your work. That's no excuse

for making lame characters that are actively bad. You may not

need a strong, supple, thoroughly worked-out storyline. That

doesn't mean that you can get away with a stupid plot made of

chickenwire and spit. Get a full repertoire of tools. Just make

sure you use those tools to the proper end. Aim for the heights

of professionalism. Just make sure you're a professional *game

designer.*

You can get a hell of a lot done in a popular medium just by

knocking it off with the bullshit. Popular media always reek of

bullshit, they reek of carelessness and self-taught clumsiness

and charlatanry. To live outside the aesthetic laws you must be

honest. Know what you're doing; don't settle for the way it

looks just because everybody's used to it. If you've got a palette

of 2 million colors, then don't settle for designs that look

like a cheap four-color comic book. If you're gonna do graphic

design, then learn what good graphic design looks like; don't

screw around in amateur fashion out of sheer blithe ignorance.

If you write a manual, don't write a semiliterate manual with

bad grammar and misspellings. If you want to be taken seriously

by your fellows and by the populace at large, then don't give

people any excuse to dismiss you. Don't be your own worst enemy.

Don't put yourself down.

I have my own prejudices and probably more than my share, but I

still think these are pretty good principles. There's nothing

magic about 'em. They certainly don't guarantee success, but

then there's "success" and then there's success. Working

seriously, improving your taste and perception and

understanding, knowing what you are and where you came from, not

only improves your work in the present, but gives you a chance

of influencing the future and links you to the best work of the

past. It gives you a place to take a solid stand. I try to live

up to these principles; I can't say I've mastered them, but

they've certainly gotten me into some interesting places, and

among some very interesting company. Like the people here

tonight.

I'm not really here by any accident. I'm here because I'm

*paying attention.* I'm here because I know you're significant.

I'm here because I know you're important. It's a privilege to

be here. Thanks very much for having me, and showing me what you

do.

That's all I have to say to you tonight. Thanks very much for

listening.

CyberView '91

They called it "CyberView '91." Actually, it was another "SummerCon" -- the traditional summer gathering of the American hacker underground. The organizer, 21-year-old "Knight Lightning," had recently beaten a Computer Fraud and Abuse rap that might have put him in jail for thirty years. A little discretion seemed in order.

The convention hotel, a seedy but accommodating motor-inn outside the airport in St Louis, had hosted SummerCons before. Changing the name had been a good idea. If the staff were alert, and actually recognized that these were the same kids back again, things might get hairy.

The SummerCon '88 hotel was definitely out of bounds. The US Secret Service had set up shop in an informant's room that year, and videotaped the drunken antics of the now globally notorious "Legion of Doom" through a one-way mirror. The running of SummerCon '88 had constituted a major count of criminal conspiracy against young Knight Lightning, during his 1990 federal trial.

That hotel inspired sour memories. Besides, people already got plenty nervous playing "hunt the fed" at SummerCon gigs. SummerCons generally featured at least one active federal informant. Hackers and phone phreaks like to talk a lot. They talk about phones and computers -- and about each other.

For insiders, the world of computer hacking is a lot like Mexico. There's no middle class. There's a million little kids screwing around with their modems, trying to snitch long-distance phone-codes, trying to swipe pirated software -- the "kodez kidz" and "warez doodz." They're peons, "rodents."

Then there's a few earnest wannabes, up-and-comers, pupils. Not many. Less of 'em every year, lately.

And then there's the heavy dudes. The players. The Legion of Doom are definitely heavy. Germany's Chaos Computer Club are very heavy, and already back out on parole after their dire flirtation with the KGB. The Masters of Destruction in New York are a pain in the ass to their rivals in the underground, but ya gotta admit they are heavy. MoD's "Phiber Optik" has almost completed his public-service sentence, too... "Phoenix" and his crowd down in Australia used to be heavy, but nobody's heard much out of "Nom" and "Electron" since the Australian heat came down on them.

The people in Holland are very active, but somehow the Dutch hackers don't quite qualify as "heavy." Probably because computer-hacking is legal in Holland, and therefore nobody ever gets busted for it. The Dutch lack the proper bad attitude, somehow.

America's answer to the Dutch menace began arriving in a steady confusion of airport shuttle buses and decaying college-kid junkers. A software pirate, one of the more prosperous attendees, flaunted a radar-detecting black muscle-car. In some dim era before the jet age, this section of St Louis had been a mellow, fertile Samuel Clemens landscape. Waist-high summer weeds still flourished beside the four-lane highway and the airport feeder roads.

The graceless CyberView hotel had been slammed down onto this landscape as if dropped from a B-52. A small office-tower loomed in one corner beside a large parking garage. The rest was a rambling mess of long, narrow, dimly lit corridors, with a small swimming pool, a glass-fronted souvenir shop and a cheerless dining room. The hotel was clean enough, and the staff, despite provocation, proved adept at minding their own business. For their part, the hackers seemed quite fond of the place.

The term "hacker" has had a spotted history. Real "hackers," traditional "hackers," like to write software programs. They like to "grind code," plunging into its densest abstractions until the world outside the computer terminal bleaches away. Hackers tend to be portly white techies with thick fuzzy beards who talk entirely in jargon, stare into space a lot, and laugh briefly for no apparent reason. The CyberView crowd, though they call themselves "hackers," are better identified as computer intruders. They don't look, talk or act like 60s M.I.T.-style hackers.

Computer intruders of the 90s aren't stone pocket-protector techies. They're young white suburban males, and look harmless enough, but sneaky. They're much the kind of kid you might find skinny-dipping at 2AM in a backyard suburban swimming pool. The kind of kid who would freeze in the glare of the homeowner's flashlight, then frantically grab his pants and leap over the fence, leaving behind a half-empty bottle of tequila, a Metallica T-shirt, and, probably, his wallet.

One might wonder why, in the second decade of the personal-computer revolution, most computer intruders are still suburban teenage white whiz-kids. Hacking-as-computer-intrusion has been around long enough to have bred an entire generation of serious, heavy-duty adult computer-criminals. Basically, this simply hasn't occurred. Almost all computer intruders simply quit after age 22. They get bored with it, frankly. Sneaking around in other people's swimming pools simply loses its appeal. They get out of school. They get married. They buy their own swimming pools. They have to find some replica of a real life.

The Legion of Doom -- or rather, the Texas wing of LoD -- had hit Saint Louis in high style, this weekend of June 22. The Legion of Doom has been characterized as "a high-tech street gang" by the Secret Service, but this is surely one of the leakiest, goofiest and best-publicized criminal conspiracies in American history.

Not much has been heard from Legion founder "Lex Luthor" in recent years. The Legion's Atlanta wing, "Prophet," "Leftist," and "Urvile," are just now getting out of various prisons and into Georgia halfway-houses. "Mentor" got married and writes science fiction games for a living.

But "Erik Bloodaxe," "Doc Holiday," and "Malefactor" were here -- in person, and in the current issues of TIME and NEWSWEEK. CyberView offered a swell opportunity for the Texan Doomsters to announce the formation of their latest high-tech, uhm, organization, "Comsec Data Security Corporation."

Comsec boasts a corporate office in Houston, and a marketing analyst, and a full-scale corporate computer-auditing program. The Legion boys are now digital guns for hire. If you're a well-heeled company, and you can cough up per diem and air-fare, the most notorious computer-hackers in America will show right up on your doorstep and put your digital house in order -- guaranteed.

Bloodaxe, a limber, strikingly handsome young Texan with shoulder-length blond hair, mirrored sunglasses, a tie, and a formidable gift of gab, did the talking. Before some thirty of his former peers, gathered upstairs over styrofoam coffee and canned Coke in the hotel's Mark Twain Suite, Bloodaxe sternly announced some home truths of modern computer security.

Most so-called "computer security experts" -- (Comsec's competitors) -- are overpriced con artists! They charge gullible corporations thousands of dollars a day, just to advise that management lock its doors at night and use paper shredders. Comsec Corp, on the other hand (with occasional consultant work from Messrs. "Pain Hertz" and "Prime Suspect") boasts America's most formidable pool of genuine expertise at actually breaking into computers.

Comsec, Bloodaxe continued smoothly, was not in the business of turning in any former hacking compatriots. Just in case anybody here was, you know, worrying... On the other hand, any fool rash enough to challenge a Comsec-secured system had better be prepared for a serious hacker-to-hacker dust-up.

"Why would any company trust *you*?" someone asked languidly.

Malefactor, a muscular young Texan with close-cropped hair and the build of a linebacker, pointed out that, once hired, Comsec would be allowed inside the employer's computer system, and would have no reason at all to "break in." Besides, Comsec agents were to be licensed and bonded.

Bloodaxe insisted passionately that LoD were through with hacking for good. There was simply no future in it. The time had come for LoD to move on, and corporate consultation was their new frontier. (The career options of committed computer intruders are, when you come right down to it, remarkably slim.)

"We don't want to be flippin' burgers or sellin' life insurance when we're thirty," Bloodaxe drawled. "And wonderin' when Tim Foley is gonna come kickin' in the door!" (Special Agent Timothy M. Foley of the US Secret Service has fully earned his reputation as the most formidable anti-hacker cop in America.)

Bloodaxe sighed wistfully. "When I look back at my life... I can see I've essentially been in school for eleven years, teaching myself to be a computer security consultant."

After a bit more grilling, Bloodaxe finally got to the core of matters. Did anybody here hate them now? he asked, almost timidly. Did people think the Legion had sold out? Nobody offered this opinion. The hackers shook their heads, they looked down at their sneakers, they had another slug of Coke. They didn't seem to see how it would make much difference, really. Not at this point.

Over half the attendees of CyberView publicly claimed to be out of the hacking game now. At least one hacker present -- (who had shown up, for some reason known only to himself, wearing a blond wig and a dime-store tiara, and was now catching flung Cheetos in his styrofoam cup) -- already made his living "consulting" for private investigators.

Almost everybody at CyberView had been busted, had had their computers seized, or had, at least, been interrogated -- and when federal police put the squeeze on a teenage hacker, he generally spills his guts.

By '87, a mere year or so after they plunged seriously into anti-hacker enforcement, the Secret Service had workable dossiers on everybody that really mattered. By '89, they had files on practically every last soul in the American digital underground. The problem for law enforcement has never been finding out who the hackers are. The problem has been figuring out what the hell they're really up to, and, harder yet, trying to convince the public that it's actually important and dangerous to public safety.

From the point of view of hackers, the cops have been acting wacky lately. The cops, and their patrons in the telephone companies, just don't understand the modern world of computers, and they're scared. "They think there are masterminds running spy-rings who employ us," a hacker told me. "They don't understand that we don't do this for money, we do it for power and knowledge." Telephone security people who reach out to the underground are accused of divided loyalties and fired by panicked employers. A young Missourian coolly psychoanalyzed the opposition. "They're overdependent on things they don't understand. They've surrendered their lives to computers."

"Power and knowledge" may seem odd motivations. "Money" is a lot easier to understand. There are growing armies of professional thieves who rip-off phone service for money. Hackers, though, are into, well, power and knowledge. This has made them easier to catch than the street-hustlers who steal access codes at airports. It also makes them a lot scarier.

Take the increasingly dicey problems posed by "Bulletin Board Systems." "Boards" are home computers tied to home telephone lines, that can store and transmit data over the phone -- written texts, software programs, computer games, electronic mail. Boards were invented in the late 70s, and, while the vast majority of boards are utterly harmless, some few piratical boards swiftly became the very backbone of the 80s digital underground. Over half the attendees of CyberView ran their own boards. "Knight Lightning" had run an electronic magazine, "Phrack," that appeared on many underground boards across America.
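
For readers who never dialed into one, the mechanics of a board are
almost trivially simple: answer a call, store what the caller types,
play it back to the next caller. Here is a minimal sketch in Python --
my own illustration, not anything from the underground -- with a TCP
socket standing in for the modem and phone line; the name "TOYBASE"
and port 2323 are invented for the example.

    import socketserver

    MESSAGES = []  # the board's entire "message base," held in memory

    class BoardHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Greet the caller, read one command line, act on it.
            self.wfile.write(b"WELCOME TO TOYBASE. Commands: POST <text>, READ\r\n")
            line = self.rfile.readline().decode("ascii", "replace").strip()
            if line.upper().startswith("POST "):
                MESSAGES.append(line[5:])      # store the caller's message
                self.wfile.write(b"STORED.\r\n")
            elif line.upper() == "READ":
                for i, msg in enumerate(MESSAGES, 1):   # play the base back
                    self.wfile.write(f"{i}: {msg}\r\n".encode("ascii", "replace"))
            else:
                self.wfile.write(b"NO CARRIER\r\n")     # hang up on nonsense

    if __name__ == "__main__":
        # Like a real one-line board, this serves one caller at a time.
        with socketserver.TCPServer(("127.0.0.1", 2323), BoardHandler) as srv:
            srv.serve_forever()

Everything else a board carries -- anarchy files, pirated warez, Phrack back-issues -- is just more text in the message base.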

Boards are mysterious. Boards are conspiratorial. Boards have been accused of harboring: Satanists, anarchists, thieves, child pornographers, Aryan nazis, religious cultists, drug dealers -- and, of course, software pirates, phone phreaks, and hackers. Underground hacker boards were scarcely reassuring, since they often sported terrifying sci-fi heavy-metal names, like "Speed Demon Elite," "Demon Roach Underground," and "Black Ice." (Modern hacker boards tend to feature defiant titles like "Uncensored BBS," "Free Speech," and "Fifth Amendment.")

Underground boards carry stuff as vile and scary as, say, 60s-era underground newspapers -- from the time when Yippies hit Chicago and ROLLING STONE gave away free roach-clips to subscribers. "Anarchy files" are popular features on outlaw boards, detailing how to build pipe-bombs, how to make Molotovs, how to brew methedrine and LSD, how to break and enter buildings, how to blow up bridges, the easiest ways to kill someone with a single blow of a blunt object -- and these boards bug straight people a lot. Never mind that all this data is publicly available in public libraries where it is protected by the First Amendment. There is something about its being on a computer -- where any teenage geek with a modem and keyboard can read it, and print it out, and spread it around, free as air -- there is something about that, that is creepy.

"Brad" is a New Age pagan from Saint Louis who runs a service known as "WEIRDBASE," available on an international network of boards called "FidoNet." Brad was mired in an interminable scandal when his readers formed a spontaneous underground railroad to help a New Age warlock smuggle his teenage daughter out of Texas, away from his fundamentalist Christian in-laws, who were utterly convinced that he had murdered his wife and intended to sacrifice his daughter to -- *Satan*! The scandal made local TV in Saint Louis. Cops came around and grilled Brad. The patchouli stench of Aleister Crowley hung heavy in the air. There was just no end to the hassle.

If you're into something goofy and dubious and you have a board about it, it can mean real trouble. Science-fiction game publisher Steve Jackson had his board seized in 1990. Some cryogenics people in California, who froze a woman for post-mortem preservation before she was officially, er, "dead," had their computers seized. People who sell dope-growing equipment have had their computers seized. In 1990, boards all over America went down: Illuminati, CLLI Code, Phoenix Project, Dr. Ripco. Computers are seized as "evidence," but since they can be kept indefinitely for study by police, this veers close to confiscation and punishment without trial. One good reason why Mitchell Kapor showed up at CyberView.

Mitch Kapor was the co-inventor of the mega-selling business program LOTUS 1-2-3 and the founder of the software giant, Lotus Development Corporation. He is currently the president of a newly-formed electronic civil liberties group, the Electronic Frontier Foundation. Kapor, now 40, customarily wears Hawaiian shirts and is your typical post-hippie cybernetic multimillionaire. He and EFF's chief legal counsel, "Johnny Mnemonic," had flown in for the gig in Kapor's private jet.

Kapor had been dragged willy-nilly into the toils of the digital underground when he received an unsolicited floppy-disk in the mail, from an outlaw group known as the "NuPrometheus League."

These rascals (still not apprehended) had stolen confidential proprietary software from Apple Computer, Inc., and were distributing it far and wide in order to blow Apple's trade secrets and humiliate the company. Kapor assumed that the disk was a joke, or, more likely, a clever scheme to infect his machines with a computer virus.

But when the FBI showed up, at Apple's behest, Kapor was shocked at the extent of their naivete. Here were these well-dressed federal officials, politely "Mr. Kapor"-ing him right and left, ready to carry out a war to the knife against evil marauding "hackers." They didn't seem to grasp that "hackers" had built the entire personal computer industry. Jobs was a hacker, Wozniak too, even Bill Gates, the youngest billionaire in the history of America -- all "hackers." The new buttoned-down regime at Apple had blown its top, and as for the feds, they were willing, but clueless. Well, let's be charitable -- the feds were "cluefully challenged." "Clue-impaired." "Differently clued...."

Back in the 70s (as Kapor recited to the hushed and respectful young hackers) he himself had practiced "software piracy" -- as those activities would be known today. Of course, back then, "computer software" hadn't been a major industry -- but today, "hackers" had police after them for doing things that the industry's own pioneers had pulled routinely. Kapor was irate about this. His own personal history, the lifestyle of his pioneering youth, was being smugly written out of the historical record by the latter-day corporate androids. Why, nowadays, people even blanched when Kapor forthrightly declared that he'd done LSD in the Sixties.

Quite a few of the younger hackers grew alarmed at this admission of Kapor's, and gazed at him in wonder, as if expecting him to explode.

"The law only has sledgehammers, when what we need are parking tickets and speeding tickets," Kapor said. Anti-hacker hysteria had gripped the nation in 1990. Huge law enforcement efforts had been mounted against illusory threats. In Washington DC, on the very day when the formation of the Electronic Frontier Foundation had been announced, a Congressional committee had been formally presented with the plotline of a thriller movie -- DIE HARD II, in which hacker terrorists seize an airport computer -- as if this Hollywood fantasy posed a clear and present danger to the American republic. A similar hacker thriller, WAR GAMES, had been presented to Congress in the mid-80s. Hysteria served no one's purposes, and created a stampede of foolish and unenforceable laws likely to do more harm than good.

Kapor didn't want to "paper over the differences" between his Foundation and the underground community. In the firm opinion of EFF, intruding into computers by stealth was morally wrong. Like stealing phone service, it deserved punishment. Not draconian ruthlessness, though. Not the ruination of a youngster's entire life.

After a lively and quite serious discussion of digital free-speech issues, the entire crew went to dinner at an Italian eatery in the local mall, on Kapor's capacious charge-tab. Having said his piece and listened with care, Kapor began glancing at his watch. Back in Boston, his six-year-old son was waiting at home, with a new Macintosh computer-game to tackle. A quick phone-call got the jet warmed up, and Kapor and his lawyer split town.

With the forces of conventionality -- such as they were -- out of the picture, the Legion of Doom began to get heavily into "Mexican Flags." A Mexican Flag is a lethal, multi-layer concoction of red grenadine, white tequila and green creme-de-menthe. It is topped with a thin layer of 150-proof rum, set afire, and sucked up through straws.

The formal fire-and-straw ritual soon went by the board as things began to disintegrate. Wandering from room to room, the crowd became howlingly rowdy, though without creating trouble, as the CyberView crowd had wisely taken over an entire wing of the hotel.

"Crimson Death," a cheerful, baby-faced young hardware expert with a pierced nose and three earrings, attempted to hack the hotel's private phone system, but only succeeded in cutting off phone service to his own room.

Somebody announced there was a cop guarding the next wing of the hotel. Mild panic ensued. Drunken hackers crowded to the window.

A gentleman slipped quietly through the door of the next wing wearing a short terrycloth bathrobe and spangled silk boxer shorts.

Spouse-swappers had taken over the neighboring wing of the hotel, and were holding a private weekend orgy. It was a St Louis swingers' group. It turned out that the cop guarding the entranceway was an off-duty swinger himself. He'd angrily threatened to clobber Doc Holiday. Another swinger almost punched out "Bill from RNOC," whose prurient hacker curiosity, naturally, knew no bounds.

It was not much of a contest. As the weekend wore on and the booze flowed freely, the hackers slowly but thoroughly infiltrated the hapless swingers, who proved surprisingly open and tolerant. At one point, they even invited a group of hackers to join in their revels, though "they had to bring their own women."

Despite the pulverizing effects of numerous Mexican Flags, Comsec Data Security seemed to be having very little trouble on that score. They'd vanished downtown brandishing their full-color photo in TIME magazine, and returned with an impressive depth-core sample of St Louis womanhood, one of whom, in an idle moment, broke into Doc Holiday's room, emptied his wallet, and stole his Sony tape recorder and all his shirts.

Events stopped dead for the season's final episode of STAR TREK: THE NEXT GENERATION. The show passed in rapt attention -- then it was back to harassing the swingers. Bill from RNOC cunningly out-waited the swinger guards, infiltrated the building, and decorated all the closed doors with globs of mustard from a pump-bottle.

In the hungover glare of Sunday morning, a hacker proudly showed me a large handlettered placard reading PRIVATE -- STOP, which he had stolen from the unlucky swingers on his way out of their wing. Somehow, he had managed to work his way into the building, and had suavely ingratiated himself into a bedroom, where he had engaged a swinging airline ticket-agent in a long and most informative conversation about the security of airport computer terminals. The ticket agent's wife, at the time, was sprawled on the bed engaging in desultory oral sex with a third gentleman. It transpired that she herself did a lot of work on LOTUS 1-2-3. She was thrilled to hear that the program's inventor, Mitch Kapor, had been in that very hotel, that very weekend.

Mitch Kapor. Right over there? Here in St Louis? Wow. Isn't life strange.

Bruce Sterling

[email protected]

Literary Freeware: Not For Commercial Use

From THE MAGAZINE OF FANTASY AND SCIENCE FICTION, June 1992

F&SF, Box 56, Cornwall CT 06753 $26/yr; outside USA $31/yr

F&SF Science Column #1

OUTER CYBERSPACE

Dreaming of space-flight, and predicting its future, have

always been favorite pastimes of science fiction. In my first science

column for F&SF, I can't resist the urge to contribute a bit to this

grand tradition.

A science-fiction writer in 1991 has a profound advantage over

the genre's pioneers. Nowadays, space-exploration has a past as

well as a future. "The conquest of space" can be judged today, not

just by dreams, but by a real-life track record.

Some people sincerely believe that humanity's destiny lies in the

stars, and that humankind evolved from the primordial slime in order

to people the galaxy. These are interesting notions: mystical and

powerful ideas with an almost religious appeal. They also smack a

little of Marxist historical determinism, which is one reason why the

Soviets found them particularly attractive.

Americans can appreciate mystical blue-sky rhetoric as well as

anybody, but the philosophical glamor of "storming the cosmos"

wasn't enough to motivate an American space program all by itself.

Instead, the Space Race was a creation of the Cold War -- its course

was firmly set in the late '50s and early '60s. Americans went into

space *because* the Soviets had gone into space, and because the

Soviets were using Sputnik and Yuri Gagarin to make a case that

their way of life was superior to capitalism.

The Space Race was a symbolic tournament for the newfangled

intercontinental rockets whose primary purpose (up to that point) had

been as instruments of war. The Space Race was the harmless,

symbolic, touch-football version of World War III. For this reason

alone: that it did no harm, and helped avert a worse clash -- in my

opinion, the Space Race was worth every cent. But the fact that it was

a political competition had certain strange implications.

Because of this political aspect, NASA's primary product was

never actual "space exploration." Instead, NASA produced public-

relations spectaculars. The Apollo project was the premiere example.

The astonishing feat of landing men on the moon was a tremendous

public-relations achievement, and it pretty much crushed the Soviet

opposition, at least as far as "space-racing" went.

On the other hand, like most "spectaculars," Apollo delivered

rather little in the way of permanent achievement. There was flag-

waving, speeches, and plaque-laying; a lot of wonderful TV coverage;

and then the works went into mothballs. We no longer have the

capacity to fly human beings to the moon. No one else seems

particularly interested in repeating this feat, either; even though the

Europeans, Indians, Chinese and Japanese all have their own space

programs today. (Even the Arabs, Canadians, Australians and

Indonesians have their own satellites now.) 

In 1991, NASA remains firmly in the grip of the "Apollo

Paradigm." The assumption was (and is) that only large, spectacular

missions with human crews aboard can secure political support for

NASA, and deliver the necessary funding to support its eleven-billion-

dollar-a-year bureaucracy. "No Buck Rogers, no bucks."

The march of science -- the urge to actually find things out

about our solar system and our universe -- has never been the driving

force for NASA. NASA has been a very political animal; the space-

science community has fed on its scraps.

Unfortunately for NASA, a few historical home truths are

catching up with the high-tech white-knights.

First and foremost, the Space Race is over. There is no more

need for this particular tournament in 1992, because the Soviet

opposition is in abject ruins. The Americans won the Cold War. In

1992, everyone in the world knows this. And yet NASA is still running

space-race victory laps.

What's worse, the Space Shuttle -- one orbiter blew up in 1986 --

is clearly a white elephant. The Shuttle is overly complex, over-

designed, the creature of bureaucratic decision-making which tried to

provide all things for all constituents, and ended up with an

unworkable monster. The Shuttle was grotesquely over-promoted,

and it will never fulfill the outrageous promises made for it in the '70s.

It's not and never will be a "space truck." It's rather more like a Ming

vase.

Space Station Freedom has very similar difficulties. It costs far

too much, and is destroying other and more useful possibilities for

space activity. Since the Shuttle takes up half NASA's current budget,

the Shuttle and the Space Station together will devour most *all* of

NASA's budget for *years to come* -- barring unlikely large-scale

increases in funding.

Even as a political stage-show, the Space Station is a bad bet,

because the Space Station cannot capture the public imagination.

Very few people are honestly excited about this prospect. The Soviets

*already have* a space station. They've had a space station for years

now. Nobody cares about it. It never gets headlines. It inspires not

awe but tepid public indifference. Rumor has it that the Soviets (or

rather, the *former* Soviets) are willing to sell their "Space Station

Peace" to any bidder for eight hundred million dollars, about one

fortieth of what "Space Station Freedom" will cost -- and nobody can

be bothered to buy it!

Manned space exploration itself has been oversold. Space-

flight is simply not like other forms of "exploring." "Exploring"

generally implies that you're going to venture out someplace, and

tangle hand-to-hand with wonderful stuff you know nothing about.

Manned space flight, on the other hand, is one of the most closely

regimented of human activities. Most everything that is to happen on

a manned space flight is already known far in advance. (Anything not

predicted, not carefully calculated beforehand, is very likely to be a

lethal catastrophe.)

Reading the personal accounts of astronauts does not reveal

much in the way of "adventure" as that idea has been generally

understood. On the contrary, the historical and personal record

reveals that astronauts are highly trained technicians whose primary

motivation is not to "boldly go where no one has gone before," but

rather to do *exactly what is necessary* and above all *not to mess up

the hardware.*

Astronauts are not like Lewis and Clark. Astronauts are the

tiny peak of a vast human pyramid of earth-bound technicians and

mission micro-managers. They are kept on a very tight

(*necessarily* tight) electronic leash by Ground Control. And they

are separated from the environments they explore by a thick chrysalis

of space-suits and space vehicles. They don't tackle the challenges of

alien environments, hand-to-hand -- instead, they mostly tackle the

challenges of their own complex and expensive life-support

machinery.

The years of manned space-flight have provided us with the

interesting discovery that life in free-fall is not very good for people.

People in free-fall lose calcium from their bones -- about half a percent

of it per month. Having calcium leach out of one's bones is the same

grim phenomenon that causes osteoporosis in the elderly --

"dowager's hump." It makes one's bones brittle. No one knows quite

how bad this syndrome can get, since no one has been in orbit much

longer than a year; but after a year, the loss of calcium shows no

particular sign of slowing down. The human heart shrinks in free-

fall, along with a general loss of muscle tone and muscle mass. This

loss of muscle, over a period of months in orbit, causes astronauts and

cosmonauts to feel generally run-down and feeble.
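
To put that half-percent figure in perspective, here's a
back-of-the-envelope calculation -- mine, not NASA's -- assuming the
loss simply compounds at a constant monthly rate:

    # Bone-calcium loss at 0.5% per month, compounded over a year in orbit.
    monthly_loss = 0.005
    remaining = (1 - monthly_loss) ** 12
    print(f"calcium remaining after 12 months: {remaining:.1%}")  # ~94.2%
    print(f"cumulative loss: {1 - remaining:.1%}")                # ~5.8%

Call it six percent of one's skeletal calcium gone in a year -- with,
as noted, no particular sign of slowing down.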

There are other syndromes as well. Lack of gravity causes

blood to pool in the head and upper chest, producing the pumpkin-

faced look familiar from Shuttle videos. Eventually, the body reacts

to this congestion by reducing the volume of blood. The long-term

effects of this are poorly understood. About this time, red blood cell

production falls off in the bone marrow. Those red blood cells which

are produced in free-fall tend to be interestingly malformed.

And then, of course, there's the radiation hazard. No one in

space has been severely nuked yet, but if a solar flare caught a crew in

deep space, the results could be lethal.

These are not insurmountable medical challenges, but they

*are* real problems in real-life space experience. Actually, it's rather

surprising that an organism that evolved for billions of years in

gravity can survive *at all* in free-fall. It's a tribute to human

strength and plasticity that we can survive and thrive for quite a

while without any gravity. However, we now know what it would be

like to settle in space for long periods. It's neither easy nor pleasant.

And yet, NASA is still committed to putting people in space.

They're not quite sure why people should go there, nor what people

will do in space once they're there, but they are bound and determined

to do this despite all obstacles.

If there were big money to be made from settling people in

space, that would be a different prospect. A commercial career in

free-fall would probably be safer, happier, and more rewarding than,

say, bomb-disposal, or test-pilot work, or maybe even coal-mining.

But the only real moneymaker in space commerce (to date, at least) is

the communications satellite industry. The comsat industry wants

nothing to do with people in orbit.

Consider this: it costs $200 million to make one shuttle flight.

For $200 million you can start your own communications satellite

business, just like GE, AT&T, GTE and Hughes Aircraft. You can join

the global Intelsat consortium and make a hefty 14% regulated profit

in the telecommunications business, year after year. You can do quite

well by "space commerce," thank you very much, and thousands of

people thrive today by commercializing space. But the Space Shuttle,

with humans aboard, costs $30 million a day! There's nothing you can

make or do on the Shuttle that will remotely repay that investment.

After years of Shuttle flights, there is still not one single serious

commercial industry anywhere whose business it is to rent workspace

or make products or services on the Shuttle.
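
A quick sanity check on those figures -- my arithmetic, using only
the numbers quoted above:

    # $200 million per flight at $30 million per day implies missions of
    # about a week, roughly the length of a typical Shuttle flight.
    flight_cost = 200e6   # dollars per Shuttle flight
    daily_cost = 30e6     # dollars per day aloft
    print(f"implied mission length: {flight_cost / daily_cost:.1f} days")  # 6.7

    # The comsat alternative: a regulated 14% return on that same stake,
    # year after year.
    print(f"annual return at 14%: ${0.14 * flight_cost / 1e6:.0f} million")  # $28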

The era of manned spectaculars is visibly dying by inches. It's

interesting to note that a quarter of the top and middle management

of NASA, the heroes of Apollo and the stalwarts of tradition, are

currently eligible for retirement. By the turn of the century, more than

three-quarters of the old guard will be gone.

This grim and rather cynical recital may seem a dismal prospect

for space enthusiasts, but the situation's not actually all that dismal at

all. In the meantime, unmanned space development has quietly

continued apace. It's a little-known fact that America's *military*

space budget today is *twice the size* of NASA's entire budget! This

is the poorly publicized, hush-hush, national security budget for

militarily vital technologies like America's "national technical means

of verification," i.e. spy satellites. And then there are military

navigational aids like Navstar, a relatively obscure but very

impressive national asset. The much-promoted Strategic Defense

Initiative is a Cold War boondoggle, and SDI is almost surely not long

for this world, in either budgets or rhetoric -- but both Navstar and

spy satellites have very promising futures, in and/or out of the

military. They promise and deliver solid and useful achievements,

and are in no danger of being abandoned.

And communications satellites have come a very long way since

Telstar; the Intelsat 6 model, for instance, can carry thirty thousand

simultaneous phone calls plus three channels of cable television.

There is enormous room for technical improvement in comsat

technologies; they have a well-established market, much pent-up

demand, and are likely to improve drastically in the future. (The

satellite launch business is no longer a superpower monopoly; comsats

are being launched by Chinese and Europeans. Newly independent

Kazakhstan, home of the Soviet launching facilities at Baikonur, is

anxious to enter the business.)

Weather satellites have proven vital to public safety and

commercial prosperity. NASA or no NASA, money will be found to

keep weather satellites in orbit and improve them technically -- not

for reasons of national prestige or flag-waving status, but because it

makes a lot of common sense and it really pays.

But a look at the budget decisions for 1992 shows that the

Apollo Paradigm still rules at NASA. NASA is still utterly determined

to put human beings in space, and actual space science gravely suffers

for this decision. Planetary exploration, life science missions, and

astronomical surveys (all unmanned) have been cancelled, or

curtailed, or delayed in the 1992 budget. All this, in the hope of

continuing the big-ticket manned 50-billion-dollar Space Shuttle, and

of building the manned 30-billion-dollar Space Station Freedom.

The dire list of NASA's sacrifices for 1992 includes an asteroid

probe; an advanced x-ray astronomy facility; a space infrared

telescope; and an orbital unmanned solar laboratory. We would have

learned a very great deal from these projects (assuming that they

would have actually worked). The Shuttle and the Station, in stark

contrast, will show us very little that we haven't already seen.

There is nothing inevitable about these decisions, about this

strategy. With imagination, with a change of emphasis, the

exploration of space could take a very different course.

In 1951, when writing his seminal non-fiction work THE

EXPLORATION OF SPACE, Arthur C. Clarke created a fine

imaginative scenario of unmanned spaceflight.

"Let us imagine that such a vehicle is circling Mars," Clarke

speculated. "Under the guidance of a tiny yet extremely complex

electronic brain, the missile is now surveying the planet at close

quarters. A camera is photographing the landscape below, and the

resulting pictures are being transmitted to the distant Earth along a

narrow radio beam. It is unlikely that true television will be possible,

with an apparatus as small as this, over such ranges. The best that

could be expected is that still pictures could be transmitted at intervals

of a few minutes, which would be quite adequate for most purposes."

This is probably as close as a science fiction writer can come to

true prescience. It's astonishingly close to the true-life facts of the

early Mars probes. Mr. Clarke well understood the principles and

possibilities of interplanetary rocketry, but like the rest of mankind in

1951, he somewhat underestimated the long-term potentials of that

"tiny but extremely complex electronic brain" -- as well as that of

"true television." In the 1990s, the technologies of rocketry have

effectively stalled; but the technologies of "electronic brains" and

electronic media are exploding exponentially.

Advances in computers and communications now make it

possible to speculate on the future of "space exploration" along

entirely novel lines. Let us now imagine that Mars is under thorough

exploration, sometime in the first quarter of the twenty-first century.

However, there is no "Martian colony." There are no three-stage

rockets, no pressure-domes, no tractor-trailers, no human settlers.

Instead, there are hundreds of insect-sized robots, every one of

them equipped not merely with "true television," but something much

more advanced. They are equipped for *telepresence.* A human

operator can see what they see, hear what they hear, even guide them

about at will (granted, of course, that there is a steep transmission

lag). These micro-rovers, crammed with cheap microchips and laser

photo-optics, are so exquisitely monitored that one can actually *feel*

the Martian grit beneath their little scuttling claws. Piloting one of

these babies down the Valles Marineris, or perhaps some unknown

cranny of the Moon -- now *that* really feels like "exploration." If

they were cheap enough, you could dune-buggy them.
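
How steep is that transmission lag? A rough reckoning, sketched in Python, puts numbers on it; the distances are approximate orbital extremes, not figures from any mission plan.

    # One-way radio delay between Earth and Mars, at light speed.
    C = 299_792_458     # speed of light, m/s
    AU = 1.496e11       # astronomical unit, meters

    # Earth-Mars separation swings between roughly 0.37 and 2.7 AU.
    for label, dist_au in [("closest approach", 0.37),
                           ("opposite sides of the Sun", 2.7)]:
        minutes = dist_au * AU / C / 60
        print(f"{label}: about {minutes:.0f} minutes one way")

Three minutes one way at best, over twenty at worst: a Mars micro-rover is steered by patient suggestion, not by joystick reflex. The Moon, at about a second and a quarter each way, comes much closer to the true dune-buggy feel.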

No one lives in space stations, in this scenario. Instead, our

entire solar system is saturated with cheap monitoring devices. There

are no "rockets" any more. Most of these robot surrogates weigh less

than a kilogram. They are fired into orbit by small rail-guns mounted

on high-flying aircraft. Or perhaps they're launched by laser-ignition:

ground-based heat-beams that focus on small reaction-chambers and

provide their thrust. They might even be literally shot into orbit by

Jules Vernian "space guns" that use the intriguing, dirt-cheap

technology of Gerald Bull's Iraqi "super-cannon." This wacky but

promising technique would be utterly impractical for launching human

beings, since the acceleration g-load would shatter every bone in their

bodies; but these little machines are *tough.*
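
The physics of smallness is what makes such exotic launchers thinkable at all. A purely illustrative calculation, which ignores drag and climb losses and is therefore a floor rather than a launch budget, shows the kinetic energy that low Earth orbit demands of a one-kilogram machine:

    # Kinetic energy of a 1 kg probe at rough low-Earth-orbit speed.
    # Drag and climb losses are ignored, so this is a lower bound.
    m = 1.0           # kg -- "less than a kilogram," per the scenario
    v_leo = 7800.0    # m/s, approximate orbital speed in low Earth orbit

    energy_mj = 0.5 * m * v_leo**2 / 1e6
    print(f"about {energy_mj:.0f} MJ -- roughly {energy_mj * 1e6 / 3.6e6:.0f} kWh")

Thirty megajoules is about the chemical energy in a liter of gasoline. For a kilogram-class probe, the expense lies in the launcher, not in the physics.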

And small robots have many other advantages. Unlike manned

craft, robots can go into harm's way: into Jupiter's radiation belts, or

into the shrapnel-heavy rings of Saturn, or onto the acid-bitten

smoldering surface of Venus. They stay on their missions,

operational, not for mere days or weeks, but for decades. They are

extensions, not of human population, but of human senses.

And because they are small and numerous, they should be

cheap. The entire point of this scenario is to create a new kind of

space-probe that is cheap, small, disposable, and numerous: as cheap

and disposable as their parent technologies, microchips and video,

while taking advantage of new materials like carbon-fiber, fiber-

optics, ceramic, and artificial diamond.

The core idea of this particular vision is "fast, cheap, and out of

control." Instead of gigantic, costly, ultra-high-tech, one-shot efforts

like NASA's Hubble Telescope (crippled by bad optics) or NASA's

Galileo (currently crippled by a flaw in its communications antenna)

these micro-rovers are cheap, and legion, and everywhere. They get

crippled every day; but it doesn't matter much; there are hundreds

more, and no one's life is at stake. People, even quite ordinary people,

*rent time on them* in much the same way that you would pay for

satellite cable-TV service. If you want to know what Neptune looks

like today, you just call up a data center and *have a look for

yourself.*

This is a concept that would truly involve "the public" in space

exploration, rather than the necessarily tiny elite of astronauts. This

is a potential benefit that we might derive from abandoning the

expensive practice of launching actual human bodies into space. We

might find a useful analogy in the computer revolution: "mainframe"

space exploration, run by a NASA elite in labcoats, is replaced by a

"personal" space exploration run by grad students and even hobbyists.

In this scenario, "space exploration" becomes similar to other

digitized, computer-assisted media environments: scientific

visualization, computer graphics, virtual reality, telepresence. The

solar system is saturated, not by people, but by *media coverage.*

Outer space becomes *outer cyberspace.*

Whether this scenario is "realistic" isn't clear as yet. It's just a

science-fictional dream, a vision for the exploration of space:

*circumsolar telepresence.* As always, much depends on

circumstance, lucky accidents, and imponderables like political will.

What does seem clear, however, is that NASA's own current plans are

terribly far-fetched: they have outlived all contact with the political,

economic, social and even technical realities of the 1990s. There is no

longer any real point in shipping human beings into space in order to

wave flags.

"Exploring space" is not an "unrealistic" idea. That much, at

least, has already been proven. The struggle now is over why and

how and to what end. True, "exploring space" is not as "important"

as was the life-and-death Space Race struggle for Cold War pre-

eminence. Space science cannot realistically expect to command the

huge sums that NASA commanded in the service of American political

prestige. That era is simply gone; it's history now.

However: astronomy does count. There is a very deep and

genuine interest in these topics. An interest in the stars and planets is

not a fluke, it's not freakish. Astronomy is the most ancient of human

sciences. It's deeply rooted in the human psyche, has great historical

continuity, and is spread all over the world. It has its own

constituency, and if its plans were modest and workable, and played

to visible strengths, they might well succeed brilliantly.

The world doesn't actually need NASA's billions to learn about

our solar system. Real, honest-to-goodness "space exploration"

never got more than a fraction of NASA's budget in the first place.

Projects of this sort would no longer be created by gigantic

federal military-industrial bureaucracies. Micro-rover projects could

be carried out by universities, astronomy departments, and small-

scale research consortia. Such projects would play to the impressive strengths

of the thriving communications and computer tech of the nineties,

rather than the dying, centralized, militarized, politicized rocket-tech

of the sixties.

The task at hand is to create a change in the climate of opinion

about the true potentials of "space exploration." Space exploration,

like the rest of us, grew up in the Cold War; like the rest of us, it must

now find a new way to live. And science fiction, as history has proven, has a very real and influential role in space exploration. True space exploration is not about budgets. At its heart, it has always been about vision.

Let's create the vision.

BUCKYMANIA

From THE MAGAZINE OF FANTASY AND SCIENCE FICTION, July 1992

F&SF Box 56 Cornwall CT 06753 $26/yr; outside USA $31/yr

F&SF Column #2

Carbon, like every other element on this planet, came to us from

outer space. Carbon and its compounds are well-known in galactic

gas-clouds, and in the atmospheres and cores of stars, which burn

helium to produce carbon. Carbon is the sixth element in the periodic

table, and forms about two-tenths of one percent of Earth's crust.

Earth's biosphere (most everything that grows, moves, breathes,

photosynthesizes, or reads F&SF) is constructed mostly of

waterlogged carbon, with a little nitrogen, phosphorus and such for

leavening.

There are over a million known and catalogued compounds of

carbon: the study of these compounds, and their profuse and intricate

behavior, forms the major field of science known as organic

chemistry.

Since prehistory, "pure" carbon has been known to humankind

in three basic flavors. First, there's smut (lampblack or "amorphous

carbon"). Then there's graphite: soft, grayish-black, shiny stuff --

(pencil "lead" and lubricant). And third is that surpassing anomaly,

"diamond," which comes in extremely hard translucent crystals.

Smut is carbon atoms that are poorly linked. Graphite is carbon

atoms neatly linked in flat sheets. Diamond is carbon linked in strong,

regular, three-dimensional lattices: tetrahedra that form ultrasolid

little carbon pyramids.

Today, however, humanity rejoices in possession of a fourth

and historically unprecedented form of carbon. Researchers have

created an entire class of these simon-pure carbon molecules, now

collectively known as the "fullerenes." They were named in August

1985, in Houston, Texas, in honor of the American engineer, inventor,

and delphically visionary philosopher, R. Buckminster Fuller.

"Buckminsterfullerene," or C60, is the best-known fullerene.

It's very round, the roundest molecule known to science. Sporting

what is technically known as "truncated icosahedral structure," C60 is

the most symmetric molecule possible in three-dimensional Euclidean

space. Each and every molecule of "Buckminsterfullerene" is a

hollow, geodesic sphere of sixty carbon atoms, all identically linked in

a spherical framework of twelve pentagons and twenty hexagons.

This molecule looks exactly like a common soccerball, and was

therefore nicknamed a "buckyball" by delighted chemists.
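
The soccerball arithmetic checks out exactly. A few lines of Python verify that twelve pentagons and twenty hexagons, with three faces meeting at every atom, force precisely sixty atoms, and that the cage closes, per Euler's polyhedron formula V - E + F = 2:

    # Verify the C60 cage geometry quoted above.
    pentagons, hexagons = 12, 20
    faces = pentagons + hexagons
    face_corners = pentagons * 5 + hexagons * 6    # 180 corner slots in all
    vertices = face_corners // 3    # three faces meet at each carbon atom
    edges = face_corners // 2       # every edge is shared by two faces

    print(f"carbon atoms: {vertices}")               # 60 -- hence C60
    print(f"V - E + F = {vertices - edges + faces}") # 2: a closed polyhedron

Euler's formula in fact demands exactly twelve pentagons in any closed cage built of pentagons and hexagons, however large; keep that in mind when the giant buckyballs turn up below.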

A free buckyball rotates merrily through space at one hundred

million revolutions per second. It's just over one nanometer across.

Buckminsterfullerene by the gross forms a solid crystal, is stable at

room temperature, and is an attractive mustard-yellow color. Crystallized buckyballs stack very much like pool balls, and are as

soft as graphite. It's thought that buckyballs will make good

lubricants -- something like molecular ball bearings.

When compressed, crystallized buckyballs squash and flatten

readily, down to about seventy percent of their volume. They then

refuse to compress any further and become extremely hard. Just *how*

hard is not yet established, but according to chemical theory,

compressed buckyballs may be considerably harder than diamond.

They may make good shock absorbers, or good armor.

But this is only the beginning of carbon's multifarious oddities in

the playful buckyball field. Because buckyballs are hollow, their

carbon framework can be wrapped around other, entirely different

atoms, forming neat molecular cages. This has already been

successfully done with certain metals, creating the intriguing new

class of "metallofullerites." Then there are buckyballs with a carbon or

two knocked out of the framework, and replaced with metal atoms.

This "doping" process yields a galaxy of so-called "dopeyballs." Some

of these dopeyballs show great promise as superconductors. Other

altered buckyballs seem to be organic ferromagnets.

A thin film of buckyballs can double the frequency of laser light

passing through it. Twisted or deformed buckyballs might act as

optical switches for future fiber-optic networks. Buckyballs with

dangling branches of nickel, palladium, or platinum may serve as new

industrial catalysts.

The electrical properties of buckyballs and their associated

compounds are very unusual, and therefore very promising. Pure C60

is an insulator. Add three potassium atoms, and it becomes a low-

temperature superconductor. Add three more potassium atoms, and it

becomes an insulator again! There's already excited talk in industry of

making electrical batteries out of buckyballs.

Then there are the "buckybabies:" C28, C32, C44, and C52. The

lumpy, angular buckybabies have received very little study to date,

and heaven only knows what they're capable of, especially when

doped, bleached, twisted, frozen or magnetized. And then there are

the *big* buckyballs: C240, C540, C960. Molecular models of these

monster buckyballs look like giant chickenwire beachballs.

There doesn't seem to be any limit to the upper size of a

buckyball. If wrapped around one another for internal support,

buckyballs can (at least theoretically) accrete like pearls. A truly

titanic buckyball might be big enough to see with the naked eye.

Conceivably, it might even be big enough to kick around on a playing

field, if you didn't mind kicking an anomalous entity with unknown

physical properties.

Carbon-fiber is a high-tech construction material which has

been seeing a lot of use lately in tennis rackets, bicycles, and high-

performance aircraft. It's already the strongest fiber known. This

makes the discovery of "buckytubes" even more striking. A buckytube

is carbon-fiber with a difference: it's a buckyball extruded into a long

continuous cylinder comprised of one single superstrong molecule.

C70, a buckyball cousin shaped like a rugby ball, seems to be

useful in producing high-tech films of artificial diamond. Then there

are "fuzzyballs" with sixty strands of hydrogen hair, "bunnyballs"

with twin ears of butylpyridine, fluorinated "teflonballs" that may be

the slipperiest molecules ever produced.

This sudden wealth of new high-tech slang indicates the

potential riches of this new and multidisciplinary field of study, where

physics, electronics, chemistry and materials-science are all

overlapping, right now, in an exhilarating microsoccerball

scrimmage.

Today there are more than fifty different teams of scientists

investigating buckyballs and their relations, including industrial

heavy-hitters from AT&T, IBM and Exxon. SCIENCE magazine

voted buckminsterfullerene "Molecule of the Year" in 1991. Buckyball

papers have also appeared in NATURE, NEW SCIENTIST,

SCIENTIFIC AMERICAN, even FORTUNE and BUSINESS WEEK.

Buckyball breakthroughs are coming well-nigh every week, while the

fax machines sizzle in labs around the world. Buckyballs are strange,

elegant, beautiful, very intellectually sexy, and will soon be

commercially hot.

In chemical terms, the discovery of buckminsterfullerene -- a

carbon sphere -- may well rank with the discovery of the benzene ring

-- a carbon ring -- in the 19th century. The benzene ring (C6H6)

brought the huge field of aromatic chemistry into being, and with it an

enormous number of industrial applications.

But what was this "discovery," and how did it come about?

In a sense, like carbon itself, buckyballs also came to us from

outer space. Donald Huffman and Wolfgang Kratschmer were

astrophysicists studying interstellar soot. Huffman worked for the

University of Arizona in Tucson, Kratschmer for the Max Planck

Institute in Heidelberg. In 1982, these two gentlemen were

superheating graphite rods in a low-pressure helium atmosphere,

trying to replicate possible soot-making conditions in the atmosphere

of red-giant stars. Their experiment was run in a modest bell-jar

zapping apparatus about the size and shape of a washing-machine.

Among a great deal of black gunk, they actually manufactured

minuscule traces of buckminsterfullerene, which behaved oddly in their

spectrometer. At the time, however, they didn't realize what they

had.

In 1985, buckminsterfullerene surfaced again, this time in a

high-tech laser-vaporization cluster-beam apparatus. Robert Curl

and Richard Smalley, two professors of chemistry at Rice University

in Houston, knew that a round carbon molecule was theoretically

possible. They even knew that it was likely to be yellow in color. And

in August 1985, they made a few nanograms of it, detected it with

mass spectrometers, and had the honor of naming it, along with their

colleagues Harry Kroto, Jim Heath and Sean O'Brien.

In 1985, however, there wasn't enough buckminsterfullerene

around to do much more than theorize about. It was "discovered,"

and named, and argued about in scientific journals, and was an

intriguing intellectual curiosity. But this exotic substance remained

little more than a lab freak.

And there the situation languished. But in 1988, Huffman and

Kratschmer, the astrophysicists, suddenly caught on: this "C60" from

the chemists in Houston was probably the very same stuff they'd

made by a different process, back in 1982. Harry Kroto, who had

moved to the University of Sussex in the meantime, replicated their

results in his own machine in England, and was soon producing

enough buckminsterfullerene to actually weigh on a scale, and

measure, and purify!

The Huffman/Kratschmer process made buckminsterfullerene

by whole milligrams. Wow! Now the entire arsenal of modern

chemistry could be brought to bear: X-ray diffraction,

crystallography, nuclear magnetic resonance, chromatography. And

results came swiftly, and were published. Not only were buckyballs

real, they were weird and wonderful.

In 1990, the Rice team discovered a yet simpler method to make

buckyballs, the so-called "fullerene factory." In a thin helium

atmosphere inside a metal tank, a graphite rod is placed near a

graphite disk. Enough simple, brute electrical power is blasted

through the graphite to generate an electrical arc between the disk

and the tip of the rod. When the end of the rod boils off, you just crank

the stub a little closer and turn up the juice. The resultant exotic soot,

which collects on the metal walls of the chamber, is up to 45 percent

buckyballs.

In 1990, the buckyball field flung open its stadium doors for

anybody with a few gas-valves and enough credit for a big electric

bill. These buckyball "factories" sprang up all over the world in 1990

and '91. The "discovery" of buckminsterfullerene was not the big kick-

off in this particular endeavor. What really counted was the budget,

the simplicity of manufacturing. It wasn't the intellectual

breakthrough that made buckyballs a sport -- it was the cheap ticket in

through the gates. With cheap and easy buckyballs available, the

research scene exploded.

Sometimes Science, like other overglamorized forms of human

endeavor, marches on its stomach.

As I write this, pure buckyballs are sold commercially for about

$2000 a gram, but the market price is in free-fall. Chemists suggest

that buckminsterfullerene will be as cheap as aluminum some day soon

-- a few bucks a pound. Buckyballs will be a bulk commodity, like

oatmeal. You may even *eat* them some day -- they're not

poisonous, and they seem to offer a handy way to package certain

drugs.

Buckminsterfullerene may have been "born" in an interstellar

star-lab, but it'll become a part of everyday life, your life and my life,

like nylon, or latex, or polyester. It may become more famous, and

will almost certainly have far more social impact, than Buckminster

Fuller's own geodesic domes, those glamorously high-tech structures

of the 60s that were the prophetic vision for their molecule-size

counterparts.

This whole exciting buckyball scrimmage will almost certainly

bring us amazing products yet undreamt-of, everything from grease

to superhard steels. And, inevitably, it will bring a concomitant set of

new problems -- buckyball junk, perhaps, or bizarre new forms of

pollution, or sinister military applications. This is the way of the

world.

But maybe the most remarkable thing about this peculiar and

elaborate process of scientific development is that buckyballs never

were really "exotic" in the first place. Now that sustained attention

has been brought to bear on the phenomenon, it appears that

buckyballs are naturally present -- in tiny amounts, that is -- in almost

any sooty, smoky flame. Buckyballs fly when you light a candle, they

flew when Bogie lit a cigarette in "Casablanca," they flew when

Neanderthals roasted mammoth fat over the cave fire. Soot we knew

about, diamonds we prized -- but all this time, carbon, good ol'

Element Six, has had a shocking clandestine existence. The "secret"

was always there, right in the air, all around all of us.

But when you come right down to it, it doesn't really matter

how we found out about buckyballs. Accidents are not only fun, but

crucial to the so-called march of science, a march that often moves

fastest when it's stumbling down some strange gully that no one knew

existed. Scientists are human beings, and human beings are flexible:

not a hard, rigidly locked crystal like diamond, but a resilient network.

It's a legitimate and vital part of science to recognize the truth -- not

merely when looking for it with brows furrowed and teeth clenched,

but when tripping over it headlong.

Thanks to science, we did find out the truth. And now it's all

different. Because now we know!

THINK OF THE PRESTIGE

From THE MAGAZINE OF FANTASY AND SCIENCE FICTION, Sept 1992.

F&SF, Box 56, Cornwall CT 06753 $26/yr; outside US $31/yr

F&SF Science Column #3

The science of rocketry, and the science of weaponry, are sister

sciences. It's been cynically said of German rocket scientist Wernher

von Braun that "he aimed at the stars, and hit London."

After 1945, Wernher von Braun made a successful transition to

American patronage and, eventually, to civilian space exploration.

But another ambitious space pioneer -- an American citizen -- was

not so lucky as von Braun, though his equal in scientific talent. His

story, by comparison, is little known.

Gerald Vincent Bull was born on March 9, 1928, in Ontario,

Canada. He died in 1990. Dr. Bull was the most brilliant artillery

scientist of the twentieth century. Bull was a prodigiously gifted

student, and earned a Ph.D. in aeronautical engineering at the age of 24.

Bull spent the 1950s researching supersonic aerodynamics in

Canada, personally handcrafting some of the most advanced wind-

tunnels in the world.

Bull's work, like that of his predecessor von Braun, had military

applications. Bull found patronage with the Canadian Armament

Research and Development Establishment (CARDE) and the

Canadian Defence Research Board.

However, Canada's military-industrial complex lacked the

panache, and the funding, of that of the United States. Bull, a

visionary and energetic man, grew impatient with what he considered

the pedestrian pace and limited imagination of the Canadians. As an

aerodynamics scientist for CARDE, Bull's salary in 1959 was only

$17,000. In comparison, in 1961 Bull earned $100,000 by consulting for

the Pentagon on nose-cone research. It was small wonder that by the

early 1960s, Bull had established lively professional relationships with

the US Army's Ballistics Research Laboratory (as well as the Army's

Redstone Arsenal, Wernher von Braun's own postwar stomping

grounds).

It was the great dream of Bull's life to fire cannon projectiles

from the earth's surface directly into outer space. Amazingly, Dr.

Bull enjoyed considerable success in this endeavor. In 1961, Bull

established Project HARP (High Altitude Research Project). HARP

was an academic, nonmilitary research program, funded by McGill

University in Montreal, where Bull had become a professor in the

mechanical engineering department. The US Army's Ballistic

Research Lab was a quiet but very useful co-sponsor of HARP; the US

Army was especially generous in supplying Bull with obsolete military

equipment, including cannon barrels and radar.

Project HARP found a home on the island of Barbados,

downrange of its much better-known (and vastly better-financed)

rival, Cape Canaveral. In Barbados, Bull's gigantic space-cannon

fired its projectiles out to an ocean splashdown, with little risk of

public harm. Its terrific boom was audible all over Barbados, but the

locals were much pleased at their glamorous link to the dawning

Space Age.

Bull designed a series of new supersonic shells known as the

"Martlets." The Mark II Martlets were cylindrical finned projectiles,

about eight inches wide and five feet six inches long. They weighed

475 pounds. Inside the barrel of the space-cannon, a Martlet was

surrounded by a precisely machined wooden casing known as a

"sabot." The sabot soaked up combustive energy as the projectile

flew up the space-cannon's sixteen-inch, 118-ft long barrel. As it

cleared the barrel, the sabot split and the precisely streamlined

Martlet was off at over a mile per second. Each shot produced a huge

explosion and a plume of fire gushing hundreds of feet into the sky.

The Martlets were scientific research craft. They were

designed to carry payloads of metallic chaff, chemical smoke, or

meteorological balloons. They sported telemetry antennas for tracing

the flight.

By the end of 1965, the HARP project had fired over a hundred

such missiles over fifty miles high, into the ionosphere -- the airless

fringes of space. On November 19, 1966, the US Army's Ballistics

Research Lab, using a HARP gun designed by Bull, fired a 185-lb

Martlet missile one hundred and eleven miles high. This was, and

remains, a world altitude record for any fired projectile. Bull now

entertained ambitious plans for a Martlet Mark IV, a rocket-assisted

projectile that would ignite in flight and drive itself into actual orbit.

Ballistically speaking, space cannon offer distinct advantages

over rockets. Rockets must lift, not only their own weight, but the

weight of their fuel and oxidizer. Cannon "fuel," which is contained

within the gunbarrel, offers far more explosive bang for the buck than

rocket fuel. Cannon projectiles are very accurate, thanks to the fixed

geometry of the gun-barrel. And cannon are far simpler and cheaper

than rockets.

There are grave disadvantages, of course. First, the payload

must be slender enough to fit into a gun-barrel. The most severe

drawback is the huge acceleration force of a cannon blast, which in the

case of Bull's exotic arsenal could top 10,000 Gs. This rules out

manned flights from the mouth of space-cannon. Jules Verne

overlooked this unpoetic detail when he wrote his prescient tale of

space artillery, FROM THE EARTH TO THE MOON (1865). (Dr. Bull

was fascinated by Verne, and often spoke of Verne's science fiction as

one of the foremost inspirations of his youth.)
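
The numbers behind both the promise and the g-load problem are easy to estimate. The following sketch (Python; simple drag-free ballistics, so real shots need more muzzle velocity and suffer proportionally worse loads) treats the HARP record: the speed needed to coast up 111 miles, and the average acceleration that implies along a 118-foot barrel.

    import math

    G0 = 9.81                  # standard gravity, m/s^2
    apogee = 111 * 1609.34     # 111 miles, in meters
    barrel = 118 * 0.3048      # the 118-ft HARP barrel, in meters

    # Drag-free minimum muzzle velocity to coast to apogee: v = sqrt(2*g*h)
    v = math.sqrt(2 * G0 * apogee)
    print(f"muzzle velocity: at least {v:.0f} m/s ({v / 1609.34:.2f} miles/s)")

    # Average acceleration over the barrel length: a = v^2 / (2*L)
    a = v**2 / (2 * barrel)
    print(f"average in-barrel acceleration: about {a / G0:,.0f} g")

Even this idealized average runs to nearly five thousand g, and the peak at ignition is far higher, which squares with the 10,000-G figure above. Electronics can be hardened to ride that out; passengers cannot.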

Bull was determined to put a cannon-round into orbit. This

burning desire of his was something greater than any merely

pragmatic or rational motive. The collapse of the HARP project in

1967 left Bull in command of his own fortunes. He reassembled the

wreckage of his odd academic/military career, and started a

commercial operation, "Space Research Corporation." In the years

to follow, Bull would try hard to sell his space-cannon vision to a

number of sponsors, including NATO, the Pentagon, Canada, China,

Israel, and finally, Iraq.

In the meantime, the Vietnam War was raging. Bull's

researches on projectile aerodynamics had made him, and his

company Space Research Corporation, into a hot military-industrial

property. In pursuit of space research, Bull had invented techniques

that lent much greater range and accuracy to conventional artillery

rounds. With Bull's ammunition, for instance, US Naval destroyers

would be able to cruise miles off the shore of North Vietnam,

destroying the best Russian-made shore batteries without any fear of

artillery retaliation. Bull's Space Research Corporation was

manufacturing the necessary long-range shells in Canada, but his lack

of American citizenship was a hindrance in the Pentagon arms trade.

Such was Dr. Bull's perceived strategic importance that this

hindrance was neatly avoided; with the sponsorship of Senator Barry

Goldwater, Bull became an American citizen by act of Congress. This

procedure was a rare honor, previously reserved only for Winston

Churchill and the Marquis de Lafayette.

Despite this Senatorial fiat, however, the Navy arms deal

eventually fell through. But although the US Navy scorned Dr. Bull's

wares, others were not so short-sighted. Bull's extended-range

ammunition, and the murderously brilliant cannon that he designed to

fire it, found ready markets in Egypt, Israel, Holland, Italy, Britain,

Canada, Venezuela, Chile, Thailand, Iran, South Africa, Austria and

Somalia.

Dr. Bull created a strange private reserve on the Canadian-

American border: a private arms manufactory with its own US and

Canadian customs units. This arrangement was very useful, since the

arms-export laws of the two countries differed, and SRC's military

products could be shipped out over either national border at will. In

this distant enclave on the rural northern border of Vermont, the

arms genius built his own artillery range, his own telemetry towers

and launch-control buildings, his own radar tracking station,

workshops, and machine shops. At its height, the Space Research

Corporation employed over three hundred people at this site, and

boasted some $15 million worth of advanced equipment.

The downfall of HARP had left Bull disgusted with the

government-supported military-scientific establishment. He referred

to government researchers as "clowns" and "cocktail scientists," and

decided that his own future must lie in the vigorous world of free

enterprise. Instead of exploring the upper atmosphere, Bull

dedicated his ready intelligence to the refining of lethal munitions.

Bull would not sell to the Soviets or their client states, whom he

loathed; but he would sell to most anyone else. Bull's cannon are

credited with being of great help to Jonas Savimbi's UNITA war in

Angola; they were also extensively used by both sides in the Iran-Iraq

war.

Dr. Gerald V. Bull, Space Researcher, had become a

professional arms dealer. Dr. Bull was not a stellar success as an

arms dealer, because he had no real head for business.

Like many engineers, Bull was obsessed not by entrepreneurial drive,

but by the exhilarating lure of technical achievement. The

atmosphere at Space Research Corporation was, by all accounts, very

collegial; Bull as professor, employees as cherished grad-students.

Bull's employees were fiercely loyal to him and felt that he was

brilliantly gifted and could accomplish anything.

SRC was never as great a commercial success as Bull's

technical genius merited. Bull stumbled badly in 1980. The Carter

Administration, annoyed by Bull's extensive deals with the South

African military, put Bull in prison for customs violation. This

punishment, rather than bringing Bull "to his senses," affected him

traumatically. He felt strongly that he had been singled out as a

political scapegoat to satisfy the hypocritical, left-leaning, anti-

apartheid bureaucrats in Washington. Bull spent seven months in an

American prison, reading extensively, and, incidentally, successfully

re-designing the prison's heating-plant. Nevertheless, the prison

experience left Bull embittered and cynical. While still in prison, Bull

was already accepting commercial approaches from the Communist

Chinese, who proved to be among his most avid customers.

After his American prison sentence ended, Bull abandoned his

strange enclave on the US-Canadian border to work full-time in

Brussels, Belgium. Space Research Corporation was welcomed there,

in Europe's foremost nexus of the global arms trade, a city where

almost anything goes in the way of merchandising war.

In November 1987, Bull was politely contacted in Brussels by the

Iraqi Embassy, and offered an all-expenses-paid trip to Baghdad.

From 1980 to 1989, during their prolonged, lethal, and highly

inconclusive war with Iran, Saddam Hussein's regime had spent some

eighty billion dollars on weapons and weapons systems. Saddam

Hussein was especially fond of his Soviet-supplied "Scud" missiles,

which had shaken Iranian morale severely when fired into civilian

centers during the so-called "War of the Cities." To Saddam's mind,

the major trouble with his Scuds was their limited range and accuracy,

and he had invested great effort in gathering the tools and manpower

to improve the Iraqi art of rocketry.

The Iraqis had already bought many of Bull's 155-millimeter

cannon from the South Africans and the Austrians, and they were

most impressed. Thanks to Bull's design genius, the Iraqis actually

owned better, more accurate, and longer-range artillery than the

United States Army did.

Bull did not want to go to jail again, and was reluctant to break

the official embargo on arms shipments to Iraq. He told his would-be

sponsors so, in Baghdad, and the Iraqis were considerate of their

guest's qualms. To Bull's great joy, they took his idea of a peaceful

space cannon very seriously. "Think of the prestige," Bull suggested to

the Iraqi Minister of Industry, and the thought clearly intrigued the

Iraqi official.

The Israelis, in September 1988, had successfully launched their

own Shavit rocket into orbit, an event that had much impressed, and

depressed, the Arab League. Bull promised the Iraqis a launch system

that could place dozens, perhaps hundreds, of Arab satellites into

orbit. *Small* satellites, granted, and unmanned ones; but their

launches would cost as little as five thousand dollars each. Iraq

would become a genuine space power; a minor one by superpower

standards, but the only Arab space power.

And even small satellites were not just for show. Even a minor

space satellite could successfully perform certain surveillance

activities. The American military had proved the usefulness of spy

satellites to Saddam Hussein by passing him spysat intelligence during

the worst heat of the Iran-Iraq war.

The Iraqis felt they would gain a great deal of widely

applicable, widely useful scientific knowledge from their association

with Bull, whether his work was "peaceful" or not. After all, it was

through peaceful research on Project HARP that Bull himself had

learned techniques that he had later sold for profit on the arms

market. The design of a civilian nose-cone, aiming for the stars, is

very little different from that of one descending with a supersonic

screech upon sleeping civilians in London.

For the first time in his life, Bull found himself the respected

client of a generous patron with vast resources -- and with an

imagination of a grandeur to match his own. By 1989, the Iraqis were

paying Bull and his company five million dollars a year to redesign

their field artillery, with much greater sums in the wings for "Project

Babylon" -- the Iraqi space-cannon. Bull had the run of ominous

weapons bunkers like the "Saad 16" missile-testing complex in north

Iraq, built under contract by Germans, and stuffed with gray-market

high-tech equipment from Tektronix, Scientific Atlanta and Hewlett-

Packard.

Project Babylon was Bull's grandest vision, now almost within

his grasp. The Iraqi space-launcher was to have a barrel five hundred

feet long, and would weigh 2,100 tons. It would be supported by a

gigantic concrete tower with four recoil mechanisms, these shock-

absorbers weighing sixty tons each. The vast, segmented cannon

would fire rocket-assisted projectiles the size of a phone booth, into

orbit around the Earth.

In August 1989, a smaller prototype, the so-called "Baby

Babylon," was constructed at a secret site in Jabal Hamrayn, in central

Iraq. "Baby Babylon" could not have put payloads into orbit, but it

would have had an international, perhaps intercontinental range.

The prototype blew up on its first test-firing.

The Iraqis continued undaunted on another prototype super-

gun, but their smuggling attempts were clumsy. Bull himself had little

luck in maintaining the proper discretion for a professional arms

dealer, as his own jailing had proved. When flattered, Bull talked;

and when he talked, he boasted.

Word began to leak out within the so-called "intelligence

community" that Bull was involved in something big; something to do

with Iraq and with missiles. Word also reached the Israelis, who were

very aware of Bull's scientific gifts, having dealt with him themselves,

extensively.

The Iraqi space cannon would have been nearly useless as a

conventional weapon. Five hundred feet long and completely

immobile, it would have been easy prey for any Israeli F-15. It would

have been impossible to hide, for any launch would have thrown a column

of flame hundreds of feet into the air, a blazing signal for any spy

satellite or surveillance aircraft. The Babylon space cannon, faced

with determined enemies, could have been destroyed after a single

launch.

However, that single launch might well have served to dump a

load of nerve gas, or a nuclear bomb, onto any capital in the world.

Bull wanted Project Babylon to be entirely peaceful; despite his

rationalizations, he was never entirely at ease with military projects.

What Bull truly wanted from his Project Babylon was *prestige.* He

wanted the entire world to know that he, Jerry Bull, had created a

working space program, more or less all by himself. He had never

forgotten what it meant to world opinion to hear the Sputnik beeping

overhead.

For Saddam Hussein, Project Babylon was more than any

merely military weapon: it was a *political* weapon. The prestige

Iraq might gain from the success of such a visionary leap was worth

any number of mere cannon-fodder battalions. It was Hussein's

ambition to lead the Arab world; Bull's cannon was to be a symbol of

Iraqi national potency, a symbol that the long war with the Shi'ite

mullahs had not destroyed Saddam's ambitions for transcendent

greatness.

The Israelis, however, had already proven their willingness to

thwart Saddam Hussein's ambitions by whatever means necessary.

In 1981, they had bombed his Osirak nuclear reactor into rubble. In

1980, a Mossad hit-team had cut the throat of Iraqi nuclear scientist

Yahya El Meshad, in a Paris hotel room.

On March 22, 1990, Dr. Bull was surprised at the door of his

Brussels apartment. He was shot five times, in the neck and in the

back of the head, with a silenced 7.65 millimeter automatic pistol.

His assassin has never been found.

FOR FURTHER READING:

ARMS AND THE MAN: Dr. Gerald Bull, Iraq, and the Supergun by

William Lowther (McClelland-Bantam, Inc., Toronto, 1991)

BULL'S EYE: The Assassination and Life of Supergun Inventor

Gerald Bull by James Adams (Times Books, New York, 1992)

From SF EYE #11 Dec 1992

Science Fiction Eye, P. O. Box 18539, Asheville NC 28814 (USA $10/yr; $15/yr global)

SF EYE CATSCAN #11:

"Sneaking For Jesus 2001"

Conspiracy fiction. I've come across a pair of especially remarkable works in this odd subgenre lately. Paul Di Filippo's treatment of the conspiracy subgenre, "My Brain Feels Like A Bomb" in SF EYE 8, collected some fine, colorful specimens. Di Filippo theorizes that the conspiracy subgenre, anchored at its high end by GRAVITY'S RAINBOW and FOUCAULT'S PENDULUM and at its low end by quite a lot of cheesy sci-fi and goofy spy thrillers, is unique to the twentieth century, and bred by our modern (postmodern?) inability to make sense of an overwhelming flow of high-velocity information.

This may be true. I'm not inclined to challenge that sociological assessment, and can even offer some backup evidence. Where is that postmodern flow of information more intense, and less basically comprehensible, than in the world of computing? Thus is bred the interesting sub-subgenre of computer paranoia fiction - hacker conspiracy! I now propose to examine two such works: the movie (and book) SNEAKERS, and the novel (and prophecy?) THE ILLUMINATI.

Let's take the second item first, as it's much the more remarkable of the two. The ILLUMINATI in question today has nothing to do with the Robert Anton Wilson ILLUMINATI series; in fact, its weltanschauung is utterly at odds with Wilson's books. Wilson's paranoid yarn is basically a long, rambling, crypto-erudite hipster rap-session, but Larry Burkett's ILLUMINATI is a fictional work of evangelical Christian exegesis, in which lesbians, leftists, dope addicts and other tools of Satan establish a gigantic government computer network in the year 2001, with which to exterminate all Southern Baptists.

I recommend this novel highly. Larry Burkett's ILLUMINATI has already sold some 100,000 copies through Christian bookstores, and it seems to me to have tremendous crossover potential for hundreds of chuckling cyberpunk cynics. To my eye it's a lot more mind-blowing than any of Wilson's books.

The Robert Anton Wilson oeuvre is perennially in print in New Age bookstores, and quite well known in the SF category racks. Therefore the CATSCAN reader may already be aware that the so-called "Illuminati" were a freethinking secret society purportedly founded in the 1770s, who had something to do with Freemasonry and were opposed to established Church authority in Europe.

So far, so good. It's not surprising that a with-it hipster dude like R.A. Wilson would use the historical Illuminati as a head-trip springboard to mock All Things Establishment. The far more surprising matter is that some evangelical Christians, such as the Reverend Pat Robertson, not only take the 217-year-old and extremely dead Illuminati seriously, but are also currently dominating the social agenda of the Republican Party. Reverend Robertson's latest "non-fiction" tome, THE NEW WORLD ORDER, is chock-a-block with straightfaced and utterly paranoiac Illuminati-under-the-bed terrormongering. Robertson publicly credits the "satanic" Illuminati conspiracy with direct authorship of the French Revolution and the Bolshevik uprising, as well as sponsorship of the Trilateral Commission and the comsymp "Eastern Establishment" generally. The good Reverend also expresses the gravest possible reservations about the occult Masonic insignia on the back of the one-dollar bill.

George Bush himself, best-known public advocate of a "New World Order," is cast under suspicion in Robertson's work as an Illuminati tool, and yet Bush gave his accuser prime-time TV at his party's National Convention. One can only marvel!

As a comparative reality-check, try and imagine Robert Anton Wilson delivering his Hail Eris rap at a Democratic Party Convention (while the audience, nodding on national television, listens in sober respect and acts really glad to be clued-in). Odd enough for you? Now imagine ontological anarchists re-writing the Democratic Party platform on abortion, sexual behavior, and federal sponsorship of the arts.

Larry Burkett has taken this way-out sectarian extremist theo-gibberish and made it into a techno-thriller! The result is a true mutant among novels. How many science fiction novels begin with a disclaimer like this one?

"My biggest concern in writing a novel is that someone may read too much into it. Obviously, I tried to use as realistic a scenario as possible in the story. But it is purely fictional, including the characters, events, and timing. It should not be assumed that it is prophetic in any regard. As best I know, I have a gift for teaching, a talent for writing, and no prophetic abilities beyond that of any other Christian."

I was so impressed by this remarkable disclaimer of Mr. Burkett's that I tracked down his address (using the CompuServe computer network) and I succeeded in interviewing him by phone for this column. I learned that Mr. Burkett has received some six thousand letters about his novel ILLUMINATI from eager readers, many of them previously aware of the Illuminati menace and eager to learn yet more. And yes, many of those readers do believe that Mr. Burkett's novel is an inspired prophecy, despite his disclaimer, and they demand his advice on how to shelter themselves from the secret masters of the coming Satanic computer-cataclysm.

Even more remarkably, a dozen correspondents claimed to have once been Illuminati themselves, and they congratulated Mr. Burkett on his insights into their conspiracy! Mr. Burkett described this last category as featuring "three or four letters that were fairly lucid."

Mr. Burkett himself seems quite lucid. He was clearly "having some fun" with notions he considers serious but not all *that* serious, and in this he is not much different from many other SF authors with active imaginations and vaguely politicized concerns. Now a financial consultant, Mr. Burkett was once a NASA project manager, and dealt with early mainframe systems for the Gemini and Mercury missions. As a father, grandfather, best-selling author and head of a successful investment-counseling firm, Mr. Burkett seemed to me to have at least as firm a grip on consensus reality as, say, Ross Perot. In talking to Mr. Burkett I found him a calm, open and congenial gentleman.

However, Mr. Burkett is also a committed "dispensational Christian" and he believes sincerely that abortion is an act of murder. He is therefore living in a basically nightmarish society in which hundreds of thousands of innocent human beings are gruesomely murdered through no fault of their own. I believe that Mr. Burkett considers abortion so great an evil that it could not possibly have been inflicted on our society by any merely human agency. It can only be understood as part of an ancient, multi-generational conspiracy, planned and carried out by the immortal and evil Adversary of Mankind through his mortal cat's-paws on Earth.

From the pyramid-eye point of view of this belief-system, it makes good tub-thumping common-sense to assume that "Secular Humanism" is a single monolithic entity -- even if its own useful-idiot liberal dupes seem more-or-less unaware of their own true roles in Satan's master-plan.

All enemies are agents willy-nilly of The Enemy, and their plans run toward a single end: the establishment of Satan's Kingdom on Earth. In the words of Reverend Robertson (NEW WORLD ORDER p 6): "A single thread runs from the White House to the State Department to the Council on Foreign Relations to the Trilateral Commission to secret societies to extreme New Agers. There must be a new world order. It must eliminate national sovereignty. There must be world government, a world police force, world courts, world banking and currency, and a world elite in charge of it all."

Of course, if you are going to string all important global events onto "a single thread," you are going to end up with an extremely variegated necklace. When you formally assemble the whole farrago into the pages of a thriller-novel, as Mr. Burkett does, the result is like Lovecraft on laughing-gas. Mr. Burkett's fictional technique owes far more to his favorite authors, Tom Clancy and Michael Crichton, than it does to any genre SF writer. Mr. Burkett is not himself an SF reader. Nevertheless, his material itself is so inherently over-the-top that his book resembles the Call of Cthulhu far more than a hunt for Red October.

The pace is whiplash-fast and the set-up entirely mind-boggling. In the year 2001, the President, an Illuminati puppet "liberal," stages a coup against Congress in the midst of economic collapse and massive urban riots. The Mossad are bugging the White House and building a cobalt super-bomb with the Red Chinese. We learn that the Illuminati began as Druids and transmuted into Freemasons; the wily Jews, of course, have known all about the Illuminati for centuries, though never bothering to inform us goyim. The gay Governor of California is a feminist church-taxing coke addict. The "liberal" President sells "brain-dead" crack babies to fetal-tissue medical entrepreneurs. Meanwhile, evil liberal civil-libertarians tattoo everyone's right hand with the scanner-code of the Beast 666. It just goes on and on!

The yummiest item in the whole stew, however, is the identity of the book's hero, one Jeff Wells. Jeff's a computer hacker. A genius hacker for Christ. Somewhat against his will and entirely without any evil intent, Jeff was recruited to design and build the gigantic Data-Net financial network, which the Illuminati secular one-worlders then use to consolidate power, and to pursue and harass innocent Christian activists. When Jeff discovers that the feds are using his handiwork to round up Baptists and ship them by the trainload to dismal gulags in Arizona, he drops out of the system, goes deep underground, and joins the Christian revolutionary right.

With the moral guidance of a saintly televangelist, Jeff, using his powerful and extremely illegal computer-intrusion skills, simply chops up Data-Net like a cook deboning a chicken. In defense of his Savior, Jeff basically overthrows the US Government by digital force and violence. He defrauds the government of billions of dollars. He creates thousands of false identities. He deliberately snarls train traffic and airport traffic. He spies on high government officials, tracking their every move. The Pentagon, the Secret Service and the FBI are all rendered into helpless fools through Jeff's skillful tapping of a keyboard. It's like a Smash-the-State Yippie phone-phreak's wet-dream -- and yet it's all done in defense of family-values.

One shuts Mr. Burkett's book regretfully and with a skull-tingling sensation of genuine mind-expansion.

But let's now leave ILLUMINATI for a look at a somewhat more actual and far more commercially successful Yippie phone-phreak wet-dream, the film (and novel) SNEAKERS. As it happens, the movie tie-in novel SNEAKERS (by one "Dewey Gram," a name that sounds rather suspicious) is somewhat uninspired and pedestrian (especially in comparison to ILLUMINATI). The book has a slightly more graphic sexual-voyeur sequence than the movie does, and some mildly interesting additional background about the characters. The SNEAKERS novel seems to have been cooked-up from an earlier screenplay than the shooting-script. You won't miss much by skipping it entirely.

The sinister Liberal Cultural Elite (and their vile Illuminati puppet-masters) must take great satisfaction in comparing the audience for a Hollywood blockbuster like SNEAKERS with the relatively tiny readership for the eager though amateurish ILLUMINATI. ILLUMINATI was written in eight weeks flat, and will have a devil of a time reaching anybody outside an evangelical chain-store. SNEAKERS, by contrast, cost millions to make, and has glossy posters, promo lapel buttons, pre-release screenings, TV ads, and a video release on the way, not to mention its own book tie-in.

SNEAKERS will also be watched with a straight face and genuine enjoyment by millions of Americans, despite its "radical" attitude and its open sympathies with 60s New Leftist activism. ILLUMINATI will have no such luck. Even after twelve solid years of Reaganism, in which the federal government was essentially run by panic-stricken astrologers and the Republican Party kowtowed utterly to its fringe-nut element, it's still unthinkable that a work like ILLUMINATI could become a mainline Hollywood film. Even as a work of science fiction, ILLUMINATI would simply be laughed off the screen by the public. Even R. A. Wilson's ILLUMINATI would have a better chance at production. Margaret Atwood's HANDMAID'S TALE, which promotes anti-network paranoia from a decidedly leftist/feminist perspective, actually made it to the screen! The Burkett ILLUMINATI's theocratic nuttiness is simply too ludicrous.

SNEAKERS is a professional piece of Hollywood entertainment and a pleasant movie to watch. I'm not one of those who feels that Hollywood movies should be required to teach moral lessons, or to heighten public taste, or even to make basic sense. Hey, let Hollywood be Hollywood: SNEAKERS has some nice production values, a solid cast, some thrills and some laughs; money spent seeing it is money well spent.

And yet there's a lot to dislike about SNEAKERS anyhow. The entire effort has a depressing insincerity, and a profound sense of desperation and defeat that it tries to offset with an annoying nervous-tic mockery. The problem resides in the very nature of the characters and their milieu. It's certainly an above-average cast, with Sidney Poitier, Robert Redford, Dan Aykroyd and River Phoenix, who are as professionally endearing and charismatic as they can manage. Yet almost everything these characters actually do is deceitful, repulsive, or basically beside the point; they seem powerless, hopeless, and robbed of their own identities, robbed of legitimacy, even robbed of their very lives.

SNEAKERS is remarkable for its fidelity to the ethos of the computer underground. It's something of a love-note to the 2600 crowd (who seem properly appreciative). System-cracker practices like trashing, canning, and social-engineering are faithfully portrayed. And while SNEAKERS is remarkably paranoid, that too rather suits its own milieu, because many underground hackers are in fact remarkably paranoid, especially about the NSA, other techie feds, and their fellow hackers.

Hacking complex computer systems from the outside -- maintaining a toehold within machinery that doesn't belong to you and is not obedient to your own purposes -- tends by its nature to lead to a rather fragmentary understanding. This fragmentary knowledge, combined with guilty fear, is a perfect psychological breeding-ground for a deeply paranoid outlook. Knowledge underground takes the form of a hipster's argot, rules of thumb, and superstitious ritual, combined with large amounts of practiced deceit. And that's the way the SNEAKERS cast basically spend their lives: in pretense and deception, profoundly disenchanted and utterly disenfranchised. Basically, not one person among them can be trusted with a burnt-out match. Even their "robberies" are fakes; they lie even to one another, and they risk their lives, and other people's, for peanuts.

SNEAKERS, in which anagrams play a large thematic role, is itself an anagram for NSA REEKS. The National Security Agency is the largest target for the vaguely-leftist, antiauthoritarian paranoia expressed by the film. The film's sinister McGuffin is an NSA-built super-decryptor device. (This super-decryptor is a somewhat silly gimmick, but that shouldn't be allowed to spoil the story. Real cryptography enthusiasts will probably be too busy laughing at the decryptor's mad-genius inventor, a raunchy parody of real-life cryptographer Whitfield Diffie.) The IRS, though never mentioned overtly, also comes in for some tangential attack, since the phone number of one of the IRS's California offices is given out verbally during the film by an attractive young woman, who claims that it's her home phone number. This deliberate bit of mischief must have guaranteed the IRS a lot of eager phone-phreak action.

Every conspiracy must have a Them. In the black-and-white world of ILLUMINATI, all forms of opposition to Goodness must be cut from the same Satanic cloth, so that Aleister Crowley, Vladimir Lenin and David Rockefeller are all of one warp and woof. SNEAKERS, by contrast, is slightly more advanced, and features two distinct species of Them. The first Them is the Hippie-Sold-Out Them, a goofy role gamely played by Ben Kingsley as a Darkside Yuppie Hacker Mafioso, a kind of carnivorous forty-something Bill Gates. The second species of Them is the eponymously reeking NSA, the American shadow-spook elite, surprisingly personified by a patriarchal James Earl Jones in an oddly comic and comforting Wizard of Oz-like cameo.

Both these Thems are successfully fooled by the clever Sneakers in bits of Hollywood business that basically wouldn't deceive a bright five-year-old, much less the world's foremost technical espionage agency and a security-mad criminal zillionaire.

But these plot flaws are no real objection. A more genuine objection would be the entire tenor of the film. The film basically accomplishes nothing. Nothing actually happens. No one has to change their mind about anything. At the end, the Hacker Mafioso is left at large, still in power, still psychotic, and still in command of huge sums and vast archives of illicit knowledge and skill. The NSA, distributing a few cheap bribes, simply swears everybody to secrecy, and retreats safely back into the utter undisturbed silence of its Cold War netherworld. A few large issues are raised tangentially, but absolutely nothing is done about them, and no moral judgements or decisions are made. The frenetic plotting of the Sneaker team accomplishes nothing whatsoever beyond a maintenance of the status quo and the winning of a few toys for the personnel. Redford doesn't even win the token girl. It seems much ado about desperately little.

Then, at the very end, our hero robs the Republican Party of all its money through computer-fraud, and distributes it to worthy left-wing causes. Here something has actually happened at last, but it's a dismal and stupid thing. It's a profoundly undemocratic, elitist, and hateful act; only a political idiot could imagine that a crime of this nature would do a minute's worth of real good. And even this psychotic provocation has the look of a last-minute tag-on to the movie; in the book, it doesn't even occur.

The film makes two stabs at Big Message. There's a deliberate and much-emphasized Lecture at the Foot of the Cray, where the evil Darkside Hacker explains in slow and careful capital letters that the world in the 90s has become an Information Society and has thus become vulnerable to new and suspiciously invisible forms of manipulation. Beyond a momentary spasm of purely intellectual interest, though, our hero's basic response is a simple "I know. And I don't care." This surprisingly sensible remark much deflates the impact of the superhacker-paranoia scenario.

The second Big Message occurs during a ridiculously convenient escape-scene in which our hero defies the Darkside Hacker to kill him face-to-face. The bad-guy, forced to look deep inside his own tortured soul, can't endure the moral responsibility involved in pulling a trigger personally. The clear implication is that sooner or later somebody has to take a definite and personal responsibility for all this abstract technologized evil. Unfortunately this is sheer romantic hippie nonsense; even Adolf Eichmann had it figured that it was all somebody else's fault. The twentieth century's big-time evils consisted of people pushing papers in distant offices, causing other people to die miles away at the hands of dazed functionaries. Tomorrow's button-pushers are likely to be more remote and insulated than ever; they're not going to be worrying much about their cop-outs and their karma.

SNEAKERS plays paranoia for slapstick laughs in the character of Dan Aykroyd, who utters a wide variety of the standard Space-Brother nutty notions, none of them with any practical implications whatsoever. This may be the worst and most discouraging aspect of the conspiratorial mindset -- the way it simultaneously flatters one's own importance and excuses one from doing anything practical and tangible. The conspiracy theorist has got it all figured, he's got the inside angles, and yet he has the perfect excuse for utter cynical torpor.

Let's just consider the real-world implications of genuine conspiratorial convictions for a moment. Let's assume, as many people do, that John Kennedy really was shot dead in a 'silent coup' by a US government cabal in 1963. If this is true, then we Americans clearly haven't run our own national affairs for at least thirty years. Our executive, our Congress, our police and our bureaucracies have all been a fraud in the hands of elite and murderous secret masters. But if we're not running our own affairs today, and haven't for thirty years, then how the heck are we supposed to start now? Why even try? If the world's fate is ineluctably in the hands of Illuminati, then what real reason do we have to meddle in public matters? Why make our thoughts and ideas heard? Why organize, why discuss public policy, why make budgets, why set priorities, why vote? We'll just get gypped anyhow. We'd all be better off retired, in hiding, underground, in monasteries, in purdah, or dead.

If the NSA's tapping every phone line and reading every license-plate from orbit, then They are basically omniscient. They're watching us every moment -- but why do they bother? What quality, besides our own vanity, would make us important enough to be constantly watched by Secret Masters? After all, it's not like we're actually intending to *accomplish* anything.

Conspiracy is for losers. As conspiracy freaks, by our very nature we'll always live on the outside of where it's Really Happening. That's what justifies our existence and allows us to tell Ourselves apart from Them. Unlike people in the former Eastern Bloc, who actually were oppressed and monitored by a sinister power-elite, we ourselves will never *become* what's Really Happening, despite our enormous relative advantages. Maybe we can speculate a little together, trade gossip, scare each other silly and swap outlandish bullshit. We can gather up our hacker scrapbooks from the office trash of the Important and Powerful. We can press our noses to the big mirrorglass windows. Maybe if we're especially daring, we can fling a brick through a window late one night and run like hell. That'll prove that we're brave and that we really don't like Them -- though we're not brave enough to replace Them, and we're certainly not brave enough to become Them.

And this would also prove that no sane person would ever trust us with a scintilla of real responsibility or power anyway, over ourselves or anyone else. Because we don't deserve any such power, no matter from what angle of the political spectrum we happen to emerge. Because we've allowed ourselves the ugly luxury of wallowing in an enormous noisome heap of bullshit. And for being so stupid, we deserve whatever we get.

ARTIFICIAL LIFE

From THE MAGAZINE OF FANTASY AND SCIENCE FICTION, Dec 1992.

F&SF Science column #4

The new scientific field of study called "Artificial Life" can be defined as "the attempt to abstract the logical form of life from its material manifestation."

So far, so good. But what is life?

The basic thesis of "Artificial Life" is that "life" is best understood as a complex systematic process. "Life" consists of relationships and rules and interactions. "Life" as a property is potentially separate from actual living creatures.

Living creatures (as we know them today, that is) are basically made of wet organic substances: blood and bone, sap and cellulose, chitin and ichor. A living creature -- a kitten, for instance -- is a physical object that is made of molecules and occupies space and has mass.

A kitten is indisputably "alive" -- but not because it has the "breath of life" or the "vital impulse" somehow lodged inside its body. We may think and talk and act as if the kitten "lives" because it has a mysterious "cat spirit" animating its physical cat flesh. If we were superstitious, we might even imagine that a healthy young cat had *nine* lives. People have talked and acted just this way for millennia.

But from the point-of-view of Artificial Life studies, this is a very halting and primitive way of conceptualizing what's actually going on with a living cat. A kitten's "life" is a *process,* with properties like reproduction, genetic variation, heredity, behavior, learning, the possession of a genetic program, the expression of that program through a physical body. "Life" is a thing that *does,* not a thing that *is* -- life extracts energy from the environment, grows, repairs damage, reproduces.

And this network of processes called "Life" can be picked apart, and studied, and mathematically modelled, and simulated with computers, and experimented upon -- outside of any creature's living body.

"Artificial Life" is a very young field of study. The use of this term dates back only to 1987, when it was used to describe a conference in Los Alamos New Mexico on "the synthesis and simulation of living systems." Artificial Life as a discipline is saturated by computer-modelling, computer-science, and cybernetics.

It's conceptually similar to the earlier field of study called "Artificial Intelligence." Artificial Intelligence hoped to extract the basic logical structure of intelligence, to make computers "think." Artificial Life, by contrast, hopes to make computers only about as "smart" as an ant -- but as "alive" as a swarming anthill.

Artificial Life as a discipline uses the computer as its primary scientific instrument. Like telescopes and microscopes before them, computers are making previously invisible aspects of the world apparent to the human eye. Computers today are shedding light on the activity of complex systems, on new physical principles such as "emergent behavior," "chaos," and "self-organization."

For millennia, "Life" has been one of the greatest of metaphysical and scientific mysteries, but now a few novel and tentative computerized probes have been stuck into the fog. The results have already proved highly intriguing.

Can a computer or a robot be alive? Can an entity which only exists as a digital simulation be "alive"? If it looks like a duck, quacks like a duck, waddles like a duck, but it in fact takes the form of pixels on a supercomputer screen -- is it a duck? And if it's not a duck, then what on earth is it? What exactly does a thing have to do and be before we say it's "alive"?

It's surprisingly difficult to decide when something is "alive." There's never been a definition of "life," whether scientific, metaphysical, or theological, that has ever really worked. Life is not a clean either/or proposition. Life comes on a kind of scale, apparently, a kind of continuum -- maybe even, potentially, *several different kinds of continuum.*

One might take a pragmatic, laundry-list approach to defining life. To be "living," a thing must grow. Move. Reproduce. React to its environment. Take in energy, excrete waste. Nourish itself, die, and decay. Have a genetic code, perhaps, or be the result of a process of evolution. But there are grave problems with all of these concepts. All these things can be done today by machines or programs. And the concepts themselves are weak and subject to contradiction and paradox.

Are viruses "alive"? Viruses can thrive and reproduce, but not by themselves -- they have to use a victim cell in order to manufacture copies of themselves. Some dormant viruses can crystallize into a kind of organic slag that's dead for all practical purposes, and can stay that way indefinitely -- until the virus gets another chance at infection, and then the virus comes seething back.

How about a frozen human embryo? It can be just as dormant as a dormant virus, and certainly can't survive without a host, but it can become a living human being. Some people who were once frozen embryos may be reading this magazine right now! Is a frozen embryo "alive" -- or is it just the *potential* for life, a genetic life-program halted in mid-execution?

Bacteria are simple, as living things go. Most people however would agree that germs are "alive." But there are many other entities in our world today that act in lifelike fashion and are easily as complex as germs, and yet we don't call them "alive" -- except "metaphorically" (whatever *that* means).

How about a national government, for instance? A government can grow and adapt and evolve. It's certainly a very powerful entity that consumes resources and affects its environment and uses enormous amounts of information. When people say "Long Live France," what do they mean by that? Is the Soviet Union now "dead"?

Amoebas aren't "mortal" and don't age -- they just go right on splitting in half indefinitely. Does that mean that all amoebas are actually pieces of one super-amoeba that's three billion years old?

And where's the "life" in an ant-swarm? Most ants in a swarm never reproduce; they're sterile workers -- tools, peripherals, hardware. All the individual ants in a nest, even the queen, can die off one by one, but as long as new ants and new queens take their place, the swarm itself can go on "living" for years without a hitch or a stutter.

Questioning "life" in this way may seem so much nit-picking and verbal sophistry. After all, one may think, people can easily tell the difference between something living and dead just by having a good long look at it. And in point of fact, this seems to be the single strongest suit of "Artificial Life." It is very hard to look at a good Artificial Life program in action without perceiving it as, somehow, "alive."

Only living creatures perform the behavior known as "flocking." A gigantic wheeling flock of cranes or flamingos is one of the most impressive sights that the living world has to offer.

But the "logical form" of flocking can be abstracted from its "material manifestation" in a flocking group of actual living birds. "Flocking" can be turned into rules implemented on a computer. The rules look like this:

1. Stay with the flock -- try to move toward where it seems thickest.

2. Try to move at the same speed as the other local birds.

3. Don't bump into things, especially the ground or other birds.

In 1987, Craig Reynolds, who works for a computer-graphics company called Symbolics, implemented these rules for abstract graphic entities called "bird-oids" or "boids." After a bit of fine-tuning, the result was, and is, uncannily realistic. The darn things *flock!*

They meander around in an unmistakably lifelike, lively, organic fashion. There's nothing "mechanical" or "programmed-looking" about their actions. They bumble and swarm. The boids in the middle shimmy along contentedly, and the ones on the fringes tag along anxiously jockeying for position, and the whole squadron hangs together, and wheels and swoops and maneuvers, with amazing grace. (Actually they're neither "anxious" nor "contented," but when you see the boids behaving in this lifelike fashion, you can scarcely help but project lifelike motives and intentions onto them.)
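
For the curious, here is a minimal sketch of how the three rules can be implemented -- in Python, with invented weights, distances, and flock size, and crude city-block arithmetic. It illustrates the principle only; it is not Reynolds' actual code.

    import random

    NEIGHBOR_RADIUS = 10.0

    def step(boids):
        for b in boids:
            neighbors = [o for o in boids if o is not b
                         and abs(o["x"] - b["x"]) + abs(o["y"] - b["y"]) < NEIGHBOR_RADIUS]
            if not neighbors:
                continue
            # Rule 1: move toward where the flock seems thickest (its local center).
            cx = sum(o["x"] for o in neighbors) / len(neighbors)
            cy = sum(o["y"] for o in neighbors) / len(neighbors)
            b["vx"] += 0.01 * (cx - b["x"])
            b["vy"] += 0.01 * (cy - b["y"])
            # Rule 2: match the average velocity of the other local birds.
            b["vx"] += 0.05 * (sum(o["vx"] for o in neighbors) / len(neighbors) - b["vx"])
            b["vy"] += 0.05 * (sum(o["vy"] for o in neighbors) / len(neighbors) - b["vy"])
            # Rule 3: don't bump into things -- veer away from any too-close bird.
            for o in neighbors:
                if abs(o["x"] - b["x"]) + abs(o["y"] - b["y"]) < 2.0:
                    b["vx"] -= 0.1 * (o["x"] - b["x"])
                    b["vy"] -= 0.1 * (o["y"] - b["y"])
        for b in boids:  # everyone moves only after all the decisions are made
            b["x"] += b["vx"]
            b["y"] += b["vy"]

    flock = [{"x": random.uniform(0, 50), "y": random.uniform(0, 50), "vx": 0.0, "vy": 0.0}
             for _ in range(30)]
    for _ in range(100):
        step(flock)

Plot the positions as they change, and the clumping, jostling flock-behavior appears on its own -- nowhere in the code is there a "flock."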

You might say that the boids simulate flocking perfectly -- but according to the hard-dogma position of A-Life enthusiasts, it's not "simulation" at all. This is real "flocking" pure and simple -- this is exactly what birds actually do. Flocking is flocking -- it doesn't matter if it's done by a whooping crane or a little computer-sprite.

Clearly the birdoids themselves aren't "alive" -- but it can be argued, and is argued, that they're actually doing something that is a genuine piece of the life process. In the words of scientist Christopher Langton, perhaps the premier guru of A-Life: "The most important thing to remember about A-Life is that the part that is artificial is not the life, but the materials. Real things happen. We observe real phenomena. It is real life in an artificial medium."

The great thing about studying flocking with boids, as opposed to say whooping cranes, is that the Artificial Life version can be experimented upon, in controlled and repeatable conditions. Instead of just *observing* flocking, a life-scientist can now *do* flocking. And not just flocks -- with a change in the parameters, you can study "schooling" and "herding" as well.

The great hope of Artificial Life studies is that Artificial Life will reveal previously unknown principles that directly govern life itself -- the principles that give life its mysterious complexity and power, its seeming ability to defy probability and entropy. Some of these principles, while still tentative, are hotly discussed in the field.

For instance: the principle of *bottom-up* initiative rather than *top-down* orders. Flocking demonstrates this principle well. Flamingos do not have blueprints. There is no squadron-leader flamingo barking orders to all the other flamingos. Each flamingo makes up its own mind. The extremely complex motion of a flock of flamingos arises naturally from the interactions of hundreds of independent birds. "Flocking" consists of many thousands of simple actions and simple decisions, all repeated again and again, each action and decision affecting the next in sequence, in an endless systematic feedback.

This involves a second A-Life principle: *local* control rather than *global* control. Each flamingo has only a vague notion of the behavior of the flock as a whole. A flamingo simply isn't smart enough to keep track of the entire "big picture," and in fact this isn't even necessary. It's only necessary to avoid bumping the guys right at your wingtips; you can safely ignore the rest.

Another principle: *simple* rules rather than *complex* ones. The complexity of flocking, while real, takes place entirely outside of the flamingo's brain. The individual flamingo has no mental conception of the vast impressive aerial ballet in which it happens to be taking part. The flamingo makes only simple decisions; it is never required to make complex decisions requiring a lot of memory or planning. *Simple* rules allow creatures as downright stupid as fish to get on with the job at hand -- not only successfully, but swiftly and gracefully.

And then there is the most important A-Life principle, also perhaps the foggiest and most scientifically controversial: *emergent* rather than *prespecified* behavior. Flamingos fly from their roosts to their feeding grounds, day after day, year in year out. But they will never fly there exactly the same way twice. They'll get there all right, predictable as gravity; but the actual shape and structure of the flock will be whipped up from scratch every time. Their flying order is not memorized, they don't have numbered places in line, or appointed posts, or maneuver orders. Their orderly behavior simply *emerges,* different each time, in a ceaselessly varying shuffle.

Ants don't have blueprints either. Ants have become the totem animals of Artificial Life. Ants are so 'smart' that they have vastly complex societies with actual *institutions* like slavery and agriculture and aphid husbandry. But an individual ant is a profoundly stupid creature. Entomologists estimate that individual ants have only fifteen to forty things that they can actually "do." But if they do these things at the right time, to the right stimulus, and change from doing one thing to another when the proper trigger comes along, then ants as a group can work wonders.

There are anthills all over the world. They all work, but they're all different; no two anthills are identical. That's because they're built bottom-up and emergently. Anthills are built without any spark of planning or intelligence. An ant may feel the vague instinctive need to wall out the sunlight. It begins picking up bits of dirt and laying them down at random. Other ants see the first ant at work and join in; this is the A-Life principle known as "allelomimesis," imitating the others (or rather not so much "imitating" them as falling mechanically into the same instinctive pattern of behavior).

Sooner or later, a few bits of dirt happen to pile up together. Now there's a wall. The ant wall-building sub-program kicks into action. When the wall gets high enough, it's roofed over with dirt and spit. Now there's a tunnel. Do it again and again and again, and the structure can grow seven feet high, and be of such fantastic complexity that to draw it on an architect's table would take years.

This emergent structure, "order out of chaos," "something out of nothing" -- appears to be one of the basic "secrets of life."
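
The dirt-piling loop lends itself to a toy sketch. Everything below -- the strip of ground, the weights, the grain count -- is my own invention, not a model from this column; the single rule is that the more dirt a spot already holds, the more strongly it attracts the next load.

    import random

    SPOTS = 20
    dirt = [0] * SPOTS  # a strip of ground, initially bare

    for grain in range(500):
        # Each passing ant prefers spots roughly in proportion to the dirt
        # already piled there (allelomimesis: visible work recruits more work).
        weights = [1 + d for d in dirt]
        spot = random.choices(range(SPOTS), weights=weights)[0]
        dirt[spot] += 1

    print(dirt)  # a few tall piles -- "walls" -- emerge from no plan at all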

These principles crop up again and again in the practice of life-simulation. Predator-prey interactions. The effects of parasites and viruses. Dynamics of population and evolution. These principles even seem to apply to internal living processes, like plant growth and the way a bug learns to walk. The list of applications for these principles has gone on and on.

It's not hard to understand that many simple creatures, doing simple actions that affect one another, can easily create a really big mess. The thing that's *hard* to understand is that those same, bottom-up, unplanned, "chaotic" actions can and do create living, working, functional order and system and pattern. The process really must be seen to be believed. And computers are the instruments that have made us see it.

Most any computer will do. Oxford zoologist Richard Dawkins has created a simple, popular Artificial Life program for personal computers. It's called "The Blind Watchmaker," and demonstrates the inherent power of Darwinian evolution to create elaborate pattern and structure. The program accompanies Dr. Dawkins' 1986 book of the same title (quite an interesting book, by the way), but it's also available independently.

The Blind Watchmaker program creates patterns from little black-and-white branching sticks, which develop according to very simple rules. The first time you see them, the little branching sticks seem anything but impressive. They look like this:

Fig 1. Ancestral A-Life Stick-Creature

After a pleasant hour with Blind Watchmaker, I myself produced these very complex forms -- what Dawkins calls "Biomorphs."

Fig. 2 -- Six Dawkins Biomorphs

It's very difficult to look at such biomorphs without interpreting them as critters -- *something* alive-ish, anyway. It seems that the human eye is *trained by nature* to interpret the output of such a process as "life-like." That doesn't mean it *is* life, but there's definitely something *going on there.*
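
By way of illustration only -- this is not Dawkins' program -- here is a sketch of the same idea in Python: a short genome of integers steers a recursive branching rule, and each generation applies one small random mutation to the survivor.

    import random

    def branch(x, y, dx, dy, depth, genes, segments):
        # Grow one stick, then two skewed child branches, recursively.
        if depth == 0:
            return
        nx, ny = x + dx * depth, y + dy * depth
        segments.append(((x, y), (nx, ny)))
        skew = genes[depth % len(genes)]
        branch(nx, ny, dx - skew, dy - 1, depth - 1, genes, segments)
        branch(nx, ny, dx + skew, dy - 1, depth - 1, genes, segments)

    def mutate(genes):
        child = list(genes)
        child[random.randrange(len(child))] += random.choice([-1, 1])
        return child

    genome = [1, 0, 1, 0, 1]  # the unimpressive ancestral stick-creature
    for generation in range(20):
        # Dawkins' program shows a litter of mutants and lets the human eye
        # pick the survivor; this blind sketch just picks one at random.
        genome = random.choice([mutate(genome) for _ in range(6)])

    segments = []
    branch(0, 0, 0, 1, 6, genome, segments)
    print(len(segments), "line segments in this biomorph")

Draw the collected segments and, after twenty generations, the plain stick has become some spidery, beetle-ish *thing* -- cumulative mutation doing all the design work.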

*What* is going on is the subject of much dispute. Is a computer-simulation actually an abstracted part of life? Or is it technological mimicry, or mechanical metaphor, or clever illusion?

We can model thermodynamic equations very well also, but an equation isn't hot, it can't warm us or burn us. A perfect model of heat isn't heat. We know how to model the flow of air on an airplane's wings, but no matter how perfect our simulations are, they don't actually make us fly. A model of motion isn't motion. Maybe "Life" doesn't exist either, without that real-world carbon-and-water incarnation. A-Life people have a term for these carbon-and-water chauvinists. They call them "carbaquists."

Artificial Life maven Rodney Brooks designs insect-like robots at MIT. Using A-Life bottom-up principles -- "fast, cheap, and out of control" -- he is trying to make small multi-legged robots that can behave as deftly as an ant. He and his busy crew of graduate students are having quite a bit of success at it. And Brooks finds the struggle over definitions beside the real point. He envisions a world in which robots as dumb as insects are everywhere; dumb, yes, but agile and successful and pragmatically useful. Brooks says: "If you want to argue if it's living or not, fine. But if it's sitting there existing twenty-four hours a day, three hundred sixty-five days of the year, doing stuff which is tricky to do and doing it well, then I'm going to be happy. And who cares what you call it, right?"

Ontological and epistemological arguments are never easily settled. However, "Artificial Life," whether it fully deserves that term or not, is at least easy to see, and rather easy to get your hands on. "Blind Watchmaker" is the A-Life equivalent of using one's computer as a home microscope and examining pondwater. Best of all, the program costs only twelve bucks! It's cheap and easy to become an amateur A-Life naturalist.

Because of the ubiquity of powerful computers, A-Life is "garage-band science." The technology's out there for almost anyone interested -- it's hacker-science. Much of A-Life practice basically consists of picking up computers, pointing them at something promising, and twiddling with the focus knobs until you see something really gnarly. *Figuring out what you've seen* is the tough part, the "real science"; this is where actual science, reproducible, falsifiable, formal, and rigorous, parts company from the intoxicating glamor of the intellectually sexy. But in the meantime, you have the contagious joy and wonder of just *gazing at the unknown,* the primal thrill of discovery and exploration.

A lot has been written already on the subject of Artificial Life. The best and most complete journalistic summary to date is Steven Levy's brand-new book, ARTIFICIAL LIFE: THE QUEST FOR A NEW CREATION (Pantheon Books 1992).

The easiest way for an interested outsider to keep up with this fast-breaking field is to order books, videos, and software from an invaluable catalog: "Computers In Science and Art," from Media Magic. Here you can find the Proceedings of the first and second Artificial Life Conferences, where the field's most influential papers, discussions, speculations and manifestos have seen print.

But learned papers are only part of the A-Life experience. If you can see Artificial Life actually demonstrated, you should seize the opportunity. Computer simulation of such power and sophistication is a truly remarkable historical advent. No previous generation had the opportunity to see such a thing, much less ponder its significance. Media Magic offers videos about cellular automata, virtual ants, flocking, and other A-Life constructs, as well as personal software "pocket worlds" like CA Lab, Sim Ant, and Sim Earth. This very striking catalog is available free from Media Magic, P.O. Box 507, Nicasio CA 94946.

Bruce Sterling

[email protected]

Literary Freeware -- Not for Commercial Use

From THE MAGAZINE OF FANTASY AND SCIENCE FICTION, Feb 1993.

F&SF, Box 56, Cornwall CT 06753 $26/yr USA $31/yr other

F&SF Science Column #5

INTERNET

Some thirty years ago, the RAND Corporation, America's foremost Cold War think-tank, faced a strange strategic problem. How could the US authorities successfully communicate after a nuclear war?

Postnuclear America would need a command-and-control network, linked from city to city, state to state, base to base. But no matter how thoroughly that network was armored or protected, its switches and wiring would always be vulnerable to the impact of atomic bombs. A nuclear attack would reduce any conceivable network to tatters.

And how would the network itself be commanded and controlled? Any central authority, any network central citadel, would be an obvious and immediate target for an enemy missile. The center of the network would be the very first place to go.

RAND mulled over this grim puzzle in deep military secrecy, and arrived at a daring solution. The RAND proposal (the brainchild of RAND staffer Paul Baran) was made public in 1964. In the first place, the network would *have no central authority.* Furthermore, it would be *designed from the beginning to operate while in tatters.*

The principles were simple. The network itself would be assumed to be unreliable at all times. It would be designed from the get-go to transcend its own unreliability. All the nodes in the network would be equal in status to all other nodes, each node with its own authority to originate, pass, and receive messages. The messages themselves would be divided into packets, each packet separately addressed. Each packet would begin at some specified source node, and end at some other specified destination node. Each packet would wind its way through the network on an individual basis.

The particular route that the packet took would be unimportant. Only final results would count. Basically, the packet would be tossed like a hot potato from node to node to node, more or less in the direction of its destination, until it ended up in the proper place. If big pieces of the network had been blown away, that simply wouldn't matter; the packets would still stay airborne, lateralled wildly across the field by whatever nodes happened to survive. This rather haphazard delivery system might be "inefficient" in the usual sense (especially compared to, say, the telephone system) -- but it would be extremely rugged.
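
The scheme is simple enough to sketch in a few lines of Python. The five-node mesh and every detail below are invented for illustration: each node knows only its own neighbors, each packet is tossed onward independently, and killing a node does not stop delivery.

    import random

    # An invented five-node mesh; each node knows only its direct neighbors.
    links = {
        "A": ["B", "C"],
        "B": ["A", "D"],
        "C": ["A", "D"],
        "D": ["B", "C", "E"],
        "E": ["D"],
    }

    def send(source, destination, dead_nodes=frozenset()):
        node, hops = source, []
        while node != destination:
            # Toss the packet to any surviving neighbor -- the route is unplanned.
            alive = [n for n in links[node] if n not in dead_nodes]
            if not alive:
                return None  # this copy is lost; the sender would simply resend
            node = random.choice(alive)
            hops.append(node)
        return hops

    # Even with node "B" blown away, packets still wander through to "E":
    print(send("A", "E", dead_nodes={"B"}))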

During the 60s, this intriguing concept of a decentralized, blastproof, packet-switching network was kicked around by RAND, MIT and UCLA. The National Physical Laboratory in Great Britain set up the first test network on these principles in 1968. Shortly afterward, the Pentagon's Advanced Research Projects Agency decided to fund a larger, more ambitious project in the USA. The nodes of the network were to be high-speed supercomputers (or what passed for supercomputers at the time). These were rare and valuable machines which were in real need of good solid networking, for the sake of national research-and-development projects.

In fall 1969, the first such node was installed at UCLA. By December 1969, there were four nodes on the infant network, which was named ARPANET, after its Pentagon sponsor.

The four computers could transfer data on dedicated high-speed transmission lines. They could even be programmed remotely from the other nodes. Thanks to ARPANET, scientists and researchers could share one another's computer facilities by long-distance. This was a very handy service, for computer-time was precious in the early '70s. In 1971 there were fifteen nodes in ARPANET; by 1972, thirty-seven nodes. And it was good.

By the second year of operation, however, an odd fact became clear. ARPANET's users had warped the computer-sharing network into a dedicated, high-speed, federally subsidized electronic post-office. The main traffic on ARPANET was not long-distance computing. Instead, it was news and personal messages. Researchers were using ARPANET to collaborate on projects, to trade notes on work, and eventually, to downright gossip and schmooze. People had their own personal user accounts on the ARPANET computers, and their own personal addresses for electronic mail. Not only were they using ARPANET for person-to-person communication, but they were very enthusiastic about this particular service -- far more enthusiastic than they were about long-distance computation.

It wasn't long before the invention of the mailing-list, an ARPANET broadcasting technique in which an identical message could be sent automatically to large numbers of network subscribers. Interestingly, one of the first really big mailing-lists was "SF-LOVERS," for science fiction fans. Discussing science fiction on the network was not work-related and was frowned upon by many ARPANET computer administrators, but this didn't stop it from happening.

Throughout the '70s, ARPA's network grew. Its decentralized structure made expansion easy. Unlike standard corporate computer networks, the ARPA network could accommodate many different kinds of machine. As long as individual machines could speak the packet-switching lingua franca of the new, anarchic network, their brand-names, and their content, and even their ownership, were irrelevant.

ARPA's original standard for communication was known as NCP, "Network Control Protocol," but as time passed and the technique advanced, NCP was superseded by a higher-level, more sophisticated standard known as TCP/IP. TCP, or "Transmission Control Protocol," converts messages into streams of packets at the source, then reassembles them back into messages at the destination. IP, or "Internet Protocol," handles the addressing, seeing to it that packets are routed across multiple nodes and even across multiple networks with multiple standards -- not only ARPA's pioneering NCP standard, but others like Ethernet, FDDI, and X.25.
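
That division of labor can be caricatured in a few lines of Python -- a toy sketch, not the actual protocols: "TCP" numbers the packets and reassembles them, while the network ("IP," here reduced to a shuffle) may deliver them by any route, in any order.

    import random

    def tcp_send(message, size=10):
        # Chop the message into numbered packets.
        chunks = [message[i:i + size] for i in range(0, len(message), size)]
        return [{"seq": n, "data": chunk} for n, chunk in enumerate(chunks)]

    def ip_deliver(packets):
        # The network routes each packet independently; order isn't preserved.
        shuffled = packets[:]
        random.shuffle(shuffled)
        return shuffled

    def tcp_receive(packets):
        # Reassemble by sequence number at the destination.
        return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

    message = "This message will survive the trip in any packet order."
    assert tcp_receive(ip_deliver(tcp_send(message))) == message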

As early as 1977, TCP/IP was being used by other networks to link to ARPANET. ARPANET itself remained fairly tightly controlled, at least until 1983, when its military segment broke off and became MILNET. But TCP/IP linked them all. And ARPANET itself, though it was growing, became a smaller and smaller neighborhood amid the vastly growing galaxy of other linked machines.

As the '70s and '80s advanced, many very different social groups found themselves in possession of powerful computers. It was fairly easy to link these computers to the growing network-of-networks. As the use of TCP/IP became more common, entire other networks fell into the digital embrace of the Internet, and messily adhered. Since the software called TCP/IP was public-domain, and the basic technology was decentralized and rather anarchic by its very nature, it was difficult to stop people from barging in and linking up somewhere-or-other. In point of fact, nobody *wanted* to stop them from joining this branching complex of networks, which came to be known as the "Internet."

Connecting to the Internet cost the taxpayer little or nothing, since each node was independent, and had to handle its own financing and its own technical requirements. The more, the merrier. Like the phone network, the computer network became steadily more valuable as it embraced larger and larger territories of people and resources.

A fax machine is only valuable if *everybody else* has a fax machine. Until they do, a fax machine is just a curiosity. ARPANET, too, was a curiosity for a while. Then computer-networking became an utter necessity.

In 1984 the National Science Foundation got into the act, through its Office of Advanced Scientific Computing. The new NSFNET set a blistering pace for technical advancement, linking newer, faster, shinier supercomputers, through thicker, faster links, upgraded and expanded, again and again, in 1986, 1988, 1990. And other government agencies leapt in: NASA, the National Institutes of Health, the Department of Energy, each of them maintaining a digital satrapy in the Internet confederation.

The nodes in this growing network-of-networks were divvied up into basic varieties. Foreign computers, and a few American ones, chose to be denoted by their geographical locations. The others were grouped by the six basic Internet "domains": gov, mil, edu, com, org and net. (Graceless abbreviations such as this are a standard feature of the TCP/IP protocols.) Gov, Mil, and Edu denoted governmental, military and educational institutions, which were, of course, the pioneers, since ARPANET had begun as a high-tech research exercise in national security. Com, however, stood for "commercial" institutions, which were soon bursting into the network like rodeo bulls, surrounded by a dust-cloud of eager nonprofit "orgs." (The "net" computers served as gateways between networks.)

ARPANET itself formally expired in 1989, a happy victim of its own overwhelming success. Its users scarcely noticed, for ARPANET's functions not only continued but steadily improved. The use of TCP/IP standards for computer networking is now global. In 1969, a mere twenty-three years ago, there were only four nodes in the ARPANET network. Today there are tens of thousands of nodes in the Internet, scattered over forty-two countries, with more coming on-line every day. Three million, possibly four million people use this gigantic mother-of-all-computer-networks.

The Internet is especially popular among scientists, and is probably the most important scientific instrument of the late twentieth century. The powerful, sophisticated access that it provides to specialized data and personal communication has sped up the pace of scientific research enormously.

The Internet's pace of growth in the early 1990s is spectacular, almost ferocious. It is spreading faster than cellular phones, faster than fax machines. Last year the Internet was growing at a rate of twenty percent a *month.* The number of "host" machines with direct connection to TCP/IP has been doubling every year since 1988. The Internet is moving out of its original base in military and research institutions, into elementary and high schools, as well as into public libraries and the commercial sector.

Why do people want to be "on the Internet?" One of the main reasons is simple freedom. The Internet is a rare example of a true, modern, functional anarchy. There is no "Internet Inc." There are no official censors, no bosses, no board of directors, no stockholders. In principle, any node can speak as a peer to any other node, as long as it obeys the rules of the TCP/IP protocols, which are strictly technical, not social or political. (There has been some struggle over commercial use of the Internet, but that situation is changing as businesses supply their own links.)

The Internet is also a bargain. The Internet as a whole, unlike the phone system, doesn't charge for long-distance service. And unlike most commercial computer networks, it doesn't charge for access time, either. In fact the "Internet" itself, which doesn't even officially exist as an entity, never "charges" for anything. Each group of people accessing the Internet is responsible for their own machine and their own section of line.

The Internet's "anarchy" may seem strange or even unnatural, but it makes a certain deep and basic sense. It's rather like the "anarchy" of the English language. Nobody rents English, and nobody owns English. As an English-speaking person, it's up to you to learn how to speak English properly and make whatever use you please of it (though the government provides certain subsidies to help you learn to read and write a bit). Otherwise, everybody just sort of pitches in, and somehow the thing evolves on its own, and somehow turns out workable. And interesting. Fascinating, even. Though a lot of people earn their living from using and exploiting and teaching English, "English" as an institution is public property, a public good. Much the same goes for the Internet. Would English be improved if "The English Language, Inc." had a board of directors and a chief executive officer, or a President and a Congress? There'd probably be a lot fewer new words in English, and a lot fewer new ideas.

People on the Internet feel much the same way about their own institution. It's an institution that resists institutionalization. The Internet belongs to everyone and no one.

Still, its various interest groups all have a claim. Business people want the Internet put on a sounder financial footing. Government people want the Internet more fully regulated. Academics want it dedicated exclusively to scholarly research. Military people want it spy-proof and secure. And so on and so on.

All these sources of conflict remain in a stumbling balance today, and the Internet, so far, remains in a thrivingly anarchical condition. Once upon a time, the NSFnet's high-speed, high-capacity lines were known as the "Internet Backbone," and their owners could rather lord it over the rest of the Internet; but today there are "backbones" in Canada, Japan, and Europe, and even privately owned commercial Internet backbones specially created for carrying business traffic. Today, even privately owned desktop computers can become Internet nodes. You can carry one under your arm. Soon, perhaps, on your wrist.

But what does one *do* with the Internet? Four things, basically: mail, discussion groups, long-distance computing, and file transfers.

Internet mail is "e-mail," electronic mail, faster by several orders of magnitude than the US Mail, which is scornfully known by Internet regulars as "snailmail." Internet mail is somewhat like fax. It's electronic text. But you don't have to pay for it (at least not directly), and it's global in scope. E-mail can also send software and certain forms of compressed digital imagery. New forms of mail are in the works.

The discussion groups, or "newsgroups," are a world of their own. This world of news, debate and argument is generally known as "USENET." USENET is, in point of fact, quite different from the Internet. USENET is rather like an enormous billowing crowd of gossipy, news-hungry people, wandering in and through the Internet on their way to various private backyard barbecues. USENET is not so much a physical network as a set of social conventions. In any case, at the moment there are some 2,500 separate newsgroups on USENET, and their discussions generate about 7 million words of typed commentary every single day. Naturally there is a vast amount of talk about computers on USENET, but the variety of subjects discussed is enormous, and it's growing larger all the time. USENET also distributes various free electronic journals and publications.

Both netnews and e-mail are very widely available, even outside the high-speed core of the Internet itself. News and e-mail are easily available over common phone-lines, from Internet fringe-realms like BITnet, UUCP and Fidonet. The last two Internet services, long-distance computing and file transfer, require what is known as "direct Internet access" -- using TCP/IP.

Long-distance computing was an original inspiration for ARPANET and is still a very useful service, at least for some. Programmers can maintain accounts on distant, powerful computers, run programs there or write their own. Scientists can make use of powerful supercomputers a continent away. Libraries offer their electronic card catalogs for free search. Enormous CD-ROM catalogs are increasingly available through this service. And there are fantastic amounts of free software available.

File transfers allow Internet users to access remote machines and retrieve programs or text. Many Internet computers -- some two thousand of them, so far -- allow any person to access them anonymously, and to simply copy their public files, free of charge. This is no small deal, since entire books can be transferred through direct Internet access in a matter of minutes. Today, in 1992, there are over a million such public files available to anyone who asks for them (and many more millions of files are available to people with accounts). Internet file-transfers are becoming a new form of publishing, in which the reader simply electronically copies the work on demand, in any quantity he or she wants, for free. New Internet programs, such as "archie," "gopher," and "WAIS," have been developed to catalog and explore these enormous archives of material.
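
Anonymous file transfer is simple enough to sketch with Python's standard ftplib module. The host name below is a placeholder, not a real archive; substitute any machine that accepts anonymous logins.

    from ftplib import FTP

    ftp = FTP("ftp.example.org")     # hypothetical archive host
    ftp.login()                      # no arguments: anonymous login
    ftp.retrlines("LIST")            # print the directory of public files
    with open("README", "wb") as f:  # copy one of them, free of charge
        ftp.retrbinary("RETR README", f.write)
    ftp.quit()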

The headless, anarchic, million-limbed Internet is spreading like bread-mold. Any computer of sufficient power is a potential spore for the Internet, and today such computers sell for less than $2,000 and are in the hands of people all over the world. ARPA's network, designed to assure control of a ravaged society after a nuclear holocaust, has been superseded by its mutant child the Internet, which is thoroughly out of control, and spreading exponentially through the post-Cold War electronic global village. The spread of the Internet in the 90s resembles the spread of personal computing in the 1970s, though it is even faster and perhaps more important. More important, perhaps, because it may give those personal computers a means of cheap, easy storage and access that is truly planetary in scale.

The future of the Internet bids fair to be bigger and exponentially faster. Commercialization of the Internet is a very hot topic today, with every manner of wild new commercial information-service promised. The federal government, pleased with an unsought success, is also still very much in the act. NREN, the National Research and Education Network, was approved by the US Congress in fall 1991, as a five-year, $2 billion project to upgrade the Internet "backbone." NREN will be some fifty times faster than the fastest network available today, allowing the electronic transfer of the entire Encyclopedia Britannica in one hot second. Computer networks worldwide will feature 3-D animated graphics, radio and cellular phone-links to portable computers, as well as fax, voice, and high-definition television. A multimedia global circus!
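
A rough sanity check of that "one hot second" claim, on my own assumptions (a 45-megabit-per-second T3 line as today's fastest backbone, and some three hundred megabytes for the Britannica's text):

    \[
      50 \times 45~\mathrm{Mbit/s} \;\approx\; 2.3~\mathrm{Gbit/s} \;\approx\; 280~\mathrm{MB/s},
      \qquad
      \frac{300~\mathrm{MB}}{280~\mathrm{MB/s}} \;\approx\; 1~\mathrm{second}
    \]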

Or so it's hoped -- and planned. The real Internet of the future may bear very little resemblance to today's plans. Planning has never seemed to have much to do with the seething, fungal development of the Internet. After all, today's Internet bears little resemblance to those original grim plans for RAND's post-holocaust command grid. It's a fine and happy irony.

How does one get access to the Internet? Well -- if you don't have a computer and a modem, get one. Your computer can act as a terminal, and you can use an ordinary telephone line to connect to an Internet-linked machine. These slower and simpler adjuncts to the Internet can provide you with the netnews discussion groups and your own e-mail address. These are services worth having -- though if you only have mail and news, you're not actually "on the Internet" proper.

If you're on a campus, your university may have direct "dedicated access" to high-speed Internet TCP/IP lines. Apply for an Internet account on a dedicated campus machine, and you may be able to get those hot-dog long-distance computing and file-transfer functions. Some cities, such as Cleveland, supply "freenet" community access. Businesses increasingly have Internet access, and are willing to sell it to subscribers. The standard fee is about $40 a month -- about the same as TV cable service.

As the Nineties proceed, finding a link to the Internet will become much cheaper and easier. Its ease of use will also improve, which is fine news, for the savage UNIX interface of TCP/IP leaves plenty of room for advancements in user-friendliness. Learning the Internet now, or at least learning about it, is wise. By the turn of the century, "network literacy," like "computer literacy" before it, will be forcing itself into the very texture of your life.

For Further Reading:

The Whole Internet Catalog & User's Guide by Ed Krol (1992) O'Reilly and Associates, Inc. A clear, non-jargonized introduction to the intimidating business of network literacy. Many computer-documentation manuals attempt to be funny. Mr. Krol's book is *actually* funny.

The Matrix: Computer Networks and Conferencing Systems Worldwide by John Quarterman (1990) Digital Press: Bedford, MA. Massive and highly technical compendium detailing the mind-boggling scope and complexity of our newly networked planet.

The Internet Companion by Tracy LaQuey with Jeanne C. Ryer (1992) Addison Wesley. Evangelical etiquette guide to the Internet featuring anecdotal tales of life-changing Internet experiences. Foreword by Senator Al Gore.

Zen and the Art of the Internet: A Beginner's Guide by Brendan P. Kehoe (1992) Prentice Hall. Brief but useful Internet guide with plenty of good advice on useful machines to paw over for data. Mr. Kehoe's guide bears the singularly wonderful distinction of being available in electronic form free of charge. I'm doing the same with all my F&SF Science articles, including, of course, this one. My own Internet address is [email protected].

Bruce Sterling

[email protected]

Literary Freeware -- Not for Commercial Use

From THE MAGAZINE OF FANTASY AND SCIENCE FICTION, April 1993.

F&SF, Box 56, Cornwall CT 06753 $26/yr USA $31/yr other

F&SF Science Column #6:

"Magnetic Vision"

Here on my desk I have something that can only be described as miraculous. It's a big cardboard envelope with nine thick sheets of black plastic inside, and on these sheets are pictures of my own brain. These images are "MRI scans" -- magnetic resonance imagery from a medical scanner.

These are magnetic windows into the lightless realm inside my skull. The meat, bone, and various gristles within my head glow gently in crisp black-and-white detail. There's little of the foggy ghostliness one sees with, say, dental x-rays. Held up against a bright light, or placed on a diagnostic light table, the dark plastic sheets reveal veins, arteries, various odd fluid-stuffed ventricles, and the spongey wrinkles of my cerebellum. In various shots, I can see the pulp within my own teeth, the roots of my tongue, the boney caverns of my sinuses, and the nicely spherical jellies that are my two eyeballs. I can see that the human brain really does come in two lobes and in three sections, and that it has gray matter and white matter. The brain is a big whopping gland, basically, and it fills my skull just like the meat of a walnut.

It's an odd experience to look long and hard at one's own brain. Though it's quite a privilege to witness this, it's also a form of narcissism without much historical parallel. Frankly, I don't think I ever really believed in my own brain until I saw these images. At least, I never truly comprehended my brain as a tangible physical organ, like a knuckle or a kneecap. And yet here is the evidence, laid out irrefutably before me, pixel by monochrome pixel, in a large variety of angles and in exquisite detail. And I'm told that my brain is quite healthy and perfectly normal -- anatomically at least. (For a science fiction writer this news is something of a letdown.)

The discovery of X-rays in 1895, by Wilhelm Roentgen, led to the first technology that made human flesh transparent. Nowadays, X-rays can pierce the body through many different angles to produce a graphic three-dimensional image. This 3-D technique, "Computerized Axial Tomography" or the CAT-scan, won a Nobel Prize in 1979 for its originators, Godfrey Hounsfield and Allan Cormack.

Sonography uses ultrasound to study human tissue through its reflection of high-frequency vibration: sonography is a sonic window.

Magnetic resonance imaging, however, is a more sophisticated window yet. It is rivalled only by the lesser-known and still rather experimental PET-scan, or Positron Emission Tomography. PET-scanning requires an injection of radioactive isotopes into the body so that their decay can be tracked within human tissues. Magnetic resonance, though it is sometimes known as Nuclear Magnetic Resonance, does not involve radioactivity.

The phenomenon of "nuclear magnetic resonance" was discovered in 1946 by Edward Purcell of Harvard, and Felix Bloch of Stanford. Purcell and Bloch were working separately, but published their findings within a month of one another. In 1952, Purcell and Bloch won a joint Nobel Prize for their discovery.

If an atom has an odd number of protons or neutrons, it will have what is known as a "magnetic moment": it will spin, and its axis will tilt in a certain direction. When that tilted nucleus is put into a magnetic field, the axis of the tilt will change, and the nucleus will also wobble at a certain speed. If radio waves are then beamed at the wobbling nucleus at just the proper wavelength, they will cause the wobbling to intensify -- this is the "magnetic resonance" phenomenon. The resonant frequency is known as the Larmor frequency, and the Larmor frequencies vary for different atoms.

Hydrogen, for instance, has a Larmor frequency of 42.58 megahertz (in a magnetic field of one tesla). Hydrogen, which is a major constituent of water and of carbohydrates such as fat, is very common in the human body. If radio waves at this Larmor frequency are beamed into magnetized hydrogen atoms, the hydrogen nuclei will absorb the resonant energy until they reach a state of excitation. When the beam goes off, the hydrogen nuclei will relax again, each nucleus emitting a tiny burst of radio energy as it returns to its original state. The nuclei will also relax at slightly different rates, depending on the chemical circumstances around the hydrogen atom. Hydrogen behaves differently in different kinds of human tissue. Those relaxation bursts can be detected, and timed, and mapped.
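
As a worked example of the relation involved (the standard Larmor equation; the 1.5-tesla field below is a typical clinical strength, my assumption rather than a figure from this column):

    \[
      f \;=\; \frac{\gamma}{2\pi}\,B,
      \qquad
      \frac{\gamma_{\mathrm{H}}}{2\pi} \;\approx\; 42.58~\mathrm{MHz/T}
    \]

    \[
      f_{1.5\,\mathrm{T}} \;\approx\; 42.58~\mathrm{MHz/T} \times 1.5~\mathrm{T} \;\approx\; 63.9~\mathrm{MHz}
    \]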

The enormously powerful magnetic field within an MRI machine can permeate the human body; but the resonant Larmor frequency is beamed through the body in thin, precise slices. The resulting images are neat cross-sections through the body. Unlike X-rays, magnetic resonance doesn't ionize and possibly damage human cells. Instead, it gently coaxes information from many different types of tissue, causing them to emit tell-tale signals about their chemical makeup. Blood, fat, bones, tendons, all emit their own characteristics, which a computer then reassembles as a graphic image on a computer screen, or prints out on emulsion-coated plastic sheets.

An X-ray is a marvelous technology, and a CAT-scan more marvelous yet. But an X-ray does have limits. Bones cast shadows in X-radiation, making certain body areas opaque or difficult to read. And X-ray images are rather stark and anatomical; an X-ray image cannot even show if the patient is alive or dead. An MRI scan, on the other hand, will reveal a great deal about the composition and the health of living tissue. For instance, tumor cells handle their fluids differently than normal tissue, giving rise to a slightly different set of signals. The MRI machine itself was originally invented as a cancer detector.

After the 1946 discovery of magnetic resonance, MRI techniques were used for thirty years to study small chemical samples. However, a cancer researcher, Dr. Raymond Damadian, was the first to build an MRI machine large enough and sophisticated enough to scan an entire human body, and then produce images from that scan. Many scientists -- most of them, even -- believed and said that such a technology was decades away, or even technically impossible. Damadian had a tough, prolonged struggle to find funding for his visionary technique, and he was often dismissed as a zealot, a crackpot, or worse. Damadian's struggle and eventual triumph is entertainingly detailed in his 1985 biography, A MACHINE CALLED INDOMITABLE.

Damadian was not much helped by his bitter and public rivalry with his foremost competitor in the field, Paul Lauterbur. Lauterbur, an industrial chemist, was the first to produce an actual magnetic-resonance image, in 1973. But Damadian was the more technologically ambitious of the two. His machine, "Indomitable" (now in the Smithsonian Museum), produced the first scan of a human torso, in 1977. (As it happens, it was Damadian's own torso.) Once this proof-of-concept had been thrust before a doubting world, Damadian founded a production company, and became the father of the MRI scanner industry.

By the end of the 1980s, medical MRI scanning had become a major enterprise, and Damadian had won the National Medal of Technology, along with many other honors. As MRI machines spread worldwide, the market for CAT-scanning began to slump in comparison. Today, MRI is a two-billion dollar industry, and Dr Damadian and his company, Fonar Corporation, have reaped the fruits of success. (Some of those fruits are less sweet than others: today Damadian and Fonar Corp. are suing Hitachi and General Electric in federal court, for alleged infringement of Damadian's patents.)

MRIs are marvelous machines -- perhaps, according to critics, a

little too marvelous. The magnetic fields emitted by MRIs are extremely

strong, strong enough to tug wheelchairs across the hospital floor, to

wipe the data off the magnetic strips in credit cards, and to whip a

wrench or screwdriver out of one's grip and send it hurtling across the

room. If the patient has any metal embedded in his skin -- welders and

machinists, in particular, often do have tiny painless particles of

shrapnel in them -- then these bits of metal will be wrenched out of the

patient's flesh, producing a sharp bee-sting sensation. And in the

invisible grip of giant magnets, heart pacemakers can simply stop.

MRI machines can weigh ten, twenty, even one hundred tons.

And they're big -- the scanning cavity, in which the patient is inserted,

is about the size and shape of a sewer pipe, but the huge plastic hull

surrounding that cavity is taller than a man and longer than a plush

limo. A machine of that enormous size and weight cannot be moved

through hospital doors; instead, it has to be delivered by crane, and its

shelter constructed around it. That shelter must not have any iron

construction rods in it or beneath its floor, for obvious reasons. And yet

that floor had better be very solid indeed.

Superconductive MRIs present their own unique hazards. The

superconductive coils are supercooled with liquid helium.

Unfortunately there's an odd phenomenon known as "quenching," in

which a superconductive magnet, for reasons rather poorly understood,

will suddenly become merely-conductive. When a "quench" occurs, an

enormous amount of electrical energy suddenly flashes into heat,

which makes the liquid helium boil violently. The MRI's technicians

might be smothered or frozen by boiling helium, so it has to be vented

out the roof, requiring the installation of specialized vent-stacks.

Helium leaks, too, so it must be resupplied frequently, at considerable

expense.

The MRI complex also requires expensive graphic-processing

computers, CRT screens, and photographic hard-copy devices. Some

scanners feature elaborate telecommunications equipment. Like the

giant scanners themselves, all these associated machines require

power-surge protectors, line conditioners, and backup power supplies.

Fluorescent lights, which produce radio-frequency noise pollution, are

forbidden around MRIs. MRIs are also very bothered by passing CB

radios, paging systems, and ambulance transmissions. It is generally

considered a good idea to sheathe the entire MRI cubicle (especially the

doors, windows, electrical wiring, and plumbing) in expensive, well-

grounded sheet-copper.

Despite all these drawbacks, the United States today rejoices in

possession of some two thousand MRI machines. (There are hundreds in

other countries as well.) The cheaper models cost a solid million dollars

each; the top-of-the-line models, two million. Five million MRI scans

were performed in the United States last year, at prices ranging from

six hundred dollars to twice that and more.

In other words, in 1991 alone, Americans sank some five billion

dollars in health care costs into the miraculous MRI technology.
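
That five-billion-dollar figure follows directly from the column's own numbers. A back-of-the-envelope check in Python, using only the scan count and price range quoted above:

    # Rough check of annual U.S. MRI spending, from the figures in the text:
    # five million scans a year, at roughly $600 to $1,200 apiece.
    scans_per_year = 5_000_000
    low_price, high_price = 600, 1_200

    low_total = scans_per_year * low_price     # $3.0 billion
    high_total = scans_per_year * high_price   # $6.0 billion
    print(f"${low_total / 1e9:.1f}B to ${high_total / 1e9:.1f}B per year")
    # The midpoint lands near the column's "some five billion dollars."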

Today America's hospitals and diagnostic clinics are in an MRI

arms race. Manufacturers constantly push new and improved machines

into the market, and other hospitals feel a dire need to stay with the

state-of-the-art. They have little choice in any case, for the balky,

temperamental MRI scanners wear out in six years or less, even when

treated with the best of care.

Patients have little reason to refuse an MRI test, since insurance

will generally cover the cost. MRIs are especially good for testing for

neurological conditions, and since a lot of complaints, even quite minor

ones, might conceivably be neurological, a great many MRI scans are

performed. The tests aren't painful, and they're not considered risky.

Having one's tissues briefly magnetized is considered far less risky than

the fairly gross ionization damage caused by X-rays. The most common

form of MRI discomfort is simple claustrophobia. MRIs are as narrow as

the grave, and also very loud, with sharp mechanical clacking and

buzzing.

But the results are marvels to behold, and MRIs have clearly

saved many lives. And the tests will eliminate some potential risks to

the patient, and put the physician on surer ground with his diagnosis.

So why not just go ahead and take the test?

MRIs have gone ahead boldly. Unfortunately, miracles rarely

come cheap. Today the United States spends thirteen percent of its Gross

National Product on health care, and health insurance costs are

drastically outstripping the rate of inflation.

High-tech, high-cost resources such as MRIs generally go to

the well-to-do and the well-insured. This practice has sad

repercussions. While some lives are saved by technological miracles --

and this is a fine thing -- other lives are lost, that might have been

rescued by fairly cheap and common public-health measures, such as

better nutrition, better sanitation, or better prenatal care. As advanced

nations go, the United States has a rather low general life expectancy, and a

quite bad infant-death rate; conspicuously worse, for instance, than

Italy, Japan, Germany, France, and Canada.

MRI may be a true example of a technology genuinely ahead of

its time. It may be that the genius, grit, and determination of Raymond

Damadian brought into the 1980s a machine that might have been better

suited to the technical milieu of the 2010s. What MRI really requires for

everyday workability is some cheap, simple, durable, powerful

superconductors. Those are simply not available today, though they

would seem to be just over the technological horizon. In the meantime,

we have built thousands of magnetic windows into the body that will do

more or less what CAT-scan x-rays can do already. And though they do

it better, more safely, and more gently than x-rays can, they also do it

at a vastly higher price.

Damadian himself envisioned MRIs as a cheap mass-produced

technology. "In ten to fifteen years," he is quoted as saying in 1985,

"we'll be able to step into a booth -- they'll be in shopping malls or

department stores -- put a quarter in it, and in a minute it'll say you

need some Vitamin A, you have some bone disease over here, your blood

pressure is a touch high, and keep a watch on that cholesterol." A

thorough medical checkup for twenty-five cents in 1995! If one needed

proof that Raymond Damadian was a true visionary, one could find it

here.

Damadian even envisioned a truly advanced MRI machine

capable of not only detecting cancer, but of killing cancerous cells

outright. These machines would excite not hydrogen atoms, but

phosphorus atoms, common in cancer-damaged DNA. Damadian

speculated that certain Larmor frequencies in phosphorus might be

specific to cancerous tissue; if that were the case, then it might be

possible to pump enough energy into those phosphorus nuclei so that

they actually shivered loose from the cancer cell's DNA, destroying the

cancer cell's ability to function, and eventually killing it.

That's an amazing thought -- a science-fictional vision right out

of the Gernsback Continuum. Step inside the booth -- drop a quarter --

and have your incipient cancer not only diagnosed, but painlessly

obliterated by invisible Magnetic Healing Rays.

Who the heck could believe a visionary scenario like that?

Some things are unbelievable until you see them with your own

eyes. Until the vision is sitting right there in front of you. Where it

can no longer be denied that they're possible.

A vision like the inside of your own brain, for instance.

Bruce Sterling

[email protected]

LITERARY FREEWARE: NOT FOR COMMERCIAL USE

From THE MAGAZINE OF FANTASY AND SCIENCE FICTION, June 1993.

F&SF, Box 56, Cornwall CT 06753 $26/yr USA $31/yr other

F&SF Science Column #7:

SUPERGLUE

This is the Golden Age of Glue.

For thousands of years, humanity got by with natural glues like

pitch, resin, wax, and blood; products of hoof and hide and treesap

and tar. But during the past century, and especially during the past

thirty years, there has been a silent revolution in adhesion.

This stealthy yet steady technological improvement has been

difficult to fully comprehend, for glue is a humble stuff, and the

better it works, the harder it is to notice. Nevertheless, much of the

basic character of our everyday environment is now due to advanced

adhesion chemistry.

Many popular artifacts from the pre-glue epoch look clunky

and almost Victorian today. These creations relied on bolts, nuts,

rivets, pins, staples, nails, screws, stitches, straps, bevels, knobs, and

bent flaps of tin. No more. The popular demand for consumer

objects ever lighter, smaller, cheaper, faster and sleeker has led to

great changes in the design of everyday things.

Glue determines much of the difference between our

grandparents' shoes, with their sturdy leather soles, elaborate

stitching, and cobbler's nails, and the eerie-looking modern jogging-

shoe with its laminated plastic soles, fabric uppers and sleek foam

inlays. Glue also makes much of the difference between the big

family radio cabinet of the 1940s and the sleek black hand-sized

clamshell of a modern Sony Walkman.

Glue holds this very magazine together. And if you happen to

be reading this article off a computer (as you well may), then you

are even more indebted to glue; modern microelectronic assembly

would be impossible without it.

Glue dominates the modern packaging industry. Glue also has

a strong presence in automobiles, aerospace, electronics, dentistry,

medicine, and household appliances of all kinds. Glue infiltrates

grocery bags, envelopes, books, magazines, labels, paper cups, and

cardboard boxes; there are five different kinds of glue in a common

filtered cigarette. Glue lurks invisibly in the structure of our

shelters, in ceramic tiling, carpets, counter tops, gutters, wall siding,

ceiling panels and floor linoleum. It's in furniture, cooking utensils,

and cosmetics. This galaxy of applications doesn't even count the

vast modern spooling mileage of adhesive tapes: package tape,

industrial tape, surgical tape, masking tape, electrical tape, duct tape,

plumbing tape, and much, much more.

Glue is a major industry and has been growing at

twice the rate of GNP for many years, as adhesives leak and stick

into areas formerly dominated by other fasteners. Glues also create

new markets all their own, such as Post-it Notes (premiered in

April 1980, and now omnipresent in over 350 varieties).

The global glue industry is estimated to produce about twelve

billion pounds of adhesives every year. Adhesion is a $13 billion

market in which every major national economy has a stake. The

adhesives industry has its own specialty magazines, such as

Adhesives Age and SAMPE Journal; its own trade groups, like the

Adhesives Manufacturers Association, The Adhesion Society, and the

Adhesives and Sealant Council; and its own seminars, workshops and

technical conferences. Adhesives corporations like 3M, National

Starch, Eastman Kodak, Sumitomo, and Henkel are among the world's

most potent technical industries.

Given all this, it's amazing how little is definitively known

about how glue actually works -- the actual science of adhesion.

There are quite good industrial rules-of-thumb for creating glues;

industrial technicians can now combine all kinds of arcane

ingredients to design glues with well-defined specifications:

qualities such as shear strength, green strength, tack, electrical

conductivity, transparency, and impact resistance. But when it

comes to actually describing why glue is sticky, it's a different

matter, and a far from simple one.

A good glue has low surface tension; it spreads rapidly and

thoroughly, so that it will wet the entire surface of the substrate.

Good wetting is a key to strong adhesive bonds; bad wetting leads

to problems like "starved joints," and crannies full of trapped air,

moisture, or other atmospheric contaminants, which can weaken the

bond.

But it is not enough just to wet a surface thoroughly; if that

were the case, then water would be a glue. Liquid glue changes

form; it cures, creating a solid interface between surfaces that

becomes a permanent bond.

The exact nature of that bond is pretty much anybody's guess.

There are no less than four major physico-chemical theories about

what makes things stick: mechanical theory, adsorption theory,

electrostatic theory and diffusion theory. Perhaps molecular strands

of glue become physically tangled and hooked around irregularities

in the surface, seeping into microscopic pores and cracks. Or, glue

molecules may be attracted by covalent bonds, or acid-base

interactions, or exotic van der Waals forces and London dispersion

forces, which have to do with arcane dipolar resonances between

electrically imbalanced molecules. Diffusion theorists favor the

idea that glue actually blends into the top few hundred molecules of

the contact surface.

Different glues and different substrates have very different

chemical constituents. It's likely that all of these processes may have

something to do with the nature of what we call "stickiness" -- that

everybody's right, only in different ways and under different

circumstances.

In 1989 the National Science Foundation formally established

the Center for Polymeric Adhesives and Composites. This Center's

charter is to establish "a coherent philosophy and systematic

methodology for the creation of new and advanced polymeric

adhesives" -- in other words, to bring genuine detailed scientific

understanding to a process hitherto dominated by industrial rules of

thumb. The Center has been inventing new adhesion test methods

involving vacuum ovens, interferometers, and infrared microscopes,

and is establishing computer models of the adhesion process. The

Center's corporate sponsors -- Amoco, Boeing, DuPont, Exxon,

Hoechst Celanese, IBM, Monsanto, Philips, and Shell, to name a few of

them -- are wishing them all the best.

We can study the basics of glue through examining one typical

candidate. Let's examine one well-known superstar of modern

adhesion: that wondrous and well-nigh legendary substance known

as "superglue." Superglue, which also travels under the aliases of

SuperBonder, Permabond, Pronto, Black Max, Alpha Ace, Krazy Glue

and (in Mexico) Kola Loka, is known to chemists as cyanoacrylate

(C5H5NO2).

Cyanoacrylate was first discovered in 1942 in a search for

materials to make clear plastic gunsights for the Second World War.

The American researchers quickly rejected cyanoacrylate because

the wretched stuff stuck to everything and made a horrible mess. In

1951, cyanoacrylate was rediscovered by Eastman Kodak researchers

Harry Coover and Fred Joyner, who ruined a perfectly useful

refractometer with it -- and then recognized its true potential.

Cyanoacrylate became known as Eastman compound #910. Eastman

910 first captured the popular imagination in 1958, when Dr Coover

appeared on the "I've Got a Secret" TV game show and lifted host

Gary Moore off the floor with a single drop of the stuff.

This stunt still makes very good television and cyanoacrylate

now has a yearly commercial market of $325 million.

Cyanoacrylate is an especially lovely and appealing glue,

because it is (relatively) nontoxic, very fast-acting, extremely strong,

needs no other mixer or catalyst, sticks with a gentle touch, and does

not require any fancy industrial gizmos such as ovens, presses, vises,

clamps, or autoclaves. Actually, cyanoacrylate does require a

chemical trigger to cause it to set, but with amazing convenience, that

trigger is the hydroxyl ions in common water. And under natural

atmospheric conditions, a thin layer of water is naturally present on

almost any surface one might want to glue.

Cyanoacrylate is a "thermosetting adhesive," which means that

(unlike sealing wax, pitch, and other "hot melt" adhesives) it cannot

be heated and softened repeatedly. As it cures and sets,

cyanoacrylate becomes permanently crosslinked, forming a tough

and permanent polymer plastic.

In its natural state in its native Superglue tube from the

convenience store, a molecule of cyanoacrylate looks something like

this:

      CN
     /
CH2=C
     \
      COOR

The R is a variable (an "alkyl group") which slightly changes

the character of the molecule; cyanoacrylate is commercially

available in ethyl, methyl, isopropyl, allyl, butyl, isobutyl,

methoxyethyl, and ethoxyethyl cyanoacrylate esters. These

chemical variants have slightly different setting properties and

degrees of gooiness.

After setting or "ionic polymerization," however, Superglue

looks something like this:

     CN     CN     CN
     |      |      |
- CH2C -(CH2C)-(CH2C)-   (etc. etc. etc.)
     |      |      |
   COOR   COOR   COOR

The single cyanoacrylate "monomer" joins up like a series of

plastic popper-beads, becoming a long chain. Within the thickening

liquid glue, these growing chains whip about through Brownian

motion, a process technically known as "reptation," named after the

crawling of snakes. As the reptating molecules thrash, then wriggle,

then finally merely twitch, the once-thin, now-viscous liquid becomes

a tough mass of fossilized, interpenetrating plastic molecular

spaghetti.

And it is strong. Even pure cyanoacrylate can lift a ton with a

single square-inch bond, and one advanced elastomer-modified '80s

mix, "Black Max" from Loctite Corporation, can go up to 3,100 pounds.

This is enough strength to rip the surface right off most substrates.

Unless it's made of chrome steel, the object you're gluing will likely

give up the ghost well before a properly anchored layer of Superglue

will.
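
Those figures are tensile strengths, quoted per square inch of bonded area, so the load a joint can hold scales directly with how much surface is glued. A small Python sketch using the text's numbers; the drop-sized bond area is an illustrative assumption of mine:

    # Tensile load a cyanoacrylate bond can hold: strength (psi) times area.
    PLAIN_CYANOACRYLATE_PSI = 2_000   # "a ton with a single square-inch bond"
    BLACK_MAX_PSI = 3_100             # the elastomer-modified mix cited above

    def load_lbs(strength_psi, bond_area_sq_in):
        """Pounds of tension a bond of the given area can hold."""
        return strength_psi * bond_area_sq_in

    # Even an assumed quarter-inch-square dab (1/16 square inch) is strong:
    print(load_lbs(BLACK_MAX_PSI, 1 / 16))   # ~194 pounds -- a grown adult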

Superglue quickly found industrial uses in automotive trim,

phonograph needle cartridges, video cassettes, transformer

laminations, circuit boards, and sporting goods. But early superglues

had definite drawbacks. The stuff dispersed so easily that it

sometimes escaped as vapor, precipitating as a white film on surfaces

where it wasn't needed; this is known as "blooming." Though

extremely strong under tension, superglue was not very good at

sudden lateral shocks or "shear forces," which could cause the glue-

bond to snap. Moisture weakened it, especially on metal-to-metal

bonds, and prolonged exposure to heat would cook all the strength

out of it.

The stuff also coagulated inside the tube with annoying speed,

turning into a useless and frustrating plastic lump that no amount of

squeezing or pin-poking could budge -- until the tube burst and

the thin slippery gush cemented one's fingers, hair, and desk in a

mummified membrane that only acetone could cut.

Today, however, through a quiet process of incremental

improvement, superglue has become more potent and more useful

than ever. Modern superglues are packaged with stabilizers and

thickeners and catalysts and gels, improving heat capacity, reducing

brittleness, improving resistance to damp and acids and alkalis.

Today the wicked stuff is basically getting into everything.

Including people. In Europe, superglue is routinely used in

surgery, actually gluing human flesh and viscera to replace sutures

and hemostats. And Superglue is quite an old hand at attaching fake

fingernails -- a practice that has sometimes had grisly consequences

when the tiny clear superglue bottle is mistaken for a bottle of

eyedrops. (I haven't the heart to detail the consequences of this

mishap, but if you're not squeamish you might try consulting The

Journal of the American Medical Association, May 2, 1990 v263 n17

p2301).

Superglue is potent and almost magical stuff, the champion of

popular glues and, in its own quiet way, something of an historical

advent. There is something pleasantly marvelous, almost Arabian

Nights-like, about a drop of liquid that can lift a ton; and yet one can

buy the stuff anywhere today, and it's cheap. There are many urban

legends about terrible things done with superglue; car-doors locked

forever, parking meters welded into useless lumps, and various tales

of sexual vengeance that are little better than elaborate dirty jokes.

There are also persistent rumors of real-life superglue muggings, in

which victims are attached spreadeagled to cars or plate-glass

windows, while their glue-wielding assailants rifle their pockets at

leisure and then stroll off, leaving the victim helplessly immobilized.

While superglue crime is hard to document, there is no

question about its real-life use for law enforcement. The detection

of fingerprints has been revolutionized with special kits of fuming

ethyl-gel cyanoacrylate. The fumes from a ripped-open foil packet of

chemically smoking superglue will settle and cure on the skin oils

left in human fingerprints, turning the smear into a visible solid

object. Thanks to superglue, the lightest touch on a weapon can

become a lump of plastic guilt, cementing the perpetrator to his

crime in a permanent bond.

And surely it would be simple justice if the world's first

convicted superglue mugger were apprehended in just this way.

Bruce Sterling

[email protected]

LITERARY FREEWARE -- NOT FOR COMMERCIAL USE

From THE MAGAZINE OF FANTASY AND SCIENCE FICTION, August 1993.

F&SF, Box 56, Cornwall CT 06753 $26/yr USA $31/yr other

F&SF Science Column #8:

"Creation Science"

In the beginning, all geologists and biologists were creationists.

This was only natural. In the early days of the Western scientific

tradition, the Bible was by far the most impressive and potent source

of historical and scientific knowledge.

The very first Book of the Bible, Genesis, directly treated

matters of deep geological import. Genesis presented a detailed

account of God's creation of the natural world, including the sea, the

sky, land, plants, animals and mankind, from utter nothingness.

Genesis also supplied a detailed account of a second event of

enormous import to geologists: a universal Deluge.

Theology was queen of sciences, and geology was one humble

aspect of "natural theology." The investigation of rocks and the

structure of the landscape was a pious act, meant to reveal the full

glory and intricacy of God's design. Many of the foremost geologists

of the 18th and 19th century were theologians: William Buckland,

John Pye Smith, John Fleming, Adam Sedgwick. Charles Darwin

himself was a one-time divinity student.

Eventually the study of rocks and fossils, meant to complement

the Biblical record, began to contradict it. There were published

rumblings of discontent with the Genesis account as early as the

1730s, but real trouble began with the formidable and direct

challenges of Lyell's uniformitarian theory of geology and his disciple

Darwin's evolution theory in biology. The painstaking evidence

heaped in Lyell's *Principles of Geology* and Darwin's *Origin of

Species* caused enormous controversy, but eventually carried the

day in the scientific community.

But convincing the scientific community was far from the end

of the matter. For "creation science," this was only the beginning.

Most Americans today are "creationists" in the strict sense of

that term. Polls indicate that over 90 percent of Americans believe

that the universe exists because God created it. A Gallup poll in

1991 established that a full 47 percent of the American populace

further believes that God directly created humankind, in the present

human form, less than ten thousand years ago.

So "creationism" is not the view of an extremist minority in our

society -- quite the contrary. The real minority are the fewer than

five percent of Americans who are strictly non-creationist. Rejecting

divine intervention entirely leaves one with few solid or comforting

answers, which perhaps accounts for this view's unpopularity.

Science offers no explanation whatever as to why the universe exists.

It would appear that something went bang in a major fashion about

fifteen billion years ago, but the scientific evidence for that -- the

three-degree background radiation, the Hubble constant and so forth

-- does not at all suggest *why* such an event should have happened

in the first place.

One doesn't necessarily have to invoke divine will to explain

the origin of the universe. One might speculate, for instance, that

the reason there is Something instead of Nothing is because "Nothing

is inherently unstable" and Nothingness simply exploded. There's

little scientific evidence to support such a speculation, however, and

few people in our society are that radically anti-theistic. The

commonest view of the origin of the cosmos is "theistic creationism,"

the belief that the Cosmos is the product of a divine supernatural

action at the beginning of time.

The creationist debate, therefore, has not generally been

between strictly natural processes and strictly supernatural ones, but

over *how much* supernaturalism or naturalism one is willing to

admit into one's worldview.

How does one deal successfully with the dissonance between

the word of God and the evidence in the physical world? Or the

struggle, as Stephen Jay Gould puts it, between the Rock of Ages and

the age of rocks?

Let us assume, as a given, that the Bible as we know it today is

divinely inspired and that there are no mistranslations, errors,

ellipses, or deceptions within the text. Let us further assume that

the account in Genesis is entirely factual and not metaphorical, poetic

or mythical.

Genesis says that the universe was created in six days. This

divine process followed a well-defined schedule.

Day 1. God created a dark, formless void of deep waters, then

created light and separated light from darkness.

Day 2. God established the vault of Heaven over the formless watery

void.

Day 3. God created dry land amidst the waters and established

vegetation on the land.

Day 4. God created the sun, the moon, and the stars, and set them

into the vault of heaven.

Day 5. God created the fish of the sea and the fowl of the air.

Day 6. God created the beasts of the earth and created one male and

one female human being.

On Day 7, God rested.

Humanity thus began on the sixth day of creation. Mankind is

one day younger than birds, two days younger than plants, and

slightly younger than mammals. How are we to reconcile this with

scientific evidence suggesting that the earth is over 4 billion years

old and that life started as a single-celled ooze some three billion

years ago?

The first method of reconciliation is known as "gap theory."

The very first verse of Genesis declares that God created the heaven

and the earth, but God did not establish "Day" and "Night" until the

fifth verse. This suggests that there may have been an immense

span of time, perhaps eons, between the creation of matter and life,

and the beginning of the day-night cycle. Perhaps there were

multiple creations and cataclysms during this period, accounting for

the presence of oddities such as trilobites and dinosaurs, before a

standard six-day Edenic "restoration" around 4,000 BC.

"Gap theory" was favored by Biblical scholar Charles Scofield,

prominent '30s barnstorming evangelist Harry Rimmer, and well-

known modern televangelist Jimmy Swaggart, among others.

The second method of reconciliation is "day-age theory." In

this interpretation, the individual "days" of the Bible are considered

not modern twenty-four hour days, but enormous spans of time.

Day-age theorists point out that the sun was not created until Day 4,

more than halfway through the process. It's difficult to understand

how or why the Earth would have a contemporary 24-hour "day"

without a Sun. The Beginning, therefore, likely took place eons ago,

with matter created on the first "day," life emerging on the third

"day," the fossil record forming during the eons of "days" four five

and six. Humanity, however, was created directly by divine fiat and

did not "evolve" from lesser animals.

Perhaps the best-known "day-age" theorist was William

Jennings Bryan, three-time US presidential candidate and a

prominent figure in the Scopes evolution trial in 1925.

In modern creation-science, however, both gap theory and

day-age theory are in eclipse, supplanted and dominated by "flood

geology." The most vigorous and influential creation-scientists

today are flood geologists, and their views (though not the only

views in creationist doctrine), have become synonymous with the

terms "creation science" and "scientific creationism."

"Flood geology" suggests that this planet is somewhere between

6,000 and 15,000 years old. The Earth was entirely lifeless until the

six literal 24-hour days that created Eden and Adam and Eve. Adam

and Eve were the direct ancestors of all human beings. All fossils,

including so-called pre-human fossils, were created about 3,000 BC

during Noah's Flood, which submerged the entire surface of the Earth

and destroyed all air-breathing life that was not in the Ark (with the

possible exception of air-breathing mammalian sea life). Dinosaurs,

which did exist but are probably badly misinterpreted by geologists,

are only slightly older than the human race and were co-existent

with the patriarchs of the Old Testament. Actually, the Biblical

patriarchs were contemporaries with all the creatures in the fossil

record, including trilobites, pterosaurs, giant ferns, nine-foot sea

scorpions, dragonflies two feet across, tyrannosaurs, and so forth.

The world before the Deluge had a very rich ecology.

Modern flood geology creation-science is a stern and radical

school. Its advocates have not hesitated to carry the war to their

theological rivals. The best known creation-science text (among

hundreds) is probably *The Genesis Flood: The Biblical Record and

its Scientific Implications* by John C. Whitcomb and Henry M.

Morris (1961). Much of this book's argumentative energy is devoted

to demolishing gap theory, and especially, the more popular and

therefore more pernicious day-age theory.

Whitcomb and Morris point out with devastating logic that

plants, created on Day Three, could hardly have been expected to

survive for "eons" without any daylight from the Sun, created on Day

Four. Nor could plants pollinate without bees, moths and butterflies

-- winged creatures that were products of Day Five.

Whitcomb and Morris marshal a great deal of internal Biblical

testimony for the everyday, non-metaphorical, entirely real-life

existence of Adam, Eve, Eden, and Noah's Flood. Jesus Christ Himself

refers to the reality of the Flood in Luke 17, and to the reality of

Adam, Eve, and Eden in Matthew 19.

Creationists have pointed out that without Adam, there is no

Fall; with no Fall, there is no Atonement for original sin; without

Atonement, there can be no Savior. To lack faith in the historical

existence and the crucial role of Adam, therefore, is necessarily to

lack faith in the historical existence and the crucial role of Jesus.

Taken on its own terms, this is a difficult piece of reasoning to refute,

and is typical of Creation-Science analysis.

To these creation-scientists, the Bible is very much all of a

piece. To begin pridefully picking and choosing within God's Word

about what one may or may not choose to believe is to risk an utter

collapse of faith that can only result in apostasy -- "going to the

apes." These scholars are utterly and soberly determined to believe

every word of the Bible, and to use their considerable intelligence to

prove that it is the literal truth about our world and our history as a

species.

Cynics might wonder if this activity were some kind of

elaborate joke, or perhaps a wicked attempt by clever men to garner

money and fame at the expense of gullible fundamentalist

supporters. Any serious study of the lives of prominent Creationists

establishes that this is simply not so. Creation scientists are not

poseurs or hypocrites. Many have spent many patient decades in

quite humble circumstances, often enduring public ridicule, yet still

working selflessly and doggedly in the service of their beliefs.

When they state, for instance, that evolution is inspired by Satan and

leads to pornography, homosexuality, and abortion, they are entirely

in earnest. They are describing what they consider to be clear and

evident facts of life.

Creation-science is not standard, orthodox, respectable science.

There is, and always has been, a lot of debate about what qualities an

orthodox and respectable scientific effort should possess. It can be

said, though, that science has at least two basic

requirements: (A) the scientist should be willing to follow the data

where it leads, rather than bending the evidence to fit some

preconceived rationale, and (B) explanations of phenomena should

not depend on unique or nonmaterial factors. It also helps a lot if

one's theories are falsifiable, reproducible by other researchers,

openly published and openly testable, and free of obvious internal

contradictions.

Creation-science does not fit that description at all. Creation-

science considers it sheer boneheaded prejudice to eliminate

miraculous, unique explanations of world events. After all, God, a

living and omnipotent Supreme Being, is perfectly capable of

directing mere human affairs into any direction He might please. To

simply eliminate divine intervention as an explanation for

phenomena, merely in order to suit the intellectual convenience of

mortal human beings, is not only arrogant and arbitrary, but absurd.

Science has accomplished great triumphs through the use of

purely naturalistic explanations. Over many centuries, hundreds of

scientists have realized that some questions can be successfully

investigated using naturalistic techniques. Questions that cannot be

answered in this way are not science, but instead are philosophy, art,

or theology. Scientists assume as a given that we live in a natural

universe that obeys natural laws.

It's conceivable that this assumption might not be the case.

The entire cognitive structure of science hinges on this assumption of

natural law, but it might not actually be true. It's interesting to

imagine the consequences for science if there were to be an obvious,

public, irrefutable violation of natural law.

Imagine that such a violation took place in the realm of

evolutionary biology. Suppose, for instance, that tonight at midnight

Eastern Standard Time every human being on this planet suddenly

had, not ten fingers, but twelve. Suppose that all our children were

henceforth born with twelve fingers also and we now found

ourselves a twelve-fingered species. This bizarre advent would

violate Neo-Darwinian evolution, many laws of human metabolism,

the physical laws of conservation of mass and energy, and quite a

few other such. If such a thing were to actually happen, we would

simply be wrong about the basic nature of our universe. We

thought we were living in a world where evolution occurred through

slow natural processes of genetic drift, mutation, and survival of the

fittest; but we were mistaken. Where the time had come for our

species to evolve to a twelve-fingered status, we simply did it in an

instant all at once, and that was that.

This would be a shock to the scientific worldview equivalent to

the terrible shock that the Christian worldview has sustained

through geology and Darwinism. If a shock of this sort were to strike

the scientific establishment, it would not be surprising to see

scientists clinging, quite irrationally, to their naturalist principles --

despite the fact that genuine supernaturalism was literally right at

hand. Bizarre rationalizations would surely flourish -- queer

"explanations" that the sixth fingers had somehow grown there

naturally without our noticing, or perhaps that the fingers were mere

illusions and we really had only ten after all, or that we had always

had twelve fingers and that all former evidence that we had once

had ten fingers was an evil lie spread by wicked people to confuse us.

The only alternative would be to fully face the terrifying fact that a

parochial notion of "reality" had been conclusively toppled, thereby

robbing all meaning from the lives and careers of scientists.

This metaphor may be helpful in understanding why it is that

Whitcomb and Morris's *Genesis Flood* can talk quite soberly about

Noah storing dinosaurs in the Ark. They would have had to be

*young* dinosaurs, of course.... If we assume that one Biblical cubit

equals 17.5 inches, a standard measure, then the Ark (300 by 50 by 30

cubits, per Genesis 6:15) had a volume of 1,396,000 cubic feet, a

carrying capacity equal to that of 522

standard railroad stock cars. Plenty of room!
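
That arithmetic is easy to reproduce. A short Python sketch, using the 300-by-50-by-30-cubit dimensions of Genesis 6:15 and the text's 17.5-inch cubit; the per-car volume is back-derived from the 522-car figure:

    # The Ark volume calculation as the column relates it.
    CUBIT_FT = 17.5 / 12                      # one Biblical cubit, in feet

    length = 300 * CUBIT_FT                   # 437.5 ft
    width = 50 * CUBIT_FT                     # ~72.9 ft
    height = 30 * CUBIT_FT                    # 43.75 ft

    volume = length * width * height
    print(f"{volume:,.0f} cubic feet")        # ~1,395,671 -- 1,396,000 in round figures
    print(f"{volume / 2674:.0f} stock cars")  # ~522; per-car volume assumed ~2,674 cu ft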

Many other possible objections to the Ark story are met head-

on, in similar meticulous detail. Noah did not have to search the

earth for wombats, pangolins, polar bears and so on; all animals,

including the exotic and distant ones, were brought through divine

instinct to the site of the Ark for Noah's convenience. It seems

plausible that this divine intervention was, in fact, the beginning of

the migratory instinct in the animal kingdom. Similarly, hibernation

may have been created by God at this time, to keep the thousands of

animals quiet inside the Ark and also reduce the need for gigantic

animal larders that would have overtaxed Noah's crew of eight.

Evidence in the Biblical genealogies shows that pre-Deluge

patriarchs lived far longer than those after the Deluge, suggesting a

radical change in climate, and not for the better. Whitcomb and

Morris make the extent of that change clear by establishing that

before the Deluge it never rained. There had been no rainbows

before the Flood -- Genesis states clearly that the rainbow came into

existence as a sign of God's covenant with Noah. If we assume that

normal refraction of sunlight by water droplets was still working in

pre-Deluge time (as seems reasonable), then this can only mean that

rainfall did not exist before Noah. Instead, the dry earth was

replenished with a kind of ground-hugging mist (Genesis 2:6).

The waters of the Flood came from two sources: the "fountains

of the great deep" and "the windows of heaven." Flood geologists

interpret this to mean that the Flood waters were subterranean and

also present high in the atmosphere. Before they fell to Earth by

divine fiat, the Flood's waters once surrounded the entire planet in a

"vapor canopy." When the time came to destroy his Creation, God

caused the vapor canopy to fall from outer space until the entire

planet was submerged. That water is still here today; the Earth in

Noah's time was not nearly so watery as it is today, and Noah's seas

were probably much shallower than ours. The vapor canopy may

have shielded the Biblical patriarchs from harmful cosmic radiation

that has since reduced human lifespan well below Methuselah's 969

years.

The laws of physics were far different in Eden. The Second

Law of Thermodynamics likely began with Adam's Fall. The Second

Law of Thermodynamics is strong evidence that the entire Universe

has been in decline since Adam's sin. The Second Law of

Thermodynamics may well end with the return of Jesus Christ.

Noah was a markedly heterozygous individual whose genes had

the entire complement of modern racial characteristics. It is a

fallacy to say that human embryos recapitulate our evolution as a

species. The bumps on human embryos are not actually relic gills,

nor is the "tail" on an embryo an actual tail -- it only resembles one.

Creatures cannot evolve to become more complex because this would

violate the Second Law of Thermodynamics. In our corrupt world,

creatures can only degenerate. The sedimentary rock record was

deposited by the Flood and it is all essentially the same age. The

reason the fossil record appears to show a course of evolution is

because the simpler and cruder organisms drowned first, and were

the first to sift out in the layers of rubble and mud.

Related so baldly and directly, flood geology may seem

laughable, but *The Genesis Flood* is not a silly or comic work. It is

five hundred pages long, and is every bit as sober, straightforward

and serious as, say, a college text on mechanical engineering.

*The Genesis Flood* has sold over 200,000 copies and gone

through 29 printings. It is famous all over the world. Today Henry

M. Morris, its co-author, is the head of the world's most influential

creationist body, the Institute for Creation Research in Santee,

California.

It is the business of the I.C.R. to carry out scientific research on

the physical evidence for creation. Members of the I.C.R. are

accredited scientists, with degrees from reputable mainstream

institutions. Dr. Morris himself has a Ph.D. in engineering and has

written a mainstream textbook on hydraulics. The I.C.R.'s monthly

newsletter, *Acts and Facts,* is distributed to over 100,000 people.

The Institute is supported by private donations and by income from

its frequent seminars and numerous well-received publications.

In February 1993, I called the Institute by telephone and had

an interesting chat with its public relations officer, Mr. Bill Hoesch.

Mr. Hoesch told me about two recent I.C.R. efforts in field research.

The first involves an attempt to demonstrate that lava flows at the

top and the bottom of Arizona's Grand Canyon yield incongruent

ages. If this were proved factual, it would strongly imply that the

thousands of layers of sedimentary rock in this world-famous mile-

deep canyon were in fact all deposited at the same time and that

conventional radiometric methods are, to say the least, gravely

flawed. A second I.C.R. effort seeks to demonstrate that certain ice-

cores from Greenland, which purport to show 160 thousand years of

undisturbed annual snow layers, are in fact only two thousand years

old and have been misinterpreted by mainstream scientists.

Mr. Hoesch expressed some amazement that his Institute's

efforts are poorly and privately funded, while mainstream geologists

and biologists often receive comparatively enormous federal funding.

In his opinion, if the Institute for Creation Research were to receive

equivalent funding with their rivals in uniformitarian and

evolutionary so-called science, then creation-scientists would soon be

making valuable contributions to the nation's research effort.

Other creation scientists have held that the search for oil, gas,

and mineral deposits has been confounded for years by mistaken

scientific orthodoxies. They have suggested that successful flood-

geology study would revolutionize our search for mineral resources

of all kinds.

Orthodox scientists are blinded by their naturalistic prejudices.

Carl Sagan, whom Mr. Hoesch described as a "great hypocrite," is a

case in point. Carl Sagan is helping to carry out a well-funded

search for extraterrestrial life in outer space, despite the fact that

there is no scientific evidence whatsoever for extraterrestrial

intelligence, and there is certainly no mention in the Bible of any

rival covenant with another intelligent species. Worse yet, Sagan

boasts that he could detect an ordered, intelligent signal from space

from the noise and static of mere cosmic debris. But here on earth

we have the massively ordered and intelligently designed "signal"

called DNA, and yet Sagan publicly pretends that DNA is the result of

random processes! If Sagan used the same criteria to distinguish

intelligence from chance in the study of Earth life, as he does in his

search for extraterrestrial life, then he would have to become a

Creationist!

I asked Mr Hoesch what he considered the single most

important argument that his group had to make about scientific

creationism.

"Creation versus evolution is not science versus religion," he

told me. "It's the science of one religion versus the science of

another religion."

The first religion is Christianity; the second, the so-called

religion of Secular Humanism. Creation scientists consider this

message the single most important point they can make; far more

important than so-called physical evidence or the so-called scientific

facts. Creation scientists consider themselves soldiers and moral

entrepreneurs in a battle of world-views. It is no accident, to their

mind, that American schools teach "scientific" doctrines that are

inimical to fundamentalist, Bible-centered Christianity. It is not a

question of value-neutral facts that all citizens in our society should

quietly accept; it is a question of good versus evil, of faith versus

nihilism, of decency versus animal self-indulgence, and of discipline

versus anarchy. Evolution degrades human beings from immortal

souls created in God's Image to bipedal mammals of no more moral

consequence than other apes. People who do not properly value

themselves or others will soon lose their dignity, and then their

freedom.

Science education, for its part, degrades the American school

system from a localized, community-responsible, democratic

institution teaching community values, to an amoral indoctrination-

machine run by remote and uncaring elitist mandarins from Big

Government and Big Science.

Most people in America today are creationists of a sort. Most

people in America today care little if at all about the issue of creation

and evolution. Most people don't really care much if the world is six

billion years old, or six thousand years old, because it doesn't

impinge on their daily lives. Even radical creation-scientists have

done very little to combat the teaching of evolution in higher

education -- university level or above. They are willing to let Big

Science entertain its own arcane nonsense -- as long as they and

their children are left in peace.

But when world-views collide directly, there is no peace. The

first genuine counter-attack against evolution came in the 1920s,

when high-school education suddenly became far more widely

spread. Christian parents were shocked to hear their children

openly contradicting God's Word and they felt they were losing

control of the values taught their youth. Many state legislatures in

the USA outlawed the teaching of evolution in the 1920s.

In 1925, a Dayton, Tennessee high school teacher named John

Scopes deliberately disobeyed the law and taught evolution to his

science class. Scopes was accused of a crime and tried for it, and his

case became a national cause celebre. Many people think the

"Scopes Monkey Trial" was a triumph for science education, and it

was a moral victory in a sense, for the pro-evolution side

successfully made their opponents into objects of national ridicule.

Scopes was found guilty, however, and fined. The teaching of

evolution was soft-pedalled in high-school biology and geology texts

for decades thereafter.

A second resurgence of creationist sentiment took place in the

1960s, when the advent of Sputnik forced a reassessment of

American science education. Fearful of falling behind the Soviets in

science and technology, the federal National Science Foundation

commissioned the production of state-of-the-art biology texts in

1963. These texts were fiercely resisted by local religious groups

who considered them tantamount to state-supported promotion of

atheism.

The early 1980s saw a change of tactics as fundamentalist

activists sought equal time in the classroom for creation-science -- in

other words, a formal acknowledgement from the government that

their world-view was as legitimate as that of "secular humanism."

Fierce legal struggles in 1982, 1985 and 1987 saw the defeat of this

tactic in state courts and the Supreme Court.

This legal defeat has by no means put an end to creation-

science. Creation advocates have merely gone underground, no

longer challenging the scientific authorities directly on their own

ground, or the legal ground of the courts, but concentrating on grass-

roots organization. Creation scientists find their messages received

with attention and gratitude all over the Christian world.

Creation-science may seem bizarre, but it is no more irrational

than many other brands of cult archeology that find ready adherents

everywhere. All over the USA, people believe in ancient astronauts,

the lost continents of Mu, Lemuria or Atlantis, the shroud of Turin,

the curse of King Tut. They believe in pyramid power, Velikovskian

catastrophism, psychic archeology, and dowsing for relics. They

believe that America was the cradle of the human race, and that

PreColumbian America was visited by Celts, Phoenicians, Egyptians,

Romans, and various lost tribes of Israel. In the high-tech 1990s, in

the midst of headlong scientific advance, people believe in all sorts of

odd things. People believe in crystals and telepathy and astrology

and reincarnation, in ouija boards and the evil eye and UFOs.

People don't believe these things because they are reasonable.

They believe them because these beliefs make them feel better.

They believe them because they are sick of believing in conventional

modernism with its vast corporate institutions, its secularism, its

ruthless consumerism and its unrelenting reliance on the cold

intelligence of technical expertise and instrumental rationality.

They believe these odd things because they don't trust what they are

told by their society's authority figures. They don't believe that

what is happening to our society is good for them, or in their

interests as human beings.

The clash of world views inherent in creation-science has

mostly taken place in the United States. It has been an ugly clash in

some ways, but it has rarely been violent. Western society has had a

hundred and forty years to get used to Darwin. Many of the

sternest opponents of creation-science have in fact been orthodox

American Christian theologians and church officials, wary of a

breakdown in traditional American relations of church and state.

It may be that the most determined backlash will come not

from Christian fundamentalists, but from the legions of other

fundamentalist movements now rising like deep-rooted mushrooms

around the planet: from Moslem radicals both Sunni and Shi'ite, from

Hindu groups like Vedic Truth and Hindu Nation, from militant

Sikhs, militant Theravada Buddhists, or from a formerly communist

world eager to embrace half-forgotten orthodoxies. What loyalty do

these people owe to the methods of trained investigation that made

the West powerful and rich?

Scientists believe in rationality and objectivity -- even though

rationality and objectivity are far from common human attributes,

and no human being practices these qualities flawlessly. As it

happens, the scientific enterprise in Western society currently serves

the political and economic interests of scientists as human beings.

As a social group in Western society, scientists have successfully

identified themselves with the practice of rational and objective

inquiry, but this situation need not go on indefinitely. How would

scientists themselves react if their admiration for reason came into

direct conflict with their human institutions, human community, and

human interests?

One wonders how scientists would react if truly rational, truly

objective, truly nonhuman Artificial Intelligences were winning all

the tenure, all the federal grants, and all the Nobels. Suppose that

scientists suddenly found themselves robbed of cultural authority,

their halting efforts to understand made the object of public ridicule

in comparison to the sublime efforts of a new power group --

superbly rational computers. Would the qualities of objectivity and

rationality still receive such acclaim from scientists? Perhaps we

would suddenly hear a great deal from scientists about the

transcendent values of intuition, inspiration, spiritual understanding

and deep human compassion. We might see scientists organizing to

assure that the Pursuit of Truth should slow down enough for them

to keep up. We might perhaps see scientists struggling with mixed

success to keep Artificial Intelligence out of the schoolrooms. We

might see scientists stricken with fear that their own children were

becoming strangers to them, losing all morality and humanity as they

transferred their tender young brains into cool new racks of silicon

ultra-rationality -- all in the name of progress.

But this isn't science. This is only bizarre speculation.

For Further Reading:

THE CREATIONISTS by Ronald L. Numbers (Alfred A. Knopf, 1992).

Sympathetic but unsparing history of Creationism as movement and

doctrine.

THE GENESIS FLOOD: The Biblical Record and its Scientific

Implications by John C. Whitcomb and Henry M. Morris (Presbyterian

and Reformed Publishing Company, 1961). Best-known and most

often-cited Creationist text.

MANY INFALLIBLE PROOFS: Practical and Useful Evidences of

Christianity by Henry M. Morris (CLP Publishers, 1974). Dr Morris

goes beyond flood geology to offer evidence for Christ's virgin birth,

the physical transmutation of Lot's wife into a pillar of salt, etc.

CATALOG of the Institute for Creation Research (P O Box 2667, El

Cajon, CA 92021). Free catalog listing dozens of Creationist

publications.

CULT ARCHAEOLOGY AND CREATIONISM: Understanding

Pseudoscientific Beliefs About the Past edited by Francis B. Harrold

and Raymond A. Eve (University of Iowa Press, 1987). Indignant

social scientists tie into highly nonconventional beliefs about the

past.

