Why Flash Drives Are Still Everywhere – The Atlantic

At different moments, different unremarkable technical objects seem to evoke the same feeling: that one can’t have too many. These days, the things that seem to turn up all over the place—lurking in the pockets of different bags, filling drawers and junk boxes, dropped down the back of desks—are USB flash drives.

They’re everywhere. There is almost certainly one within ten feet of you right now. I seem to acquire them unceasingly—they’re handed out as promotional tchotchkes, used to distribute meeting minutes and conference proceedings, and presented in all sorts of shapes, sizes, and configurations. They have become inescapable elements of the contemporary technological landscape.

One irony, given how overwhelming the speedy pace of technological advancement can feel, is how primitive the technology on which USB flash drives rely actually is. The challenge posed by the flash drive is to work seamlessly and easily with every computer. Its solution is a technology known as the “FAT filesystem,” a system—named for its primary data structure, the File Allocation Table—that was developed to manage storage on early floppy disks. Pretty much the simplest imaginable mechanism for representing data on a disk, it was quickly developed and deployed in Microsoft’s almost-ubiquitous BASIC programming system in 1977.
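
To make this concrete, here is a minimal sketch, in Python, of the data structure the system is named for. The cluster numbers and end-of-chain marker are invented for illustration; real FAT variants use reserved values (for instance 0xFFF in FAT12) to mark the end of a file’s chain.

# The disk is split into fixed-size clusters, and the File Allocation
# Table holds one entry per cluster, naming the next cluster in a file's
# chain. Reading a file means following that chain.

END_OF_CHAIN = None

# fat[cluster] -> the next cluster of the same file, or END_OF_CHAIN
fat = {
    2: 5,              # a file starts at cluster 2...
    5: 6,              # ...continues at cluster 5...
    6: END_OF_CHAIN,   # ...and ends at cluster 6
    3: END_OF_CHAIN,   # a second, one-cluster file
}

def file_clusters(first_cluster):
    """Walk the chain from a file's first cluster to its last."""
    cluster = first_cluster
    while cluster is not END_OF_CHAIN:
        yield cluster
        cluster = fat[cluster]

print(list(file_clusters(2)))  # [2, 5, 6]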

Although FAT has long since been displaced by more advanced technologies, those successors have frequently incorporated a version of it into their DNA. Some version of that same FAT filesystem has lived on, locked away inside the more advanced systems that allow for the use of today’s much larger, speedier storage technologies. When people rely upon the FAT filesystem, they’re plugging into an evolutionary throwback, like some kind of vestigial tail. It’s the lizard brain of your computer.

The flash drive exposes the great lie of technological progress, which is the idea that things are ever really left behind. It’s not just that an obsolete technology from the year of Saturday Night Fever still lurks unseen in the dank corners of a shiny new MacBook; it’s that it is relied upon regularly. The technology historian Thomas Hughes called these kinds of devices “reverse salients”—things that interrupt and disturb the forward movement of technology. They reveal the ugly truth that lies behind each slick new presentation from Google, Apple, or Microsoft: Technical systems are cobbled together from leftover pieces, digital Frankenstein’s monsters in which spare parts are awkwardly sutured together and pressed into service. It turns out that the emblems of the technological future are much more awkwardly bound to the past than it’s comfortable to admit.

mols on Twitter: “The term ‘Web Engineer’ intrigues. ‘Engineer’ implies: Scientific method, creation, invention, systems, social apps, design, visualization.”

Digital pioneer Jaron Lanier on the dangers of “free” online culture

Jaron Lanier, a keynote speaker at the WIPO Conference on the Global Digital Content Market from April 20 to 22, 2016, is a Silicon Valley insider, a virtual reality pioneer and one of the most celebrated technology writers in the world. But he is increasingly concerned about today’s online universe. He explains why and what it will take to turn things around.

What are your main concerns about the digital market today?

We have seen an implosion of careers and career opportunities for those who have devoted their lives to cultural expression, but we create a cultural mythology that this hasn’t happened. Like gamblers at a casino, many young people believe they may be the one to make it on YouTube, Kickstarter or some other platform. But these opportunities are rare compared to the old-fashioned middle-class jobs that existed in great numbers around things like writing, photography, recorded music and many other creative pursuits.

Economically, the digital revolution has not been such a good thing. Take the case of professional translators. Their career opportunities have been decreasing much like those of recorded musicians, journalists, authors and photographers. The decimation started with the spread of the Internet and is continuing apace. But interestingly, for professional translators the decrease is related to the rise of machine translation.

Automated translations are mash-ups of real-life translations. We scrape the translations made by real people millions of times a day to keep example databases up to date with current events and slang. Elements of these phrases are then regurgitated into usable machine translations. There is nothing wrong with that system. It’s useful, so why not? But the problem is we are not paying the people whose data we are taking to make these translations possible. Some might call this fraud.

All these systems that throw people out of work create an illusion that a machine is doing the work, but in reality they are actually taking data from people – we call it big data – to make the work possible. If we found a way to start paying people for their actual valuable contributions to these big computer resources, we could avoid the employment crisis that otherwise we will create.
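
To make the mechanism concrete, here is a toy sketch in Python (not any real system’s code; the phrase pairs, contributor names, and per-use rate are all invented). The “machine translation” below simply recombines translations people already made, and credits each contributor whenever their example is reused.

from collections import Counter

# (source phrase, human translation, human contributor) - invented examples
examples = [
    ("good morning", "bonjour", "alice"),
    ("my friend", "mon ami", "bob"),
    ("thank you", "merci", "carol"),
]

RATE_PER_USE = 0.001  # hypothetical micro-payment per reused example
royalties = Counter()

def translate(sentence):
    """Recombine stored human translations, crediting each contributor
    whose example is reused."""
    out = sentence.lower()
    for source, target, contributor in examples:
        if source in out:
            out = out.replace(source, target)
            royalties[contributor] += RATE_PER_USE
    return out

print(translate("Good morning, my friend"))  # bonjour, mon ami
print(dict(royalties))                       # {'alice': 0.001, 'bob': 0.001}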

So what needs to be done to ensure a sustainable digital economy?

The obvious starting point is to pay people for information that is valuable and that comes from them. I don’t claim to have all the answers, but the basics are simple and I am sure it can be done.

Some sort of imposed socialist system where everybody is the same would be ruinous. We should expect some degree of variation. But right now a handful of people – those inheriting traditional monopolies like oil and those running the increasingly powerful big computer networks – have a giant chunk of the world’s wealth, and it’s having a destabilizing impact. An oil monopoly might control the oil, but it won’t take over everything in your life; information does, especially with greater automation.

If we expect computers to pilot cars and operate factories, the employment that is left should be the creative stuff, the expression, the IP. But if we undermine that, we are creating an employment crisis of mass proportions.

That’s where IP comes in. The general principle that we pay people for their information and contributions is critical if we want people to live with dignity as machines get better.

But IP needs to be made much more sophisticated and granular. It needs to be something that benefits everybody – as commonplace as having pennies in your pocket.

It is the only future that gives people dignity as the machines get better.

How would you like to see the digital landscape evolve?

I would like to see more systems where ordinary people can get paid when they contribute value to digital networks; systems that improve their lives and expand the overall economy.

Economic stability occurs when you have a bell curve, with a few super-rich people and a few poor people but most people somewhere in the middle. At present, we have a winner-takes-all situation where a few do really well and everybody else falls into a sea of wannabes who never quite make it. That’s not sustainable.

You are supporting the Conference on the Global Digital Content Market that WIPO is hosting. Why is that?

IP is a crucial thread in designing a humane future with dignity. Not everybody can be a Zuckerberg or run a tech company, but everybody could – or at least a critically large number of people could – benefit from IP.

IP offers a path to the future that will bring dignity and livelihood to large numbers of people. This is our best shot at it.

Who are your heroes and why?

There are many, but they include:

  • J.M. Keynes, who was the first person to think about how to really manage an information system.
  • E.M. Forster, for The Machine Stops, written in 1907, which foresees our era with a very critical eye.
  • Alan Turing, who stayed a kind person even as he was tortured to death.
  • Mary Shelley, who was a keen observer of people and how they can confuse themselves with technology.

And of course my friend Ted Nelson. He invented the digital media link and was perhaps the most formative figure in the development of online culture. He proposed that instead of copying digital media, we should keep one copy of each cultural expression on a digital network and pay the author of that expression an affordable amount whenever it is accessed. In this way, anyone could earn a living from their creative work.
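
A minimal sketch of that scheme, in Python, with an invented work, author, and fee: the network keeps a single canonical copy of each expression, and every access credits its author.

# One canonical copy per work; each access pays the author a small fee.
# The work, author, and amounts here are invented for illustration.
works = {
    "essay-001": {"author": "ted", "text": "Everything is deeply intertwingled.", "fee": 0.02},
}
balances = {"ted": 0.0}

def access(work_id):
    """Serve the single stored copy and credit its author the access fee."""
    work = works[work_id]
    balances[work["author"]] += work["fee"]
    return work["text"]

print(access("essay-001"))  # the one stored copy, never duplicated
print(balances)             # {'ted': 0.02}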

What is your next book about?

Dawn of the New Everything: First Encounters with Reality and Virtual Reality is a memoir and an introduction to virtual reality. It will be out soon.

Pelican books take flight again | Books | The Guardian

The fashionability of Pelicans, which lasted at least into the 70s, was connected to this breaking open of radical new ideas to public understanding – not in academic jargon but in clearly expressed prose. But it was also because they looked so good. The first Pelicans were, like the Penguins, beneficiaries of the 30s passion for design. They had the iconic triband covers conceived by Edward Young – in Lane’s words, “a bright splash of flat colour” with a white band running horizontally across the centre for displaying author and title in Gill Sans. A pelican appeared flying on the cover and standing on the spine. After the war, Lane employed as a designer the incomparable Jan Tschichold, a one-time associate of the Bauhaus known for his Weimar film posters. His Pelicans had a central white panel framed by a blue border containing the name of the imprint on each side.

In the 60s the books changed again, to the illustrative covers designed by Germano Facetti, art director from 1961 to 72. Facetti, a survivor of Mauthausen labour camp who had worked in Milan as a typographer and in Paris as an interior designer, transformed the Penguin image, as John Walsh has written, “from linear severity and puritanical simplicity into a series of pictorial coups”. The 60s covers by Facetti (eg The Stagnant Society by Michael Shanks), and by the designers he took on – Jock Kinneir (eg Alex Comfort’s Sex in Society), Derek Birdsall (eg The Naked Society) – are ingenious, arresting invitations to a world of new thinking.

Jenny Diski has written of subscribing in the 60s to “the unofficial University of Pelican Books course”, which was all about “gathering information and ideas about the world. Month by month, titles came out by Laing and Esterson, Willmott and Young, JK Galbraith, Maynard Smith, Martin Gardner, Richard Leakey, Margaret Mead; psychoanalysts, sociologists, economists, mathematicians, historians, physicists, biologists and literary critics, each offering their latest thinking for an unspecialised public, and the blue spines on the pile of books on the floor of the bedsit increased.”

“If you weren’t at university studying a particular discipline (and even if you were),” she goes on, “Pelican books were the way to get the gist of things, and education seemed like a capacious bag into which all manner of information was thrown, without the slightest concern about where it belonged in the taxonomy of knowledge. Anti-psychiatry, social welfare, economics, politics, the sexual behaviour of young Melanesians, the history of science, the anatomy of this, that and the other, the affluent, naked and stagnant society in which we found ourselves.”

Owen Hatherley has described the Pelicans of the late 60s as “human emancipation through mass production … hot-off-the-press accounts of the ‘new French revolution’ would go alongside texts on scientific management, with Herbert Marcuse next to Fanon, next to AJP Taylor, and all of this conflicting and intoxicating information in a pocket-sized form, on cheap paper and with impeccably elegant modernist covers.”

The Program Era – bookforum.com / current issue

A cultural history looks at how word processing changed the way we write.

I can’t remember the last time I used an electric typewriter. It most likely would have been in the course of typing out an address on an envelope—but then again, I can’t readily call to mind the last time I did that with anything other than that old-fashioned technology, the ballpoint pen, which itself is not really all that old school.

The mass commercial distribution of the ballpoint pen in the United States dates only to about 1945, which means its triumphal appearance in the writing market occurred just under twenty years before that of the Magnetic Tape Selectric Typewriter, IBM’s radically rethought typewriting device. Released in 1964, the MT/ST was the first machine of its kind, equipped with a magnetic-tape memory component that allowed you to edit text before it was actually printed on the page. Corporations were considered the primary beneficiaries of the new technology, a wrinkle on the electric typewriter that arrived with considerable media enthusiasm.

The makers of the MT/ST saw the contemporary office groaning under the weight of metastasizing paperwork and envisioned making money off companies hoping to streamline the costs of secretarial labor and increase productivity. Writers were something of an afterthought: Whatever effect IBM’s product would have on authors—high or low, commercial or experimental—was collateral.

But if the introduction of a new type of word-processing machine started a slow-burning revolution in how writers went about their business, it was a revolution nonetheless, drastically altering how authors did their work. The primary focus of Matthew G. Kirschenbaum’s new history of word processing, Track Changes, is a twenty-year span, from the moment that IBM brought out the MT/ST until 1984, when the Apple Macintosh first offered a glimpse of an unchained future with its televised appeal to a nation of would-be Winston Smiths. (As with Bobby Thomson’s home run, millions still claim they remember exactly where they were when they saw Ridley Scott’s celebrated commercial for Apple during the third quarter of an otherwise forgettable Los Angeles Raiders Super Bowl blowout.)

The word-processing program that the new Mac included, MacWrite, was fairly primitive—it couldn’t handle documents longer than eight pages, a boon only to the briefest of short-story writers—and it would take years for the mousy point-and-click innovations to knock off the market Goliaths of WordStar and WordPerfect. But the timing of Apple’s campaign couldn’t have been better. “In 1978 or 1979,” Kirschenbaum notes, “writers using a word processor or a personal computer were part of the vanguard. By 1983 or 1984 they were merely part of the zeitgeist.”

As Kirschenbaum’s history reminds us, the story of personal computers supplanting older systems dedicated to word processing—and writers’ larger commitment to abandoning pens and ink and typewriter ribbons and correction fluid—was hardly the fait accompli that we sometimes think it was. His book attempts a full literary history of this shift. To do so, he ranges across a number of phenomena: the technical and managerial prehistories of the word-processing revolution; the imaginative, sometimes allegorical literary responses to how work was managed (from Stanislaw Lem’s 1971 “U-Write-It,” which fantasized a fully automated literary production line, to John Updike’s 1983 poem “INVALID.KEYSTROKE,” a sort of ode to the little dot that appeared on the screen between words in early word processors like his own Wangwriter II); and most prominently, how word processing both tapped into and reflected writers’ anxieties about their whole enterprise.

Those anxieties didn’t appear with the first wizardly word processor or dazzling software program, and they haven’t gone away. What Kirschenbaum doesn’t do is reflect on how the “program era” affected authors’ sentence structure, book length, and the like. Track Changes is less concerned with big data than with bit-by-bit change.

Merriam-Webster Unabridged

Merriam-Webster Unabridged is the largest, most comprehensive American dictionary currently available in print or online. It is built on the solid foundation of Webster’s Third New International Dictionary, Unabridged, and is the best source of current information about the English language.

We are actively engaged in creating an entirely new edition of the Unabridged, and new and revised entries and usage content will be added to the site on a continuing basis.

Merriam-Webster Unabridged includes rich, clear definitions and more usage information than ever before. Definitions have been enhanced with over 123,000 author quotations. Supplementary notes provide additional context, and usage paragraphs offer clear guidance and suggestions for words with confused or disputed usage. Dates of first known use are being added, and editorial style changes are being made throughout the dictionary to make entries more readable and easier to understand.

The True Story of the Backward Index (Video) | Merriam-Webster

There it sits, hidden in plain view on a set of shelves in the basement of the Merriam-Webster offices: the Backward Index. But why would anyone type out 315,000 words spelled in reverse?