Pelican books take flight again | Books | The Guardian

2. The Common Reader by Virginia Woolf (1938)
A collection of essays on literary subjects. The “common reader … differs from the critic and the scholar … He reads for his own pleasure rather than to impart knowledge.” The Second Common Reader soon followed.

3. An Outline of European Architecture by Nikolaus Pevsner (1943)
Pevsner was one of Allen Lane’s best signings (the publisher gave the green light to The Buildings of England series). Pevsner was responsible for the magisterial Pelican History of Art series; An Outline sold half a million copies.

5. The Pelican Guide to English Literature, edited by Boris Ford (from 1954)
Ford was a Leavisite, but Leavis apparently wasn’t best pleased with this spreading of the word to the masses (contributors included TS Eliot, Lionel Trilling and Geoffrey Grigson). But it was a great if always controversial success. One volume, The Age of Chaucer, alone sold 560,000 copies.

6. The Uses of Literacy by Richard Hoggart (1958)
The text on the original Pelican cover reads: “A vivid and detached analysis of the assumptions, attitudes and morals of working-class people in northern England, and the way in which magazines, films and other mass media are likely to influence them.”


As the non-fiction Penguin imprint relaunches, Paul Laity tells the story of the blue‑spined books that inspired generations of self-improvers – and transformed the publishing world.

“The really amazing thing, the extraordinary eye-opener that surprised the most optimistic of us, was the immediate and overwhelming success of the Pelicans.” So wrote Allen Lane, founder of Penguin and architect of the paperback revolution, who had transformed the publishing world by selling quality books for the price of a packet of cigarettes.

Millions of orange Penguins had already been bought when they were joined in 1937 by the pale blue non-fiction Pelicans. “Who would have imagined,” he continued, “that, even at 6d, there was a thirsty public anxious to buy thousands of copies of books on science, sociology, economics, archaeology, astronomy and other equally serious subjects?”

His instinct was not only commercially astute but democratic. The launching of the Penguins and Pelicans (“Good books cheap”) caused a huge fuss, and not simply among staid publishers: the masses were now able to buy not just pulp, but “improving”, high-calibre books – whatever next! Lane and his defenders argued that owning such books should not be the preserve of the privileged class. He had no truck with those people “who despair at what they regard as the low level of people’s intelligence”.

Lane came up with the name – so the story goes – when he heard someone who wanted to buy a Penguin at a King’s Cross station bookstall mistakenly ask for “one of those Pelican books”. He acted fast to create a new imprint. The first Pelican was George Bernard Shaw’s The Intelligent Woman’s Guide to Socialism, Capitalism, Sovietism and Fascism. “A sixpenny edition” of the book, the author modestly suggested, “would be the salvation of mankind.” Such was the demand that booksellers had to travel to the Penguin stockroom in taxis and fill them up with copies before rushing back to their shops.

It helped of course that this was a decade of national and world crisis. For Lane, the public “wanted a solid background to give some coherence to the newspaper’s scintillating confusion of day-to-day events”.

Unicode Consortium

The Unicode Consortium enables people around the world to use computers in any language. Our freely available specifications and data form the foundation for software internationalization in all major operating systems, search engines, applications, and the World Wide Web.

Fundamentally, computers just deal with numbers. They store letters and other characters by assigning a number for each one. Before Unicode was invented, there were hundreds of different encoding systems for assigning these numbers. No single encoding could contain enough characters: for example, the European Union alone requires several different encodings to cover all its languages. Even for a single language like English no single encoding was adequate for all the letters, punctuation, and technical symbols in common use.

These encoding systems also conflict with one another. That is, two encodings can use the same number for two different characters, or use different numbers for the same character. Any given computer (especially servers) needs to support many different encodings; yet whenever data is passed between different encodings or platforms, that data always runs the risk of corruption.
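The collision described above is easy to demonstrate. As an illustrative sketch (the byte values and legacy code pages chosen here are just convenient examples, not ones named by the Consortium), the same byte can decode to different characters, and the same character can map to different numbers, depending on the encoding assumed:

```python
# One byte value, two meanings: 0xE9 is "é" in Latin-1 but "й" in
# the Cyrillic code page Windows-1251.
assert b"\xe9".decode("latin-1") == "é"
assert b"\xe9".decode("cp1251") == "й"

# One character, two numbers: "é" is 0xE9 in Latin-1 but 0x82 in
# the old IBM PC code page 437.
assert "é".encode("latin-1") == b"\xe9"
assert "é".encode("cp437") == b"\x82"
```

Passing text between systems that assume different code pages is exactly how the corruption described above creeps in.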

Unicode is changing all that!

Unicode provides a unique number for every character, no matter what the platform, no matter what the program, no matter what the language.
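In Python, for instance, that unique number is the code point returned by `ord()`, and a single Unicode encoding such as UTF-8 then serializes it identically on every system (a minimal sketch):

```python
# Every character has exactly one Unicode code point, regardless of
# platform, program, or language: "é" is U+00E9.
assert ord("é") == 0xE9
assert f"U+{ord('é'):04X}" == "U+00E9"

# UTF-8 represents that same number the same way everywhere.
assert "é".encode("utf-8") == b"\xc3\xa9"

# The Cyrillic character "й" gets its own distinct number, U+0439,
# so it can never be confused with "é".
assert chr(0x0439) == "й"
```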

The Unicode Standard has been adopted by such industry leaders as Apple, HP, IBM, JustSystems, Microsoft, Oracle, SAP, Sun, Sybase, Unisys and many others. Unicode is required by modern standards such as XML, Java, ECMAScript (JavaScript), LDAP, CORBA 3.0, WML, etc., and is the official way to implement ISO/IEC 10646. It is supported in many operating systems, all modern browsers, and many other products. The emergence of the Unicode Standard, and the availability of tools supporting it, are among the most significant recent global software technology trends.

Incorporating Unicode into client-server or multi-tiered applications and websites offers significant cost savings over the use of legacy character sets. Unicode enables a single software product or a single website to be targeted across multiple platforms, languages and countries without re-engineering. It allows data to be transported through many different systems without corruption.

About the Unicode Consortium

The Unicode Consortium was founded to develop, extend and promote use of the Unicode Standard, which specifies the representation of text in modern software products and standards. The Consortium is a non-profit, 501(c)(3) charitable organization. The membership of the Consortium represents a broad spectrum of corporations and organizations in the computer and information processing industry. The Consortium is supported financially through membership dues and donations. Membership in the Unicode Consortium is open to organizations and individuals anywhere in the world who support the Unicode Standard and wish to assist in its extension and implementation. All are invited to contribute to the support of the Consortium’s important work by making a donation.

For more information, see the Glossary, Technical Introduction and Useful Resources.

Investigating the algorithms that govern our lives – Columbia Journalism Review

Just an old-school style investigative look into technology, data, algorithms and humanity.

As online users, we’ve become accustomed to the giant, invisible hands of Google, Facebook, and Amazon feeding our screens. We’re surrounded by proprietary code like Twitter Trends, Google’s autocomplete, Netflix recommendations, and OKCupid matches. It’s how the internet churns. So when Instagram or Twitter, or the Silicon Valley titan of the moment, chooses to mess with what we consider our personal lives, we’re reminded where the power actually lies. And it rankles.

While internet users may be resigned to these algorithmic overlords, journalists can’t be. Algorithms have everything journalists are hardwired to question: They’re powerful, secret, and governing essential parts of society. Algorithms decide how fast Uber gets to you, whether you’re approved for a loan, whether a prisoner gets parole, who the police should monitor, and who the TSA should frisk.

Algorithms are built to approximate the world in a way that accommodates the purposes of their architect, and “embed a series of assumptions about how the world works and how the world should work,” says Hansen.

It’s up to journalists to investigate those assumptions, and their consequences, especially where they intersect with policy. The first step is extending classic journalism skills into a nascent domain: questioning systems of power, and employing experts to unpack what we don’t know. But when it comes to algorithms that can compute what the human mind can’t, that won’t be enough. Journalists who want to report on algorithms must expand their literacy into the areas of computing and data, in order to be equipped to deal with the ever-more-complex algorithms governing our lives.

The reporting so far

Few newsrooms consider algorithms a beat of their own, but some have already begun this type of reporting.

Algorithms can generally be broken down into three parts: the data that goes in; the “black box,” or the actual algorithmic process; and the outcome, or the value that gets spit out, be it a prediction or score or price. Reporting on algorithms can be done at any of the three stages, by analyzing the data that goes in, evaluating the data that comes out, or reviewing the architecture of the algorithm itself to see how it reaches its judgments.

Currently, the majority of reporting on algorithms is done by looking at the outcomes and attempting to reverse-engineer the algorithm, applying similar techniques as are used in data journalism. The Wall Street Journal used this technique to find that Staples’ online prices were determined by the customer’s distance from a competitor’s store, leaving prices higher in rural areas. And FiveThirtyEight used the method to skewer Fandango’s movie ratings—which skewed abnormally high, rarely dipping below 3 stars—while a ProPublica analysis suggested that Uber’s surge pricing increases cost but not the supply of drivers.
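The outcome-auditing technique can be illustrated with a toy sketch. The data below is invented, and the WSJ’s actual methodology was more involved; the point is only the shape of the analysis: collect the algorithm’s outputs for varied inputs, then test whether a suspected variable explains them.

```python
# Hypothetical audit data: distance (miles) from the user to a
# competitor's store vs. the price that user was quoted.
# These numbers are invented for illustration.
observations = [
    (1.0, 14.29), (1.5, 14.29), (2.0, 14.29),    # near a competitor
    (28.0, 15.79), (30.0, 15.79), (32.0, 15.79), # far from one
]

def mean(xs):
    return sum(xs) / len(xs)

def pearson(pairs):
    """Pearson correlation between the two columns of (x, y) pairs."""
    xs, ys = zip(*pairs)
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A strong positive correlation flags distance as a variable worth
# investigating. Correlation alone doesn't prove the algorithm uses
# it, but it narrows where the reporting should dig.
r = pearson(observations)
assert r > 0.95
```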

…

Can an algorithm be racist?

“Algorithms are like a very small child,” says Suresh Venkatasubramanian. “They learn from their environment.”

Venkatasubramanian is a computer science professor at the University of Utah. He has been thinking about algorithmic fairness ever since he read a short story by Cory Doctorow published in 2006, called “Human Readable.” The story takes place in a future world, similar to ours, but in which all national infrastructure (traffic, email, the media, etc.) is run by “centralized emergent networks,” modeled after ant colonies. Or in other words: a network of algorithms. The plot revolves around two lovers: a network engineer who is certain the system is incorruptible, and a lawyer who knows it’s already been corrupted.

“It got me thinking,” says Venkatasubramanian. “What happens if we live in a world that is totally driven by algorithms?”

He’s not the only one asking that question. Algorithmic accountability is a growing discipline across a number of fields. Computer scientists, legal scholars, and policy wonks are all grappling with ways to identify or prevent bias in algorithms, along with the best ways to establish standards for accountability in business and government. A big part of the concern is whether (and how) algorithms reinforce or amplify bias against minority groups.

Algorithmic accountability builds on the existing body of law and policy aimed at combating discrimination in housing, employment, admissions, and the like, and applies the notion of disparate impact, which looks at the impact of a policy on protected classes rather than its intention. What that means for algorithms is that an algorithm doesn’t have to be intentionally racist to have racist consequences.

Algorithms can be especially susceptible to perpetuating bias for two reasons. First, algorithms can encode human bias, whether intentionally or otherwise. This happens by using historical data or classifiers that reflect bias (such as labeling gay households separately, etc.). This is especially true for machine-learning algorithms that learn from users’ input. For example, researchers at Carnegie Mellon University found that women were receiving ads for lower-paying jobs on Google’s ad network but weren’t sure why. It was possible, they wrote, that if more women tended to click on lower-paying ads, the algorithm would learn from that behavior, continuing the pattern.

Second, algorithms have some inherently unfair design tics—many of which are laid out in a Medium post, “How big data is unfair.” The author points out that since algorithms look for patterns, and minorities by definition don’t fit the same patterns as the majority, the results will be different for members of the minority group. And if the overall success rate of the algorithm is pretty high, it might not be noticeable that the people it isn’t working for all belong to a similar group.
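That failure mode is easy to show numerically. As a hypothetical sketch (all labels and predictions here are invented), a classifier can look accurate in aggregate while failing for nearly everyone in a small group:

```python
# (group, true_label, predicted_label) for a toy population of 100:
# 90 majority-group records the model mostly gets right, and
# 10 minority-group records it mostly gets wrong.
records = (
    [("majority", 1, 1)] * 86 + [("majority", 1, 0)] * 4 +
    [("minority", 1, 1)] * 2 + [("minority", 1, 0)] * 8
)

def accuracy(rows):
    """Fraction of rows where the prediction matches the true label."""
    return sum(t == p for _, t, p in rows) / len(rows)

overall = accuracy(records)
minority = accuracy([r for r in records if r[0] == "minority"])

assert overall == 0.88   # looks respectable in aggregate...
assert minority == 0.2   # ...but the minority group sees 20% accuracy
```

Unless someone breaks the results down by group, the 88% headline number hides the 20%.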

To rectify this, Venkatasubramanian, along with several colleagues, wrote a paper on how computer scientists can test for bias mathematically while designing algorithms, the same way they’d check for accuracy or error rates in other data projects. He’s also building a tool for non-computer scientists, based on the same statistical principles, which scores uploaded data with a “fairness measure.” Although the tool can’t check if an algorithm itself is fair, it can at least make sure the data you’re feeding it is. Most algorithms learn from input data, Venkatasubramanian explains, so that’s the first place to check for bias.
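Venkatasubramanian’s own metric and tool aren’t reproduced here, but the statistical idea behind disparate-impact testing is often stated as the “four-fifths rule”: compare selection rates across groups and flag ratios below 0.8. A minimal sketch, with invented data and a hypothetical helper name:

```python
# Hypothetical loan-approval outcomes per group: (approved, total).
outcomes = {"group_a": (80, 100), "group_b": (40, 100)}

def disparate_impact_ratio(outcomes, protected, reference):
    """Selection rate of the protected group divided by that of the
    reference group; values below 0.8 fail the four-fifths rule."""
    pa, pt = outcomes[protected]
    ra, rt = outcomes[reference]
    return (pa / pt) / (ra / rt)

ratio = disparate_impact_ratio(outcomes, "group_b", "group_a")
assert ratio == 0.5   # 40% approval vs. 80% approval
assert ratio < 0.8    # flags the data for closer scrutiny
```

A failing ratio doesn’t prove discrimination on its own, but like the journalist’s outcome audits above, it tells you where to look.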

Much of the reporting on algorithms thus far has focused on their impact on marginalized groups. ProPublica’s story on The Princeton Review, called “The Tiger-Mom Tax,” found that Asian families were almost twice as likely to be quoted the highest of three possible prices for an SAT tutoring course, and that income alone didn’t account for the pricing scheme. A team of journalism students at the University of Maryland, meanwhile, found that Uber wait times were longer in non-white areas in DC.

Bias is also one of the biggest concerns with predictive policing software like PredPol, which helps police allocate resources by identifying patterns in past crime data and predicting where a crime is likely to happen. The major question, says Maurice Chammah, a journalist at The Marshall Project who reported on predictive policing, is whether it will just lead to more policing for minorities. “There was a worry that if you just took the data on arrests and put it into an algorithm,” he says, “the algorithm would keep sending you back to minority communities.”

rafat.org • The End of Scale

You could sense it when ebooks and ereaders peaked in 2015. You could even sense it when we all found out no one was using millions of dedicated apps beyond the handful that they really use on a daily basis. Hell, you could sense it when iPad didn’t turn out to be the savior of media, which now feels like eons ago.

Who were we trying to fool?

Therein comes the biggest lie in all this, now exposed: There is no secret sauce in media.

There is no outside savior coming to the rescue.

It is all you. The value you build with your editorial. The value you can create by being focused on doing a few things very, very well.

The relationship you build with your dedicated users, direct, tangible and non-disposable. Creating and holding to your own core while everyone else runs themselves to exhaustion. By stepping away from the churn.

By creating unique residents, not unique visitors. By creating something people want to come to, deliberately, again and again, and stay. Now that’s a freakin’ novel idea, isn’t it?

Internet Society | Internet Issues, Technology, Standards, Policy, Leadership – Global_Internet_Report_2014.pdf

More than two decades ago, the Internet Society was formed to support the open development, evolution, and use of the Internet for the benefit of all mankind. Over the years, we have pursued that task with pride. We continue to be driven by the hope and promise of the benefits the Internet can bring to everyone.

In doing so, the Internet Society has fostered a diverse and truly global community. Internet Society Chapters and members represent the people of the world and the many and varied ways they use the Internet to enrich their own lives and those of their peers. They use the Internet to create communities, to open new economic possibilities, to improve lives, and to participate in the world. We are inspired by their stories of innovation, creativity, and collaboration.

Thanks to the Internet’s own success, we are now in an increasingly complex era where the stakes are much higher than before, and potential threats to the Internet’s core principles loom larger. To protect your ability to use the Internet for your needs – to keep it open and sustainable – we must do more to measure impacts and present the strengths of the open Internet model in more compelling ways, to convince policy makers, influencers, and the general public of the importance of our mission.

To this end, I am pleased to launch this, the first in an annual series of Global Internet Reports. With this report, the Internet Society introduces a new level of integrated analysis, measurement, and reporting to Internet governance discussions at all levels.

The Global Internet Reports will become a showcase of topics that are at the heart of the Internet Society’s work about the future of the Internet, weaving together the many threads of the diverse multistakeholder Internet community.

The Program Era – bookforum.com / current issue

A cultural history looks at how word processing changed the way we write.

I can’t remember the last time I used an electric typewriter. It most likely would have been in the course of typing out an address on an envelope—but then again, I can’t readily call to mind the last time I did that with anything other than that old-fashioned technology, the ballpoint pen, which itself is not really all that old school.

The mass commercial distribution of the ballpoint pen in the United States dates only to about 1945, which means its triumphal appearance in the writing market occurred just under twenty years before that of the Magnetic Tape Selectric Typewriter, IBM’s radically rethought typewriting device. Released in 1964, the MT/ST was the first machine of its kind, equipped with a magnetic-tape memory component that allowed you to edit text before it was actually printed on the page. Corporations were considered the primary beneficiaries of the new technology, a wrinkle on the electric typewriter that arrived with considerable media enthusiasm.

The makers of the MT/ST saw the contemporary office groaning under the weight of metastasizing paperwork and envisioned making money off companies hoping to streamline the costs of secretarial labor and increase productivity. Writers were something of an afterthought: Whatever effect IBM’s product would have on authors—high or low, commercial or experimental—was collateral.

But if the introduction of a new type of word-processing machine started a slow-burning revolution in how writers went about their business, it was a revolution nonetheless, drastically altering how authors did their work. The primary focus of Matthew G. Kirschenbaum’s new history of word processing, Track Changes, is a twenty-year span, from the moment that IBM brought out the MT/ST until 1984, when the Apple Macintosh first offered a glimpse of an unchained future with its televised appeal to a nation of would-be Winston Smiths. (As with Bobby Thomson’s home run, millions still claim they remember exactly where they were when they saw Ridley Scott’s celebrated commercial for Apple during the third quarter of an otherwise forgettable Los Angeles Raiders Super Bowl blowout.)

The word-processing program that the new Mac included, MacWrite, was fairly primitive—it couldn’t handle documents longer than eight pages, a boon only to the briefest of short-story writers—and it would take years for the mousy point-and-click innovations to knock off the market Goliaths of WordStar and WordPerfect. But the timing of Apple’s campaign couldn’t have been better. “In 1978 or 1979,” Kirschenbaum notes, “writers using a word processor or a personal computer were part of the vanguard. By 1983 or 1984 they were merely part of the zeitgeist.”

As Kirschenbaum’s history reminds us, the story of personal computers supplanting older systems dedicated to word processing—and writers’ larger commitment to abandoning pens and ink and typewriter ribbons and correction fluid—was hardly the fait accompli that we sometimes think it was. His book attempts a full literary history of this shift. To do so, he ranges across a number of phenomena: the technical and managerial prehistories of the word-processing revolution; the imaginative, sometimes allegorical literary responses to how work was managed (from Stanislaw Lem’s 1971 “U-Write-It,” which fantasized a fully automated literary production line, to John Updike’s 1983 poem “INVALID.KEYSTROKE,” a sort of ode to the little dot that appeared on the screen between words in early word processors like his own Wangwriter II); and most prominently, how word processing both tapped into and reflected writers’ anxieties about their whole enterprise.

The last didn’t appear with the first wizardly word processor or dazzling software program, and it hasn’t gone away. What Kirschenbaum doesn’t do is reflect on how the “program era” affected authors’ sentence structure, book length, and the like. Track Changes is less concerned with big data than with bit-by-bit change.