IBM 704 – Wikipedia, the free encyclopedia

The programming languages FORTRAN[5] and LISP[6] were first developed for the 704.

MUSIC, the first computer music program, was developed on the IBM 704 by Max Mathews.

In 1962, physicist John Larry Kelly, Jr. created one of the most famous moments in the history of Bell Labs by using an IBM 704 computer to synthesize speech. Kelly's vocoder (voice recorder synthesizer) recreated the song Daisy Bell, with musical accompaniment from Max Mathews. Arthur C. Clarke was coincidentally visiting friend and colleague John Pierce at the Bell Labs Murray Hill facility at the time of this speech synthesis demonstration, and Clarke was so impressed that six years later he used it in the climactic scene of his novel and screenplay for 2001: A Space Odyssey,[7] where the HAL 9000 computer sings the same song.[8]

Edward O. Thorp, a math instructor at MIT, used the IBM 704 as a research tool to investigate the probabilities of winning while developing his blackjack gaming theory.[9][10] He used FORTRAN to formulate the equations of his research model.

The IBM 704 was used as the official tracker for the Smithsonian Astrophysical Observatory's Operation Moonwatch in the fall of 1957. See The M.I.T. Computation Center and Operation Moonwatch. IBM provided four staff scientists to aid Smithsonian Astrophysical Observatory scientists and mathematicians in the calculation of satellite orbits: Dr. Giampiero Rossoni, Dr. John Greenstadt, Thomas Apple and Richard Hatch.

Intro To Computational Linguistics

Machine Translation

At the end of the 1950s, researchers in the United States, Russia, and Western Europe were confident that high-quality machine translation (MT) of scientific and technical documents would be possible within a very few years. After the promise had remained unrealized for a decade, the National Academy of Sciences of the United States published the much cited but little read report of its Automatic Language Processing Advisory Committee. The ALPAC Report recommended that the resources being expended on MT as a solution to immediate practical problems should be redirected towards more fundamental questions of language processing that would have to be answered before any translation machine could be built. The number of laboratories working in the field was sharply reduced all over the world, and few of them were able to obtain funding for more long-range research programs in what then came to be known as computational linguistics.

There was a resurgence of interest in machine translation in the 1980s and, although the approaches adopted differed little from those of the 1960s, many of the efforts, notably in Japan, were rapidly deemed successful. This seems to have had less to do with advances in linguistics and software technology or with the greater size and speed of computers than with a better appreciation of special situations where ingenuity might make a limited success of rudimentary MT. The most conspicuous example was the METEO system, developed at the University of Montreal, which has long provided the French translations of the weather reports used by airlines, shipping companies, and others. Some manufacturers of machinery have found it possible to translate maintenance manuals used within their organizations (not by their customers) largely automatically by having the technical writers use only certain words and only in carefully prescribed ways.

Why Machine Translation Is Hard

Many factors contribute to the difficulty of machine translation, including words with multiple meanings, sentences with multiple grammatical structures, uncertainty about what a pronoun refers to, and other problems of grammar. But two common misunderstandings make translation seem altogether simpler than it is. First, translation is not primarily a linguistic operation, and second, translation is not an operation that preserves meaning.

There is a famous example that makes the first point well. Consider the sentence:

The police refused the students a permit because they feared violence.

Suppose that it is to be translated into a language like French in which the word for 'police' is feminine. Presumably the pronoun that translates 'they' will also have to be feminine. Now replace the word 'feared' with 'advocated'. Suddenly it seems that 'they' refers to the students and not to the police, and if the word for students is masculine, it will require a different translation. The knowledge required to reach these conclusions has nothing linguistic about it. It has to do with everyday facts about students, police, violence, and the kinds of relationships we have seen these things enter into.
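To see how little of this knowledge is grammatical, consider a toy resolver in Perl (the language used for this class's exercises). Everything in it is invented world knowledge: the verb table and its two entries are hand-coded for illustration, and no syntactic analysis is involved.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Toy world knowledge: for each verb, which party plausibly does it
    # in the refused-permit scenario. Entirely hand-built; nothing here
    # is derived from the grammar of the sentence.
    my %plausible_subject = (
        feared    => 'police',    # authorities fear violence
        advocated => 'students',  # petitioners advocate it
    );

    sub resolve_they {
        my ($verb) = @_;
        return $plausible_subject{$verb} // 'unknown';
    }

    for my $verb (qw(feared advocated)) {
        print "... because they $verb violence.\n";
        print "  'they' -> the ", resolve_they($verb), "\n";
    }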

The second point is, of course, closely related. Consider the following question, stated in French: Où voulez-vous que je me mette? It means literally, "Where do you want me to put myself?" but it is a very natural translation for a whole family of English questions of the form "Where do you want me to sit/stand/sign my name/park/tie up my boat?" In most situations, the English "Where do you want me?" would be acceptable, but it is natural and routine to add or delete information in order to produce a fluent translation. Sometimes it cannot be avoided because there are languages like French, in which pronouns must show number and gender; Japanese, where pronouns are often omitted altogether; Russian, where there are no articles; Chinese, where nouns do not differentiate singular and plural nor verbs present and past; and German, where the flexibility of word order can leave uncertainties about what is the subject and what is the object.

The Structure of Machine Translation Systems

While there have been many variants, most MT systems, and certainly those that have found practical application, have parts that can be named for the chapters in a linguistics textbook. They have lexical, morphological, syntactic, and possibly semantic components, one set for each of the two languages, for treating basic words, complex words, sentences, and meanings. Each feeds into the next until the last one in the chain produces a very abstract representation of the sentence.

There is also a ‘transfer’ component, the only one that is specialized for a particular pair of languages, which converts the most abstract source representation that can be achieved into a corresponding abstract target representation. The target sentence is produced from this essentially by reversing the analysis process. Some systems make use of a so-called ‘interlingua’ or intermediate language, in which case the transfer stage is divided into two steps, one translating a source sentence into the interlingua and the other translating the result of this into an abstract representation in the target language.
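This division of labor can be sketched in a few lines of Perl (the language of this class's exercises). The sketch is a skeleton under invented assumptions: each stage is a stub, and a three-word lexicon stands in for the full transfer component.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Skeleton of a transfer-based MT pipeline; every stage is a stub
    # standing in for a full lexical/morphological/syntactic component.
    sub analyze {        # source sentence -> abstract source representation
        my ($sentence) = @_;
        return [ map { lc } split /\s+/, $sentence ];   # stub: lowercase tokens
    }

    sub transfer {       # the only language-pair-specific component
        my ($source_rep) = @_;
        my %lexicon = ( the => 'le', cat => 'chat', sleeps => 'dort' );  # toy English-French
        return [ map { $lexicon{$_} // $_ } @$source_rep ];
    }

    sub generate {       # abstract target representation -> target sentence
        my ($target_rep) = @_;
        return join ' ', @$target_rep;
    }

    print generate( transfer( analyze('The cat sleeps') ) ), "\n";   # prints "le chat dort"

An interlingua design would simply split transfer into two such steps, one into and one out of the intermediate representation.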

One other problem for computers is dealing with metaphor. Metaphors are a common part of language and occur frequently in the computer world:

  • How can I kill the program?
  • How do I get back into dos?
  • My car drinks gasoline

One approach treats metaphor as a failure of regular semantic rules:

  • Compute the normal meaning of get into: dos violates its selection restrictions.
  • dos isn't an enclosure, so the interpreter fails.
  • Next, search for an unconventional meaning for get into and recompute the meaning.
  • If an unconventional meaning isn't available, try using context or world knowledge.

Statistical procedures aren't likely to generate interpretations for new metaphors.
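A minimal Perl sketch of this rule-failure approach, with an invented two-entry lexicon: the literal reading of get into requires an enclosure as its object, and when that selection restriction fails, the interpreter falls back to a stored unconventional sense.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Tiny hand-built lexicon for the verb "get into".
    my %is_enclosure   = ( car => 1, room => 1, box => 1 );          # literal objects
    my %unconventional = ( 'get into' => 'start or resume using' );  # stored metaphorical sense

    sub interpret {
        my ( $verb, $object ) = @_;
        if ( $is_enclosure{$object} ) {
            return "literal: physically enter the $object";
        }
        # Selection restriction violated: the object is not an enclosure,
        # so search for an unconventional meaning and recompute.
        if ( my $sense = $unconventional{$verb} ) {
            return "metaphorical: $sense $object";
        }
        return "no stored sense: appeal to context or world knowledge";
    }

    print interpret( 'get into', 'car' ), "\n";   # literal
    print interpret( 'get into', 'dos' ), "\n";   # metaphorical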


Interpretation routines might result in overgeneralizations:

How can I kill dos? —> *How can I give birth to dos?

*How can I slay dos?

Mary caught a cold from John —> *John threw Mary his cold.

Catching a cold is unintentional (as opposed to catching a thief).

Getting Started

The best way to learn about language processing is to write your own computer programs. To do this, you will need access to a computer connected to the internet; anyone with an email account on a personal computer has this kind of access. The exercises in this class are written for the Perl programming language. Perl is widely available on mainframe computers and makes it relatively easy to manipulate strings of text. To use Perl on a mainframe computer, however, you will have to access the computer directly via a terminal emulation program.

The only other item you will need for Perl programming is a text editor, which provides a means of writing the commands that make up a Perl program. Mainframe computers typically include such editors; the University of Kansas mainframe offers the Pico and vi editors. Once you have assembled these basic tools for creating Perl programs, you are ready to begin language processing.
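A first program along these lines might look as follows. Save it from your editor under a name such as words.pl (the filename is just an example) and run it with perl words.pl.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # A first language-processing program: read a line of text,
    # split it into words, and report a count.
    print "Type a sentence: ";
    my $sentence = <STDIN>;
    chomp $sentence;

    my @words = split /\s+/, $sentence;
    print "You typed ", scalar @words, " words: ", join( ', ', @words ), "\n";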

Intro To Computational Linguistics

The image of humans conversing with their computers is both a thoroughly accepted cliché of science fiction and the ultimate goal of computer programming, and yet the year 2001 has come and gone without the appearance of anything like the HAL 9000 talking computer featured in the movie 2001: A Space Odyssey.

Computational linguists attempt to use computers to process human languages. The field of computational linguistics has two general aims:

  • The technological. To enable computers to analyze and process natural language.
  • The psychological. To model human language processing on computers.

From the technological perspective, natural language applications include:

  • Speech recognition. Today, many personal computers include speech recognition software.
  • Natural language interfaces to software. For example, demonstration systems have been built that let a user ask for flight information.

Examples:

  • chatterbots, e.g., Alice (a pattern-matching sketch follows below)
  • natural language understanding, e.g., a Perl parser
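A chatterbot in the Alice family is, at heart, pattern matching. The following is a toy sketch, not Alice's actual rule format: each rule pairs a regular expression with a canned response, and the first match wins.

    #!/usr/bin/perl
    use strict;
    use warnings;

    $| = 1;   # flush the prompt immediately

    # Toy ELIZA-style rules: a pattern and a response builder.
    my @rules = (
        [ qr/\bI am (.*)/i,   sub { "Why do you say you are $1?" } ],
        [ qr/\bI feel (.*)/i, sub { "How long have you felt $1?" } ],
        [ qr/\bcomputer\b/i,  sub { "Do computers worry you?" }    ],
    );

    print "> ";
    while ( my $input = <STDIN> ) {
        chomp $input;
        my $reply = "Please go on.";
        for my $rule (@rules) {
            my ( $pattern, $respond ) = @$rule;
            if ( $input =~ $pattern ) {
                $reply = $respond->();
                last;
            }
        }
        print "$reply\n> ";
    }

Type "I am tired" and it answers "Why do you say you are tired?".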

  • Document retrieval and information extraction from written text. For example, a computer system could scan newspaper articles, looking for information about events of a particular type, and enter the information into a database (see the extraction sketch after the examples below).

Examples:

  • web searches, e.g., Google.
  • course information and enrollment, e.g., KU Linguistics.
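The newspaper-scanning idea above can be sketched with a single regular expression. The articles, the event type (acquisitions), and the pattern are all invented for illustration; the "database" is just an array of Perl records.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Toy information extraction: scan news-style sentences for one
    # event type and collect the fields into a small "database".
    my @articles = (
        'Acme Corp acquired Widget Inc on 2001-05-04.',
        'The weather in Lawrence was sunny.',
        'Globex acquired Initech on 1999-11-23.',
    );

    my @database;
    for my $text (@articles) {
        if ( $text =~ /(\w[\w ]*?) acquired (\w[\w ]*?) on (\d{4}-\d{2}-\d{2})/ ) {
            push @database, { buyer => $1, target => $2, date => $3 };
        }
    }

    for my $event (@database) {
        print "buyer=$event->{buyer} target=$event->{target} date=$event->{date}\n";
    }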

  • Machine translation. Computers offer the promise of quick translations between languages.

Examples:

  • machine translation, e.g., SDL International

The rapid growth of the Internet/WWW and the emergence of the information society pose exciting new challenges to computational linguistics. Although the new media combine text, graphics, sound, and movies, the whole wealth of multimedia information can only be structured, indexed, and navigated through language. For browsing, navigating, filtering, and processing the information on the web, we need language technology. The increasingly multilingual nature of the web constitutes an additional challenge for language technology. The multilingual web can only be mastered with the help of multilingual tools for indexing and navigating.

Computational linguists adopting the psychological perspective hypothesize that at some abstract level, the brain is a kind of biological computer, and that an adequate answer to how people understand and generate language must be in terms formal and precise enough to be modeled by a computer.

Machines, Lost In Translation: The Dream Of Universal Understanding : All Tech Considered : NPR

It was early 1954 when computer scientists, for the first time, publicly revealed a machine that could translate between human languages. It became known as the Georgetown-IBM experiment: an “electronic brain” that translated sentences from Russian into English.

The scientists believed a universal translator, once developed, would not only give Americans a security edge over the Soviets but also promote world peace by eliminating language barriers.

They also believed this kind of progress was just around the corner: Leon Dostert, the Georgetown language scholar who initiated the collaboration with IBM's Thomas Watson, suggested that people might be able to use electronic translators to bridge several languages within five years, or even less.

The process proved far slower. (So slow, in fact, that about a decade later, funders of the research launched an investigation into its lack of progress.) And more than 60 years later, a true real-time universal translator — a la C-3PO from Star Wars or the Babel Fish from The Hitchhiker’s Guide to the Galaxy — is still the stuff of science fiction.

Stimulating Machines’ Brains

After decades of jumping linguistic and technological hurdles, the technical approach scientists use today is known as the neural network method, in which machines are trained to emulate the way people think — in essence, creating an artificial version of the neural networks of our brains.

Neurons are nerve cells that are activated by all aspects of a person’s environment, including words. The longer someone exists in an environment, the more elaborate that person’s neural network becomes.

With the neural network method, the machine converts every word into its simplest representation: a vector, the equivalent of a neuron in a biological network, that contains information not only about each word but about a whole sentence or text. In machine learning, a neural network produces more accurate results the more translations it attempts, with limited assistance from a human.
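In the spirit of the Perl exercises earlier in these notes, here is a toy version of words-as-vectors; the three numbers per word are made up, whereas real systems learn hundreds of dimensions from data. Cosine similarity then measures how close two words are.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use List::Util qw(sum);

    # Hand-made 3-dimensional "word vectors"; the values are invented.
    my %vec = (
        king  => [ 0.9, 0.8, 0.1 ],
        queen => [ 0.9, 0.7, 0.2 ],
        apple => [ 0.1, 0.2, 0.9 ],
    );

    sub cosine {
        my ( $u, $v ) = @_;
        my $dot = sum map { $u->[$_] * $v->[$_] } 0 .. $#$u;
        my $nu  = sqrt sum map { $_**2 } @$u;
        my $nv  = sqrt sum map { $_**2 } @$v;
        return $dot / ( $nu * $nv );
    }

    printf "king~queen: %.3f\n", cosine( $vec{king}, $vec{queen} );   # high (similar words)
    printf "king~apple: %.3f\n", cosine( $vec{king}, $vec{apple} );   # low  (dissimilar)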

Though machines can now "learn" similarly to the way humans learn, they still face some limits, says Yoshua Bengio, a computer science professor at the University of Montreal who studies neural networks. One of the limits is the sheer amount of data required: children need far less to learn a language than machines do.

Paul Ford: What is Code? | Bloomberg

We are here because the editor of this magazine asked me, “Can you tell me what code is?”

“No,” I said. “First of all, I’m not good at the math. I’m a programmer, yes, but I’m an East Coast programmer, not one of these serious platform people from the Bay Area.”

I began to program nearly 20 years ago, learning via oraperl, a special version of the Perl language modified to work with the Oracle database. A month into the work, I damaged the accounts of 30,000 fantasy basketball players. They sent some angry e-mails. After that, I decided to get better.

Which is to say I’m not a natural. I love computers, but they never made any sense to me. And yet, after two decades of jamming information into my code-resistant brain, I’ve amassed enough knowledge that the computer has revealed itself. Its magic has been stripped away. I can talk to someone who used to work at Amazon.com or Microsoft about his or her work without feeling a burning shame. I’d happily talk to people from Google and Apple, too, but they so rarely reenter the general population.

The World Wide Web is what I know best (I’ve coded for money in the programming languages Java, JavaScript, Python, Perl, PHP, Clojure, and XSLT), but the Web is only one small part of the larger world of software development. There are 11 million professional software developers on earth, according to the research firm IDC. (An additional 7 million are hobbyists.) That’s roughly the population of the greater Los Angeles metro area. Imagine all of L.A. programming. East Hollywood would be for Mac programmers, West L.A. for mobile, Beverly Hills for finance programmers, and all of Orange County for Windows.

There are lots of other neighborhoods, too: There are people who write code for embedded computers smaller than your thumb. There are people who write the code that runs your TV. There are programmers for everything. They have different cultures, different tribal folklores, that they use to organize their working life. If you told me a systems administrator was taking a juggling class, that would make sense, and I’d expect a product manager to take a trapeze class. I’ve met information architects who list and rank their friendships in spreadsheets. Security research specialists love to party.

What I’m saying is, I’m one of 18 million. So that’s what I’m writing: my view of software development, as an individual among millions. Code has been my life, and it has been your life, too. It is time to understand how it all works.

Every month it becomes easier to do things that have never been done before, to create new kinds of chaos and find new kinds of order. Even though my math skills will never catch up, I love the work. Every month, code changes the world in some interesting, wonderful, or disturbing way.

How to Understand Your Computer – The New Yorker

via How to Understand Your Computer – The New Yorker.

Early on in the book, Chandra makes a very interesting claim: many programmers and I.T. professionals have no real idea how computers work, either. Because they don’t need to, essentially; they need to make them perform specific tasks, but they don’t need to understand how they perform them.

He quotes a plaintive post by a programmer named Rob P. on the Q. & A. site stackexchange.com. Rob begins by saying that he is almost embarrassed to reveal what he’s about to reveal, given that he has a degree in computer science and has worked full time as a developer for five years. “But I Don’t Know How Computers Work!” he says. “I know there are components … the power supply, the motherboard, ram, CPU, etc … and I get the ‘general idea’ of what they do. But I really don’t understand how you go from a line of code like Console.Readline() in .NET (or Java or C++) and have it actually do stuff.”

Chandra goes on to provide a fairly thorough explanation of how computers work—of the things that are physically caused to happen by these coded commands, the “mediating dialect between human and machine.” He devotes an entire chapter early in the book to the language of logic that is the native tongue of computer processors; this is the torrent of binary numbers, of ones and zeros, that constitutes the universal grammar of machines. Chandra even goes so far as to include diagrams, as well as photographs of functioning logic gates constructed from Legos.
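Chandra's Lego gates can be imitated in a few lines of Perl. The sketch below builds NOT, AND, and OR out of a single NAND function, the standard demonstration that one gate type suffices to assemble the others.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Everything from NAND, the classic universal gate.
    sub nand { my ( $x, $y ) = @_; return ( $x && $y ) ? 0 : 1 }
    sub not_ { my ($x) = @_;       return nand( $x, $x )       }
    sub and_ { my ( $x, $y ) = @_; return not_( nand( $x, $y ) ) }
    sub or_  { my ( $x, $y ) = @_; return nand( not_($x), not_($y) ) }

    # Print the truth table for the derived gates.
    for my $x ( 0, 1 ) {
        for my $y ( 0, 1 ) {
            printf "x=%d y=%d  AND=%d OR=%d NOT(x)=%d\n",
                $x, $y, and_( $x, $y ), or_( $x, $y ), not_($x);
        }
    }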

Stanford Machine Learning

via Machine Learning.

In this course, you'll learn about some of the most widely used and successful machine learning techniques. You'll have the opportunity to implement these algorithms yourself and gain practice with them. You will also learn some of the practical, hands-on tricks and techniques (rarely discussed in textbooks) that help get learning algorithms to work well. This is an "applied" machine learning class, and we emphasize the intuitions and know-how needed to get learning algorithms to work in practice, rather than the mathematical derivations.

Familiarity with programming, basic linear algebra (matrices, vectors, matrix-vector multiplication), and basic probability (random variables, basic properties of probability) is assumed. Basic calculus (derivatives and partial derivatives) would be helpful and would give you additional intuitions about the algorithms, but isn’t required to fully complete this course.
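For readers gauging themselves against that prerequisite list, here is matrix-vector multiplication written out in Perl, the language the earlier exercises in these notes use; the numbers are arbitrary.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Matrix-vector multiplication: each entry of the result is the
    # dot product of one matrix row with the vector.
    sub mat_vec {
        my ( $M, $v ) = @_;
        my @result;
        for my $row (@$M) {
            my $dot = 0;
            $dot += $row->[$_] * $v->[$_] for 0 .. $#$v;
            push @result, $dot;
        }
        return \@result;
    }

    my $M = [ [ 1, 2 ], [ 3, 4 ], [ 5, 6 ] ];   # a 3x2 matrix
    my $v = [ 10, 1 ];                          # a length-2 vector
    print join( ', ', @{ mat_vec( $M, $v ) } ), "\n";   # 12, 34, 56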