Machines, Lost In Translation: The Dream Of Universal Understanding : All Tech Considered : NPR

It was early 1954 when computer scientists, for the first time, publicly revealed a machine that could translate between human languages. It became known as the Georgetown-IBM experiment: an “electronic brain” that translated sentences from Russian into English.

The scientists believed a universal translator, once developed, would not only give Americans a security edge over the Soviets but also promote world peace by eliminating language barriers.

They also believed this kind of progress was just around the corner: Leon Dostert, the Georgetown language scholar who initiated the collaboration with IBM founder Thomas Watson, suggested that people might be able to use electronic translators to bridge several languages within five years, or even less.

The process proved far slower. (So slow, in fact, that about a decade later, funders of the research launched an investigation into its lack of progress.) And more than 60 years later, a true real-time universal translator — a la C-3PO from Star Wars or the Babel Fish from The Hitchhiker’s Guide to the Galaxy — is still the stuff of science fiction.

Stimulating Machines’ Brains

After decades of jumping linguistic and technological hurdles, scientists have arrived at the approach used today, known as the neural network method, in which machines are trained to emulate the way people think — in essence, creating an artificial version of the neural networks of our brains.

Neurons are nerve cells that are activated by all aspects of a person’s environment, including words. The longer someone exists in an environment, the more elaborate that person’s neural network becomes.

With the neural network method, the machine converts every word into its simplest representation — a vector, the rough equivalent of a neuron in a biological network, which encodes information not only about the word itself but about the surrounding sentence or text. In machine learning, a neural network produces more accurate results the more translations it attempts, with only limited assistance from a human.
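To make the word-as-vector idea concrete, here is a minimal sketch in Python. The words and the tiny three-dimensional vectors are hypothetical toy values invented for illustration; real systems learn embeddings with hundreds of dimensions from large bodies of text. The sketch shows only the core trick: geometric closeness between vectors standing in for relatedness between words.

import math

# Hypothetical toy embeddings: each word maps to a 3-dimensional vector.
# Real machine-translation systems learn these values from data.
embeddings = {
    "king":  (0.9, 0.8, 0.1),
    "queen": (0.9, 0.2, 0.1),
    "apple": (0.1, 0.4, 0.9),
}

def cosine_similarity(u, v):
    """Return how closely two word vectors point in the same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Related words sit closer together in the vector space than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # higher
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # lower

In a full translation system, vectors like these are only the starting point; further network layers combine them to represent whole sentences before producing text in the target language.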

Though machines can now “learn” similarly to the way humans learn, they still face some limits, says Yoshua Bengio, a computer science professor at the University of Montreal who studies neural networks. One of the limits is the sheer amount of data required — children need far less to learn a language than machines do.

Brain–computer interface – Wikipedia, the free encyclopedia

A brain–computer interface (BCI), sometimes called a mind-machine interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI), is a direct communication pathway between the brain and an external device. BCIs are often directed at assisting, augmenting, or repairing human cognitive or sensory-motor functions.

Research on BCIs began in the 1970s at the University of California, Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA.[1][2] The papers published after this research also mark the first appearance of the expression brain–computer interface in scientific literature.

IBM Design Language | Animation: Fundamentals

Learn how IBM products move with the accuracy and precision of a machine.

For over one hundred years, IBM has crafted business machines for professionals around the world. From the powerful strike of a printing arm to the smooth slide of a typewriter carriage, each movement was fit for purpose and designed with intent. Our software demands the same attention to detail for making products feel lively and realistic.

We take inspiration from our heritage to define our animation style. Machines have solid planes, rigid surfaces and sharp, exact movements that are acted upon by physical forces. They don’t go from full-stop to top speed instantly or come to an abrupt stop, but instead take time to accelerate and decelerate. They have an inherent mass and move at different speeds in order to accomplish the tasks they were designed for.
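As a rough illustration of that accelerate-then-decelerate quality, the short Python sketch below samples a standard “smoothstep” easing curve. The choice of curve is an assumption made for illustration; this excerpt does not say which easing functions IBM’s guidelines actually prescribe.

def smoothstep(t):
    """Progress from 0 to 1 that starts gently and eases to a stop."""
    return t * t * (3 - 2 * t)

# Sample the curve: position changes little near t=0 and t=1 (gentle
# start and stop) and most in the middle (top speed), like a machine
# taking time to accelerate and decelerate.
for i in range(6):
    t = i / 5
    print(f"t={t:.1f} -> position={smoothstep(t):.3f}")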

Tom Vanderbilt Explains Why We Could Predict Self-Driving Cars, But Not Women in the Workplace

In his book Predicting the Future, Nicholas Rescher writes that “we incline to view the future through a telescope, as it were, thereby magnifying and bringing nearer what we can manage to see.” So too do we view the past through the other end of the telescope, making things look farther away than they actually were, or losing sight of some things altogether.

These observations apply neatly to technology.

But when it comes to culture, we tend to believe not that the future will be very different from the present day, but that it will be roughly the same. Try to imagine yourself at some future date. Where do you imagine you will be living? What will you be wearing? What music will you love?

Chances are, that person resembles you now. As the psychologist George Loewenstein and colleagues have argued, in a phenomenon they termed “projection bias,”1 people “tend to exaggerate the degree to which their future tastes will resemble their current tastes.”

This over- and under-predicting is embedded into how we conceive of the future. “Futurology is almost always wrong,” the historian Judith Flanders suggested to me, “because it rarely takes into account behavioral changes.” And, she says, we look at the wrong things: “Transport to work, rather than the shape of work; technology itself, rather than how our behavior is changed by the very changes that technology brings.” It turns out that predicting who we will be is harder than predicting what we will be able to do.

As the theorist Nassim Nicholas Taleb writes in Antifragile, “we notice what varies and changes more than what plays a larger role but doesn’t change. We rely more on water than on cell phones, but because water does not change and cell phones do, we are prone to thinking that cell phones play a larger role than they do.”

Tom Vanderbilt Explains Why We Could Predict Self-Driving Cars, But Not Women in the Workplace

The historian Lawrence Samuel has called social progress the “Achilles heel” of futurism.8 He argues that people forget the injunction of the historian and philosopher Arnold Toynbee: Ideas, not technology, have driven the biggest historical changes. When technology changes people, it is often not in the ways one might expect: Mobile technology, for example, did not augur the “death of distance,” but actually strengthened the power of urbanism. The washing machine freed women from labor, and, as the social psychologists Nina Hansen and Tom Postmes note, could have sparked a revolution in gender roles and relations. But, “instead of fueling feminism,” they write, “technology adoption (at least in the first instance) enabled the emergence of the new role of housewife: middle-class women did not take advantage of the freed-up time … to rebel against structures or even to capitalize on their independence.” Instead, the authors argue, the women simply assumed the jobs once held by their servants.

Take away the object from the historical view, and you lose sight of the historical behavior. Projecting the future often presents a similar problem: The object is foregrounded, while the behavioral impact is occluded. The “Jetsons idea” of jetpacking and meals in a pill missed what actually has changed: The notion of a stable career, or the social ritual of lunch.

One futurist noted that a 1960s film of the “office of the future” made on-par technological predictions (fax machines and the like), but had a glaring omission: The office had no women.9 Self-driving car images of the 1950s showed families playing board games as their tail-finned cars whisked down the highways. Now, more than half a century later, we suspect the automated car will simply allow for the expansion of productive time, and hence working hours. The self-driving car has, in a sense, always been a given. But modern culture hasn’t.

Voight-Kampff machine – Off-world: The Blade Runner Wiki

Originating in Philip K. Dick’s novel Do Androids Dream of Electric Sheep?, the Voight-Kampff machine or device (spelled Voigt-Kampff in the book) also appeared in the book’s screen adaptation, the 1982 science fiction film Blade Runner.

The Voight-Kampff is a polygraph-like machine used by the LAPD’s Blade Runners to help determine whether an individual is a replicant. It measures bodily functions such as respiration, heart rate, and eye movement in response to emotionally provocative questions.

The Voight-Kampff machine is perhaps analogous to (and may have been partly inspired by) Alan Turing’s work, which propounded an artificial intelligence test — to see if a computer could convince a human (by answering set questions, etc.) that it was another human.

When Nerds Collide — Medium

Don’t get me wrong, I’m thrilled to bits that every day the power to translate pure thought into actions that ripple across the world, merely by virtue of being phrased correctly, draws nearer and nearer to the hands of every person alive. I’m even more delighted that every day more and more people, some very similar to me and others very different, join the chorus of Those Who Speak With Machines.

But I fear for my people, the “weird nerds,” and I think I have good reason to. Brain-computer interfaces are coming, and what will happen to the weird nerds when we can no longer disguise our weirdness with silence?

via When Nerds Collide — Medium.