We Don’t Need More Designers Who Can Code

via We Don’t Need More Designers Who Can Code

What we should be saying is that we need more designers who know about code.

The reason designers should know about code is the same reason developers should know about design. Not to become designers, but to empathize with them. To be able to speak their language, and to understand design considerations and thought processes. To know just enough to be dangerous, as they say.

This is the sort of thing that breaks down silos, opens up conversations, and leads to great work. The key is that it does all this without impeding anyone’s ability to become a true expert in their own area of focus.

When someone says they want “designers who can code”, what I hear them saying is that they want a Swiss Army knife. The screwdriver, scissors, knife, toothpick and saw. The problem is that a Swiss Army knife doesn’t do anything particularly well. You aren’t going to see a carpenter driving screws with that little nub of a screwdriver, or a seamstress using those tiny scissors to cut fabric. The Swiss Army knife has tools that work on the most basic level, but they would never be considered replacements for the real thing. Worse still, because it tries to do so much, it’s not even that great at being a knife.

Professionals need specialized tools. Likewise, professional teams need specialized team members.

Technology and society – Wikipedia, the free encyclopedia

Technology and society (or technology and culture) refers to the cyclical co-dependence, co-influence, and co-production of technology and society upon each other (technology upon culture, and vice versa). This synergistic relationship has existed since the dawn of humankind, beginning with the invention of simple tools and continuing into modern technologies such as the printing press and computers. The academic discipline studying the mutual impacts of science, technology, and society is called science and technology studies.

Technology has become a huge part of everyday life. The more a society knows about a technology’s development, the better able it is to take advantage of it. Once an innovation has been presented and promoted past a certain point, it becomes part of that society.[1] Digital technology has entered nearly every process and activity of the social system; indeed, it has constructed another worldwide communication system alongside the original one.[2]

The creation of computers brought an entirely better way to transmit and store data. Digital technology became commonly used for downloading music and for watching movies at home, whether on DVD or purchased online. Digital music recordings are not quite the same as traditional recording media, because digital copies are reproducible, portable, and free.[3]

Source: Technology and society – Wikipedia, the free encyclopedia

Game design – Wikipedia, the free encyclopedia

Game design is the art of applying design and aesthetics to create a game to facilitate interaction between players for playful, healthful, educational, or simulation purposes. Game design can be applied both to games and, increasingly, to other interactions, particularly virtual ones (see gamification).

Game design creates goals, rules, and challenges to define a sport, tabletop game, casino game, video game, role-playing game, or simulation that produces desirable interactions among its participants and, possibly, spectators.

Academically, game design is part of game studies, while game theory studies strategic decision making (primarily in non-game situations). Games have historically inspired seminal research in the fields of probability, artificial intelligence, economics, and optimization theory. Applying game design to itself is a current research topic in metadesign.

via Game design – Wikipedia, the free encyclopedia

Adactio: Journal—Forgetting again

Adactio: Journal—Forgetting again.

I didn’t include the most pernicious and widespread lie of all:

The internet never forgets.

This truism is so pervasive that it can be presented as a fait accompli, without any data to back it up. If you were to seek out the data to back up the claim, you would find that the opposite is true—the internet is in a constant state of forgetting.

Laing writes:

Faced with the knowledge that nothing we say, no matter how trivial or silly, will ever be completely erased, we find it hard to take the risks that togetherness entails.

Really? Suppose I said my trivial and silly thing on FriendFeed. Everything that was ever posted to FriendFeed disappeared three days ago:

You will be able to view your posts, messages, and photos until April 9th. On April 9th, we’ll be shutting down FriendFeed and it will no longer be available.

What if I shared on Posterous? Or Vox (back when that domain name was a social network hosting 6 million URLs)? What about Pownce? Geocities?

These aren’t the exceptions—this is routine. And yet somehow, despite all the evidence to the contrary, we still keep a completely straight face and say “Be careful what you post online; it’ll be there forever!”

The problem here is a mismatch of expectations. We expect everything that we post online, no matter how trivial or silly, to remain forever. When instead it is callously destroyed, our expectation—which was fed by the “knowledge” that the internet never forgets—is turned upside down. That’s where the anger comes from; the mismatch between expected behaviour and the reality of this digital dark age.

Being frightened of an internet that never forgets is like being frightened of zombies or vampires. These things do indeed sound frightening, and there’s something within us that readily responds to them, but they bear no resemblance to reality.

If you want to imagine a truly frightening scenario, imagine an entire world in which people entrust their thoughts, their work, and pictures of their family to online services in the mistaken belief that the internet never forgets. Imagine the devastation when all of those trivial, silly, precious moments are wiped out. For some reason we have a hard time imagining that dystopia even though it has already played out time and time again.

I am far more frightened by an internet that never remembers than I am by an internet that never forgets.

The promise of the web — Medium

The promise of the web — Medium.

I’m going to steal the phrase and say that if the web didn’t exist, it would be necessary to invent it.

To illustrate what I mean, let’s consider a www-less universe where there is a duopoly of personal communicating devices; for the sake of this argument, let’s call them Apple and Android. Both environments have different programming languages, different layout engines, and different interfaces for accessing platform capabilities.

In this alternative universe, companies start to realize that building twice is costly and inefficient. Beyond the labor costs, the communication cost of implementing features on two platforms hurts their ability to iterate and innovate.

So they start to develop tooling that abstracts above the differences between the platforms. They create a common scripting language and transcompiler, a declarative language for UIs, and a common API that delegates to the underlying platform. They write their application once, and it can run on both Apple and Android.
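As a rough sketch of what that shared layer might look like, here is a hypothetical “common API that delegates to the underlying platform” in TypeScript. The adapter classes, method names, and behaviour are invented purely to illustrate the abstraction; they are not taken from any real framework:

```typescript
// A shared capability surface: the hypothetical "common API".
interface PlatformCapabilities {
  showNotification(title: string, body: string): void;
  readFile(path: string): Promise<string>;
}

// Hypothetical adapter for the "Apple" platform in this thought experiment.
class AppleAdapter implements PlatformCapabilities {
  showNotification(title: string, body: string): void {
    // In reality this would delegate to the Apple-side notification API.
    console.log(`[apple] notify: ${title}: ${body}`);
  }
  readFile(path: string): Promise<string> {
    // In reality this would delegate to the Apple-side file API.
    return Promise.resolve(`contents of ${path} (apple)`);
  }
}

// Hypothetical adapter for the "Android" platform.
class AndroidAdapter implements PlatformCapabilities {
  showNotification(title: string, body: string): void {
    console.log(`[android] notify: ${title}: ${body}`);
  }
  readFile(path: string): Promise<string> {
    return Promise.resolve(`contents of ${path} (android)`);
  }
}

// Application code is written once, against the common interface only.
async function greetUser(platform: PlatformCapabilities): Promise<void> {
  const profile = await platform.readFile("/user/profile.txt");
  platform.showNotification("Welcome back", profile);
}

// The same application runs on either platform.
greetUser(new AppleAdapter());
greetUser(new AndroidAdapter());
```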

Another problem in this world is interoperability between applications. How does a social network application — let’s call it Facebook — reference a spreadsheet created in another program — say, Excel? Both Apple and Android recognize the issue and independently create an addressing system. The systems are similar; applications have a universal identifier and a namespace for referencing resources within the application.
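To make the addressing idea concrete, here is a small hypothetical sketch in TypeScript. The app:// scheme, the “excel” identifier, and the resource path are all invented for illustration; the point is just the shape: a universal application identifier plus a namespace within it:

```typescript
// Hypothetical address format: app://<application-id>/<resource-path>
// (the scheme and the identifiers are invented for illustration)
interface AppAddress {
  applicationId: string; // universal identifier for the application
  resourcePath: string;  // namespace for a resource within that application
}

function parseAppAddress(address: string): AppAddress {
  const match = /^app:\/\/([^/]+)\/(.*)$/.exec(address);
  if (!match) {
    throw new Error(`Not a valid app address: ${address}`);
  }
  return { applicationId: match[1], resourcePath: match[2] };
}

// A link inside the hypothetical Facebook app pointing at an Excel resource.
const link = parseAppAddress("app://excel/sheets/q3-budget");
console.log(link.applicationId); // "excel"
console.log(link.resourcePath);  // "sheets/q3-budget"
```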

Now that applications can link to each other, things are better. But the experience is still poor when you don’t have the right application installed. Eventually the platform providers realize this need and add the ability to install an application on demand. They also add a “light install” that automatically removes the application if you don’t continue to use it.

Sound familiar?

Philip K. Dick Theorizes The Matrix in 1977, Declares That We Live in “A Computer-Programmed Reality” | Open Culture

In the interview, Dick roams over so many of his personal theories about what these “unexpected things” signify that it’s difficult to keep track. However, at that same conference, he delivered a talk titled “If You Find This World Bad, You Should See Some of the Others” (in edited form above), that settles on one particular theory—that the universe is a highly-advanced computer simulation. (The talk has circulated on the internet as “Did Philip K. Dick disclose the real Matrix in 1977?”).

Finally, Dick makes his Matrix point, and makes it very clearly: “we are living in a computer-programmed reality, and the only clue we have to it is when some variable is changed, and some alteration in our reality occurs.” These alterations feel just like déjà vu, says Dick, a sensation that proves that “a variable has been changed” (by whom—note the passive voice—he does not say) and “an alternative world branched off.”

Dick, who had the capacity for a very oblique kind of humor, assures his audience several times that he is deadly serious. (The looks on many of their faces betray incredulity at the very least.) And yet, maybe Dick’s crazy hypothesis has been validated after all, and not simply by the success of the PKD-esque The Matrix and the ubiquity of Matrix analogies. For several years now, theoretical physicists and philosophers have entertained the theory that we do in fact live in a computer-generated simulation and, what’s more, that “we may even be able to detect it.”

via Philip K. Dick Theorizes The Matrix in 1977, Declares That We Live in “A Computer-Programmed Reality” | Open Culture.

A Brief History of Hackerdom

A Brief History of Hackerdom.

Prologue: The Real Programmers

In the beginning, there were Real Programmers.

That’s not what they called themselves. They didn’t call themselves `hackers’, either, or anything in particular; the sobriquet `Real Programmer’ wasn’t coined until after 1980, retrospectively by one of their own. But from 1945 onward, the technology of computing attracted many of the world’s brightest and most creative minds. From Eckert and Mauchly’s first ENIAC computer onward there was a more or less continuous and self-conscious technical culture of enthusiast programmers, people who built and played with software for fun.

The Real Programmers typically came out of engineering or physics backgrounds. They were often amateur-radio hobbyists. They wore white socks and polyester shirts and ties and thick glasses and coded in machine language and assembler and FORTRAN and half a dozen ancient languages now forgotten.

From the end of World War II to the early 1970s, in the great days of batch processing and the “big iron” mainframes, the Real Programmers were the dominant technical culture in computing. A few pieces of revered hacker folklore date from this era, including various lists of Murphy’s Laws and the mock-German “Blinkenlights” poster that still graces many computer rooms.

Some people who grew up in the `Real Programmer’ culture remained active into the 1990s and even past the turn of the 21st century. Seymour Cray, designer of the Cray line of supercomputers, was among the greatest. He is said once to have toggled an entire operating system of his own design into a computer of his own design through its front-panel switches. In octal. Without an error. And it worked. Real Programmer macho supremo.

The `Real Programmer’ culture, though, was heavily associated with batch (and especially batch scientific) computing. It was eventually eclipsed by the rise of interactive computing, the universities, and the networks. These gave birth to another engineering tradition that, eventually, would evolve into today’s open-source hacker culture.