The current state of machine intelligence 2.0 – O’Reilly Media

A year ago today, I published my original attempt at mapping the machine intelligence ecosystem. So much has happened since.

I spent the last 12 months geeking out on every company and nibble of information I could find, chatting with hundreds of academics, entrepreneurs, and investors about machine intelligence. This year, given the explosion of activity, my focus is on highlighting areas of innovation rather than on trying to be comprehensive.

Despite the noisy hype, which sometimes distracts, machine intelligence is already being used in several valuable ways. Machine intelligence already helps us get the important business information we need more quickly, monitors critical systems, feeds our population more efficiently, reduces the cost of health care, detects disease earlier, and so on.

The two biggest changes I’ve noted since I did this analysis last year are (1) the emergence of autonomous systems in both the physical and virtual world, and (2) startups shifting away from building broad technology platforms to focus on solving specific business problems.

The Shape of Things — Welcome to Thington — Medium

In particular I want to talk about the relationship we’re starting to build between physical network-connected objects and some kind of software or service layer that sits alongside them, normally interacted with via a mobile phone.

I think we all forget how quickly things can change, but I think it’s fair to say that the era of the modern smartphone starts with the iPhone, and it’s really important to remember that it launched only a little under nine years ago. This, by the way, is the very first advert for the iPhone, which essentially replaced single-use telephones with general-purpose computers connected to the phone network.

Three years after the iPhone launched — so about six years ago now — in addition to all of the desktop and laptop computers we were buying, we were also buying 150 million smartphones a year.

Five years later — 2016 — and it’s projected that 1.6 billion smartphones will be sold. In one single year, one smartphone will be bought for every five people on the planet.

But what happens next? A world of connected objects.

 

Lytro’s Cinema Light Field Camera Could Make Green Screens Obsolete | Variety

Lytro first made its name when it introduced consumer-grade photo cameras in 2012. Lytro’s photo cameras made use of light field technology to capture not just the intensity of light for any given photo, but also the direction of individual light rays. The result was data-heavy photo files that could be manipulated after the fact, allowing photographers to change the focus and other key aspects after they had taken the original photo.

Lytro’s Cinema goes far beyond what existing cameras are capable of. The camera captures 755-megapixel RAW video at frame rates of up to 300 frames per second and with up to 16 stops of dynamic range. Add the ability to capture 3D depth information, and you have a ton of raw data that can then be used to change the focus or the depth of field after the fact, or even transition from one setting to another within a scene. “In light field technology, you can recompute all of this on the fly,” said Rosenthal.
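This is not Lytro’s actual pipeline, but the general idea behind after-the-fact refocusing can be sketched with the classic shift-and-add technique: each sub-aperture view of the light field is shifted in proportion to its offset on the lens aperture and then averaged. The array layout and the alpha refocus parameter below are assumptions made purely for illustration.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocus of a 4D light field by shift-and-add.

    light_field: array of shape (U, V, H, W), one sub-aperture image
                 per (u, v) viewpoint on the lens aperture.
    alpha:       refocus parameter; changing it moves the virtual focal plane.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the aperture
            # centre, then accumulate. np.roll wraps at the border, which a
            # real pipeline would instead handle by padding or cropping.
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```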

What’s more, the ability to capture depth information for each and every pixel means that live-action scenes captured with such a camera can be easily combined with visual effects. Green screens, for example, could be a thing of the past: filmmakers can instead just shoot scenes in natural lighting, and then separate the foreground from the background.
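As a toy illustration of how a per-pixel depth map can stand in for a green screen, the generic depth-keying sketch below thresholds the depth channel to build a foreground mask; the array names and the threshold are hypothetical, and this is not Lytro’s production workflow.

```python
import numpy as np

def depth_key(rgb, depth, max_depth):
    """Split a frame into foreground and background using a depth map.

    rgb:       (H, W, 3) colour frame.
    depth:     (H, W) per-pixel depth, in the same units as max_depth.
    max_depth: pixels nearer than this are treated as foreground.
    """
    mask = depth < max_depth                 # True where the pixel is "near"
    foreground = rgb * mask[..., None]       # keep near pixels, zero the rest
    background = rgb * (~mask)[..., None]    # everything beyond the threshold
    return foreground, background, mask
```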

“Maleficent” director Robert Stromberg and award-winning visual effects specialist David Stump used that very trick for “Life,” a short film that Lytro is going to show at NAB later this week to officially introduce its Cinema camera. “Life” was made by Stromberg’s Virtual Reality Company, which at one point shot the film’s actors in a studio parking lot, only to replace the cars with a stunning blue sky in post-production. “We are doing something that simply is not possible with today’s tech,” said Rosenthal.

Lytro is introducing Cinema as an end-to-end solution that includes a server and cloud storage to capture and process all of that raw data, as well as light field plug-ins for existing editing tools. The company aims to make production packages starting at $125,000 available later this quarter, and will also offer studios the option to combine Cinema with its other key project: last year, Lytro introduced a light field virtual reality camera called Immerge. Rosenthal said that Immerge and Cinema use a lot of shared infrastructure, making it easier for studios to eventually capture assets for both, and use the same sets to produce feature films and virtual reality experiences.

 

The Very Simple Idea Of A 3D Bitmap (Tokyo Art Beat)

Hideki Nakazawa’s “Art Patent Sustaining Project” @ Kandada / Project Collective Command-N

This show, organized within a series of exhibitions curated by Command-N (an activity-based art collective directed by the artist Masato Nakamura), highlights the newest activities of the artist Hideki Nakazawa, focusing on the actual patents he has obtained in recent years.

His main patent deals with the very simple idea of a “3D bitmap”. If you know what a “pixel” (= abbreviation of “picture” + “element”) is, you just need to think of a pixel in three dimensions. This 3D pixel is called a “voxel” (= combination of “volumetric” and “pixel”), and Nakazawa owns the patent for deploying any 3D bitmap art form. He claims that the purest artistic form of expression does not lie in the use of a medium, but rather in the act of creating the medium itself, just like Leonardo da Vinci, who spent a considerable amount of time simply preparing his ideal pigment. With this hypothesis, Nakazawa claims that the artistic quality of his work resides only in the following table.
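To make the “pixel in three dimensions” idea concrete, here is a minimal sketch of a binary voxel grid, the volumetric analogue of a 1-bit bitmap; the grid size and the filled shape are arbitrary choices for illustration.

```python
import numpy as np

# A "3D bitmap" in the simplest sense: a voxel grid in which each cell is
# either filled or empty, just as each pixel in a 1-bit bitmap is on or off.
voxels = np.zeros((16, 16, 16), dtype=bool)

# Fill a small solid cube in one corner of the grid.
voxels[2:6, 2:6, 2:6] = True

# Count the filled voxels, the way one would count set pixels in a bitmap.
print(int(voxels.sum()))  # 64
```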

In this exhibition, you can take a look at the actual patent certificates that the artist obtained both in Japan and the US, along with a showcase of the 3D Bitmap editing software he directed and published in 1996.

The halting genius of science-fiction writer Ted Chiang – The California Sunday Magazine

“Sometimes, people who read my work tell me, ‘I like it, but it’s not really science fiction, is it?’” he says. “And I always feel like, no, actually, my work is exactly science fiction.”

After Star Wars forever made the genre synonymous with what Chiang calls “adventure stories dressed up with lasers,” people forgot that science fiction includes the word “science” for a reason: It is supposed to be largely about exploring the boundaries of knowledge, he says. “All the things I do in my work — engaging in thought experiments, investigating philosophical questions — those are all things that science fiction does.”