Learning machine learning — Benedict Evans
As has happened with many technologies before, AI is bursting out of universities and research labs and turning into product, often led by those researchers as they turn entrepreneur and create companies. Lots of things started working, the two most obvious illustrations being the progress on ImageNet and of course AlphaGo. And in parallel, many of these capabilities are being abstracted – they’re being turned into open source frameworks that people can pick up (almost) off the shelf. So, one could argue that AI is undergoing a take-off in practicality and scale that’s going to transform tech just as, in different ways, packets, mobile, or open source did.
This also means, though, that there’s a sort of tech Tourette’s around – people shout ‘AI!’ or ‘MACHINE LEARNING!’ where they once shouted ‘OPEN!’ or ‘PACKETS!’. This stuff is changing the world, yes, but we need context and understanding. ‘AI’, really, is lots of different things, at lots of different stages. Have you built HAL 9000 or have you written a thousand IF statements?
Back in 2000 and 2001 (and ever since) I spent a lot of my time reading PDFs about mobile – specifications and engineers’ conference presentations and technical papers – around all the layers of UMTS, WCDMA, J2ME, MEXE, WML, iAppli, cHTML, FeliCa, ISDB-T and many other things besides, some of which ended up mattering and some of which didn’t. (My long-dormant del.icio.us account has plenty of examples of both).
The same process will now happen with AI across much of the tech industry, and indeed across all the broader industries affected by it. AI brings a blizzard of highly specialist terms and ideas, layered upon each other, that previously only really mattered to people in the field (mostly in universities and research labs) and to people who took a personal interest. Now, suddenly, this starts affecting everyone in technology. So, everyone who hasn’t been following AI for the last decade has to catch up.