
“In the late 1980s, LeCun, then a researcher at AT&T Bell Labs, developed a powerful neural network that learned to recognise handwritten zip codes by training on thousands of examples. A parallel development soon unfolded at Harvard and Brown. In 1995, Zhu and a team of researchers there began developing probability-based methods that could learn to recognise patterns and textures (…) and even generate new examples of those patterns. These were not neural networks: members of the “Harvard-Brown school”, as Zhu called his team, cast vision as a problem of statistics and relied on methods such as “Bayesian inference” and “Markov random fields”. The two schools spoke different mathematical languages and had philosophical disagreements. But they shared an underlying logic: that data, rather than hand-coded instructions, could supply the foundation for machines to grasp the world and reproduce its patterns. That logic survives in today’s AI systems such as ChatGPT.”