Computers struggle with tasks we find simple. But try to describe explicitly the difference between the handwritten numerals 1 and 7, and you begin to appreciate the problem. Professor Martin Anthony explains what role mathematicians play in making computers less stupid.
Diagnosing tumours, playing video games, detecting credit card fraud, recognising faces, reading handwriting: they don't seem like similar tasks, but they are all cases where 'machine learning' is employed to enable computers to make intelligent decisions. And although the various tasks look very different, the mathematics behind them is remarkably similar, as Professor Martin Anthony explains in this short film.
When computers fail to do something we find easy, such as reading handwriting or recognising faces, it's tempting to think of them as stupid machines. But it's often the case that tasks we find relatively easy to perform evade explicit codification. How, for example, would you specify rules which correctly identified cats, and only cats (including three-legged cats), but excluded dogs?
Employing ideas from probability theory, statistics, linear algebra, geometry and discrete mathematics, machine learning aims to generate systems of instructions (algorithms) that allow computers to perform cognitive-style tasks. In abstract terms, machine learning involves detecting patterns in very large datasets, clustering together similar objects and distinguishing dissimilar ones. This could help with the detection of anomalies (as with the identification of malignant tumours or fraudulent credit card usage), or it could be used to recognise patterns: making sense of handwritten characters, for example.
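The clustering and anomaly-detection ideas above can be sketched in a few lines of code. The following is a minimal toy illustration, not anything from the film: points are grouped by their nearest cluster centre, and a point far from every centre is flagged as an anomaly. All data and the distance threshold are invented for the example.

```python
# Toy sketch of clustering and anomaly detection: assign each point to
# its nearest centroid, and flag points far from every centroid.
from math import dist  # Euclidean distance (Python 3.8+)

def centroid(points):
    """Component-wise mean of a list of 2D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(point, centroids, threshold):
    """Index of the nearest centroid, or None if the point is farther
    than `threshold` from all centroids (i.e. an anomaly)."""
    distances = [dist(point, c) for c in centroids]
    best = min(range(len(centroids)), key=lambda i: distances[i])
    return best if distances[best] <= threshold else None

# Two hand-made clusters standing in for 'similar objects'.
cluster_a = [(0.0, 0.0), (1.0, 0.5), (0.5, 1.0)]
cluster_b = [(9.0, 9.0), (10.0, 9.5), (9.5, 10.0)]
centroids = [centroid(cluster_a), centroid(cluster_b)]

print(classify((0.8, 0.7), centroids, threshold=3.0))   # → 0 (cluster A)
print(classify((9.2, 9.6), centroids, threshold=3.0))   # → 1 (cluster B)
print(classify((5.0, 50.0), centroids, threshold=3.0))  # → None (anomaly)
```

Real systems replace the hand-picked centroids and threshold with quantities learned from data, but the underlying geometry (distances between points in some space) is the same.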
But despite the extraordinary real-world effects this theoretical work makes possible, Professor Anthony, like many mathematicians, isn't directly concerned with the uses to which his work is eventually put. 'I think of myself as an applicable mathematician,' he says. 'If it wasn't interesting mathematically, I'd probably be doing something else.'