Artificial intelligence, it seems, is now everywhere. Text translation, speech recognition, book recommendations, even your spam filter is now “artificially intelligent.” But just what do scientists mean by “artificial intelligence,” and what is artificial about it?
Artificial intelligence is a term that was coined in the 1950s, and today’s research on the topic has many facets. But most of the applications we now see are calculations done with neural networks. These neural networks are designed to loosely mimic the function of the human brain, but they differ structurally from real brains in ten relevant aspects: form and function, size, connectivity, power consumption, architecture, activation potential, speed, learning technique, structure, and precision.
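To make the comparison concrete, here is a minimal sketch of the basic building block of such a network: an artificial “neuron” that takes a weighted sum of its inputs and passes it through a smooth activation function. The weights and inputs below are made up for illustration; this is a toy example, not any particular network from the video.

```python
import math

def sigmoid(x: float) -> float:
    """A smooth activation function, common in artificial networks.

    Note the contrast with biological neurons, which fire discrete
    spikes once a threshold potential is crossed -- one of the
    structural differences listed above.
    """
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, squashed by the activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Example: one neuron receiving three inputs (all values illustrative).
output = neuron([0.5, -1.0, 0.25], [0.8, 0.2, -0.5], bias=0.1)
print(round(output, 3))  # prints 0.544
```

A full network is just many such neurons wired in layers, with the weights adjusted during training, whereas a brain’s connections rewire themselves physically.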
In my video, I briefly explain how neural networks work, and then go through these structural differences.
Sabine Hossenfelder is a Research Fellow at the Frankfurt Institute for Advanced Studies where she works on physics beyond the standard model, phenomenological quantum gravity, and modifications of general relativity. The video originally appeared on BackRe(Action), Hossenfelder’s blog. If you want to know more about what is going wrong with the foundations of physics, read her book Lost in Math: How Beauty Leads Physics Astray.