Why Research on Insect Vision Will Change Your Life

Have you ever wondered why it is so hard to hit a fly? The little dudes seem to have a sixth sense: they take off the instant you decide to swat them, then quickly race here and there, faster than your head can turn, navigating around obstacles and ending up where you can’t find them.

Sure, with enough patience you can (sometimes) track down the pesky insects and fling them to the next level of existence, but think how hard it is for you – with a brain of somewhere close to 100 billion neurons – to defeat an organism with only a millionth that number of brain cells (about 100,000).

That’s right, an animal with a brain literally a millionth as complex as yours – and about the size of a poppy seed – can outsmart and outmaneuver you for longer than you’d like to admit.

Humbling as this realization is, the virtuosity of fly avoidance behaviors offers hope for a brighter future for all of us in AI technology: from self-driving cars, to internet search engines that know exactly what you’re really looking for, to lightning-fast, accurate medical diagnoses and treatments.

I offer the fly’s brain as a model for future AI as someone who, in recent years, has run into major problems developing applications with more conventional versions of AI, including popular machine learning (ML) systems such as TensorFlow and Random Forest.

Such AI systems can do wonderful things like face recognition, speech-to-text transcription, and other “tight,” severely constrained tasks – provided you feed the ML systems tons of training patterns that (and I am simplifying here) cover every possible combination of stimulus cues the system is likely to encounter in actual operation.
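To make the training-pattern point concrete, here is a minimal sketch (my illustration, not from the article) of the conventional ML workflow using scikit-learn’s Random Forest; the toy dataset and parameters are assumptions chosen only to show the shape of the process.

```python
# A minimal sketch of conventional supervised ML: a Random Forest
# learns a narrow, "tight" task only from the stimulus combinations
# it is shown during training. Dataset and parameters are illustrative.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # small 8x8 handwritten-digit images
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)  # needs many labeled examples to work at all

print(f"Accuracy on familiar-looking digits: {clf.score(X_test, y_test):.2f}")
```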

But ML systems are notoriously “brittle” and break down when you present them with stimuli they have never seen before, or when you ask them to venture outside of their assigned narrow task.
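And a hedged sketch of that brittleness (again my example, not the author’s): feed the same kind of classifier stimuli utterly unlike its training data, and it still answers – often confidently – because it has no concept of being outside its narrow task.

```python
# Brittleness sketch: a model trained on digits is shown pure noise.
# It cannot say "I have never seen anything like this"; it just labels.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier

X, y = load_digits(return_X_y=True)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

rng = np.random.default_rng(0)
noise = rng.uniform(0, 16, size=(5, X.shape[1]))  # digit pixels span 0-16

print("Labels for noise:", clf.predict(noise))
print("Top confidence:  ", clf.predict_proba(noise).max(axis=1))
```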

Worse still, the AI experts I have consulted agree that there is nothing on the horizon of AI research that promises ML systems – or any other form of AI – that come anywhere close to the impressive performance of a fly’s brain.

Think about what the fly must do to avoid dying prematurely in your kitchen: regardless of your size, your clothing, your direction or speed of approach, regardless of the lighting conditions in the room or the size, shape, or color of the obstacles it must avoid in order to escape you, the fly performs its evasive maneuvers brilliantly, launching just in time to dodge your claps and zigzagging around walls, hanging pots, refrigerators, windows – you name it. Then the fly has to find a safe place to land, wait out a safe interval, and navigate around arbitrary obstacles back to the food source in your kitchen that originally attracted it.

In other words, unlike the best modern AIs, the fly’s brain is anything but brittle and narrow: it can generalize what constitutes a threat, an obstacle, or a safe landing spot under incredibly large fluctuations in stimulus conditions (lighting, color, shape, size, texture, etc.).

If we could somehow duplicate a fly’s brain in computer chips and software, perhaps we could build a visual AI system just as flexible, adaptable, and “non-brittle” for critical applications such as self-driving cars.

With this in mind, neuroscientist Louis Scheffer and colleagues at the Howard Hughes Medical Institute have mapped not only all the neurons in the fly brain but also all the synaptic connections between those neurons, using advanced techniques such as the “dense reconstruction” of many electron microscope sections through a fly’s brain, creating a complete “connectome” of the small animal’s brain.

Map of the Drosophila Brain

Source: CC4

This was a daunting task because, simple as the fly’s brain is, Dr. Scheffer et al. still had to map both the 100,000 neurons in the fly’s brain and around 20,000,000 synapses to describe this “connectome.”

Dr. Scheffer and the other Howard Hughes researchers have made this “connectome” freely available to AI researchers, who could use it to reconstruct the fly’s visual system in silicon, so to speak – endowing, for example, self-driving cars with capabilities comparable to those of a fly.
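To give a feel for what “using the connectome” could mean in practice, here is a hedged sketch that treats a neuron-and-synapse table as a directed graph; the file name, format, and column names are hypothetical, since the published data is distributed through its own tools and formats.

```python
# Sketch: load a hypothetical connectome edge list (pre-synaptic neuron,
# post-synaptic neuron, synapse count) into a directed graph.
import csv
import networkx as nx

G = nx.DiGraph()
with open("fly_connectome_edges.csv") as f:  # hypothetical export
    for row in csv.DictReader(f):  # assumed columns: pre_id, post_id, weight
        G.add_edge(row["pre_id"], row["post_id"], weight=int(row["weight"]))

print(f"{G.number_of_nodes()} neurons, {G.number_of_edges()} connections")

# Heavily connected neurons hint at circuits one might try to
# reconstruct "in silicon."
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:10]
print("Most-connected neurons:", hubs)
```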

Of course, if this idea works, the AI developers who reconstruct the fly’s brain will not know how the new vision system actually does what it does. But that is already the case with “deep learning” neural networks, which perform tasks like facial recognition without their designers having the first clue as to how the networks they create do what they do.

All AI designers know right now is that, over many, many training attempts, the neural networks they create magically wire themselves in a way that solves the problem at hand, without the designers having a deep understanding of how the resulting network functions. In the AI arena, this is known as the “black box” problem: the AI works, but its functionality is obscured, as if the system were locked in an opaque black box.
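As a small illustration of the black-box point (my example, with an assumed toy task): even for a tiny trained network, every weight is plainly visible, yet the raw numbers say almost nothing about how the network solves the problem.

```python
# Train a tiny neural network on XOR, then inspect its learned weights.
# The weights are fully inspectable yet uninterpretable -- a "black box."
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR

net = MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                    max_iter=5000, random_state=1).fit(X, y)

print("Predictions:", net.predict(X))  # the network (usually) learns XOR...
for i, W in enumerate(net.coefs_):     # ...but try reading *how* from these
    print(f"Layer {i} weights:\n{W.round(2)}")
```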

There will be both bad and good news when AI developers manage to integrate a fly’s brain into your next car’s AI vision system.

The bad news is that even though your car will navigate flawlessly in complex environments (as a fly does), it will still be a “black box” with potentially unpredictable behavior. For example, if your car sees you coming out of a hardware store with a new fly swatter, it might start the engine and quickly drive away from you.

But here’s the good news: when this happens, you should be able to quickly find your evasive new vehicle by looking for the nearest fresh pile of dog poop that your car stopped to check out.