What do we need neuromorphic processors for? | Kaspersky official blog
Credit to Author: Hugh Aver | Date: Tue, 28 Jun 2022 22:12:36 +0000
Kaspersky recently announced an investment in Motive NT, the company developing the in-house “Altai” neuroprocessor. Let’s take a look at what neuroprocessors are, how they differ from conventional processors, and why this field looks so promising for the future of computing.
Computer brain
Any modern computer, tablet, smartphone, network device or digital player has a central processing unit (CPU): a general-purpose electronic circuit that executes program code. The operating principles of the traditional processor were laid down way back in the 1940s and, perhaps surprisingly, haven’t changed much since: a CPU reads instructions and executes them one after another. At the CPU level, any program is broken down into the simplest of tasks: commands like “read from memory”, “write to memory”, “add two numbers”, “multiply”, “divide”, and so on. There are many nuances to how CPUs operate, but what matters for today’s discussion is that for a long time a CPU could perform only one operation per clock cycle. Those cycles could be very numerous indeed: at first hundreds of thousands, then millions, and today billions per second. Nevertheless, until relatively recently (the mid-2000s), a typical home computer or laptop had only one single-core processor.
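As a rough illustration, the short Python sketch below uses the standard dis module to show how a single line of high-level code breaks down into a sequence of elementary steps (load, multiply, add, return). Python bytecode isn’t the same as a CPU’s machine instructions, but the “one simple step at a time” principle is the same.

```python
import dis

def weighted_sum(a, b):
    # One line of high-level code...
    return a * 2 + b

# ...turns into a sequence of simple steps executed one after another:
# roughly "load a, load the constant 2, multiply, load b, add, return".
dis.dis(weighted_sum)
```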
Multitasking, or the ability to run several programs at once on one CPU, was achieved through resource allocation: a few clock cycles are given to one program, then the resources are handed to another, then a third, and so on. When affordable multicore processors came onto the market, resources could be allocated more efficiently: not only could different programs run on different cores, but a single program could be executed on several cores simultaneously. At first this was no easy task, and for some time many programs and games were not optimized for multicore or multiprocessor systems.
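As a minimal sketch of the idea, the hypothetical Python example below uses the standard multiprocessing module to spread the same simple job across all available cores; the task and the numbers are arbitrary, and only illustrate how one program can run on several cores at once.

```python
from multiprocessing import Pool
import os

def heavy_task(n):
    # Simulate a compute-heavy job: sum of squares up to n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    workloads = [10_000_000] * 8
    # The pool distributes the eight jobs across the available CPU cores,
    # so they run in parallel instead of one after another.
    with Pool(processes=os.cpu_count()) as pool:
        results = pool.map(heavy_task, workloads)
    print(results[:2])
```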
Today, CPUs available to home users can have 16 or even 32 cores. That’s an impressive figure, but far from the maximum possible, even for conventional consumer technology. The Nvidia GeForce RTX 3080 Ti video card, for instance, has 10,240 cores! So why the huge difference? Because a traditional CPU core is far more complicated than the processing cores found on video cards. A CPU core already performs a fairly limited set of simple operations, but the specialized cores of a graphics processing unit (GPU) are even more primitive: each is capable of only the most basic operations, yet performs them very quickly, which comes in handy when billions of such operations need to be done every second. In computer games, for example, calculating the lighting of a scene requires a lot of relatively simple computations for every point in the image.
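To give a feel for this kind of workload, here is a rough sketch (using NumPy on an ordinary CPU as a stand-in) of per-pixel diffuse lighting: the same trivial dot product repeated about two million times, once for every pixel of a Full HD frame. The scene and numbers are made up; the point is the sheer quantity of identical, simple operations.

```python
import numpy as np

# A 1080p "scene": one surface normal per pixel (all pointing up, for simplicity).
height, width = 1080, 1920
normals = np.zeros((height, width, 3))
normals[..., 2] = 1.0                      # every normal is (0, 0, 1)

light_dir = np.array([0.3, 0.3, 0.9])
light_dir /= np.linalg.norm(light_dir)     # normalize the light direction

# Diffuse (Lambert) shading: one dot product per pixel, i.e. roughly two million
# tiny, identical computations -- exactly the kind of job GPUs excel at.
brightness = np.clip(normals @ light_dir, 0.0, 1.0)
print(brightness.shape, brightness.max())
```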
Despite these nuances, the processing cores found in conventional CPUs and video cards don’t differ fundamentally from each other. Neuromorphic processors, however, are radically different from both CPUs and GPUs. They don’t try to implement a set of elements for performing arithmetic operations, whether sequentially or in parallel. Instead, they aim to reproduce the structure of the human brain!
In computing, the smallest building block is the lowly transistor: a typical CPU in any computer or smartphone contains several billion of these microscopic elements. In the human brain, the equivalent basic element is the neuron, or nerve cell, and neurons are connected to each other by synapses. Several tens of billions of neurons make up the human brain, a highly complex self-learning system. For decades, the discipline known as neuromorphic engineering has focused on reproducing, at least partially, the structure of the human brain in the form of electronic circuits. The Altai processor, developed using this approach, is essentially a hardware implementation of brain tissue, with its neurons and synapses.
Neuroprocessors and neural networks
But let’s not get ahead of ourselves. Although researchers have succeeded in reproducing certain elements of the brain structure using semiconductors, this doesn’t mean we’ll be seeing digital copies of humans any time soon. Such a task is way too complicated, though it does represent the holy grail of such research. In the meantime, neuroprocessors — semiconductor copies of our brain structure — have some rather practical applications. They are needed to implement machine-learning systems and the neural networks that underpin them.
A neural network or, more precisely, an artificial neural network (as opposed to the natural one inside our heads) consists of a set of cells capable of processing and storing information. The classic model of a neural network, the perceptron, was developed back in the 1960s. This set of cells can be compared to a camera’s image sensor, but one that is also capable of learning, interpreting the resulting image, and finding patterns in it. Special connections between the cells, and different types of cells, process information so as to distinguish, for example, between alphabet cards held in front of the lens. But that was 60 years ago. Since then, and over the past decade especially, machine learning and neural networks have become commonplace in many mundane tasks.
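As a rough illustration of the idea, and not the original 1960s hardware, here is a minimal perceptron in Python: a handful of weighted connections to the inputs, plus the classic learning rule that nudges those weights whenever the answer is wrong. The task (the logical OR function) is chosen purely for brevity.

```python
import numpy as np

# Training data: the logical OR function, a trivially separable "pattern".
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

weights = np.zeros(2)
bias = 0.0
lr = 0.1  # learning rate

# Classic perceptron learning rule: adjust the weights whenever the output is wrong.
for epoch in range(20):
    for inputs, target in zip(X, y):
        output = 1 if inputs @ weights + bias > 0 else 0
        error = target - output
        weights += lr * error * inputs
        bias += lr * error

print(weights, bias)
print([1 if x @ weights + bias > 0 else 0 for x in X])  # should reproduce y
```

Modern networks replace the hard threshold with smooth activation functions and stack many such layers, but the principle of adjusting weighted connections in response to errors is the same.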
The problem of recognizing letters of the alphabet has long been solved; as motorists know only too well, speed cameras can recognize a vehicle’s license plate from any angle, day or night, even if it’s covered in mud. Another typical task for a neural network is to take a photo (of a stadium from above, say) and count the number of people in it. These tasks have something in common: the inputs are always slightly different. An ordinary, old-fashioned program might be able to recognize a license plate photographed head-on, but not one shot at an angle. So to train a neural network, we feed it myriad photos of license plates (or whatever else), and it learns to distinguish the letters and numbers a plate consists of (or whatever other features the input has). Sometimes it becomes so expert that, in the medical field say, it can make a diagnosis better, or earlier, than a doctor.
But let’s get back to the implementation of neural networks. The computations required to run a neural network algorithm are rather simple, but there are an awful lot of them. This job is best suited not to a traditional CPU but to a video card, with its thousands or tens of thousands of computation modules. It’s also possible to build an even more specialized chip that performs only the set of computations needed for a particular learning algorithm; that would be a little cheaper and a touch more efficient. But all of these devices still build the neural network (the set of cell nodes that perceive and process information, connected to each other by multiple links) at the software level, whereas a neuroprocessor implements the neural network scheme at the hardware level.
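To show what “simple but numerous” means in practice, here is a toy sketch of a single fully connected layer: nothing but multiplications and additions, roughly a million of each for the arbitrary sizes chosen below. Whether those operations run on a CPU, a GPU or dedicated hardware is exactly the implementation choice discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy fully connected layer: 1,000 inputs feeding 1,000 "cells".
inputs = rng.standard_normal(1000)
weights = rng.standard_normal((1000, 1000))   # one weight per connection
biases = rng.standard_normal(1000)

# One layer = about a million multiplications plus about a million additions,
# every one of them trivial. The work is huge only in quantity, which is why
# massively parallel hardware handles it so well.
activations = np.maximum(weights @ inputs + biases, 0.0)   # ReLU non-linearity
print(activations.shape)
```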
This hardware implementation is significantly more efficient. Intel’s Loihi neuroprocessor consists of 131,072 artificial neurons interconnected by a great many more synapses (over 130 million). An important advantage of this design is low power consumption when idle, whereas conventional GPUs are energy-hungry even when doing nothing useful. This, plus theoretically higher performance in neural network training tasks, translates into much lower power consumption overall. The first generation of the Altai processor, for instance, consumes a thousand times less power than an analogous GPU implementation.
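Chips like Loihi implement spiking neurons in silicon. The sketch below is only a software imitation of one such neuron (a leaky integrate-and-fire model with arbitrary parameters), but it shows where the idle-power savings come from: the neuron does essentially nothing until enough input spikes arrive.

```python
import numpy as np

rng = np.random.default_rng(1)

# Leaky integrate-and-fire neuron: the membrane potential leaks away over time,
# incoming spikes push it up, and crossing the threshold produces an output spike.
leak = 0.95          # how much of the potential survives each time step
threshold = 1.0      # firing threshold
weight = 0.4         # contribution of one incoming spike
potential = 0.0
spikes_out = []

incoming = rng.random(100) < 0.3   # sparse input: a spike arrives on ~30% of steps

for spike_in in incoming:
    potential = potential * leak + (weight if spike_in else 0.0)
    if potential >= threshold:
        spikes_out.append(1)       # the neuron fires...
        potential = 0.0            # ...and resets
    else:
        spikes_out.append(0)       # below threshold: nothing to communicate, and on
                                   # a neuromorphic chip, next to no power drawn

print(sum(spikes_out), "spikes out of", len(spikes_out), "time steps")
```

In a real neuromorphic chip this loop doesn’t exist as code: every silicon neuron updates its own potential in parallel, and only the spikes travel between them.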
Neural networks and security
130,000 neurons are far fewer than the tens of billions in the human brain. The research that will bring us closer to a fuller understanding of how the brain works, and to much more efficient self-learning systems, has only just begun. Importantly, though, neuroprocessors are already in demand, since in theory they let us solve existing problems more effectively. A pattern recognizer built into your smartphone that can distinguish, say, between the different kinds of berries you’re out picking is just one example. Specialized chips for processing video and similar tasks are already embedded in our smartphones and laptops en masse. Neuroprocessors take the idea of machine learning several steps further, providing a more effective solution.
Why is this area of interest to Kaspersky? First, our products already make active use of neural networks, and of machine-learning technologies in general. These include technologies for processing vast quantities of information about the operation of a corporate network, such as monitoring the data that nodes exchange with each other or with the outside world. Machine-learning technologies allow us to identify anomalies in this traffic and spot unusual activity, which may be the result of an intrusion or the malicious actions of an insider. Second, Kaspersky is developing its own operating system, KasperskyOS, which guarantees secure execution of the tasks assigned to devices under its control. Integrating hardware neural networks into KasperskyOS-based devices looks very promising for the future.
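Purely as an illustration of the general approach (this is not Kaspersky’s actual technology or data), the sketch below trains scikit-learn’s IsolationForest on synthetic “normal” traffic features and then flags connections that look nothing like them.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-connection features: [bytes sent, bytes received, duration in s].
normal_traffic = rng.normal(loc=[5_000, 20_000, 30],
                            scale=[1_000, 5_000, 10],
                            size=(1000, 3))
suspicious = np.array([[900_000, 1_000, 2],    # huge upload over a short-lived connection
                       [400_000, 500, 1]])

# Learn what "normal" looks like, then score new connections against it.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# predict() returns 1 for "looks normal" and -1 for "anomalous".
print(model.predict(suspicious))         # expected to flag both: [-1 -1]
print(model.predict(normal_traffic[:3])) # expected to pass: [1 1 1]
```

In production the features, models and thresholds would be far more elaborate; the sketch only shows the basic pattern of learning what normal looks like and flagging what isn’t.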
At the very end of all this progress lies the emergence of true AI: a machine that not only solves the tasks we set for it, but sets (and likewise solves) its own. That will be fraught with ethical issues, and it will surely be hard for people to accept that a subservient machine has become smarter than its creator. Still, that’s all a long way off. About five years ago, everyone was sure that self-driving cars were literally around the corner and just needed fine-tuning. Such systems are also closely linked to machine learning, and in 2022 the opportunities in this field are still counterbalanced by the problems. Even the narrow task of driving a car, which humans manage reasonably well, cannot yet be fully entrusted to a robot. That’s why new developments in this area matter, at the level of software and ideas as well as hardware. Taken together, they may not yet bring about the smart robots of sci-fi books and movies, but they will definitely make our lives a little bit easier and safer.