In-memory computing can revolutionize AI

There is a huge need for high-performance computing devices in the world today. Whether it is sorting big data for Internet of Things applications or helping a self-driving car “see”, today’s applications require an enormous amount of number crunching to operate. Most of that number crunching is done by AI algorithms, which learn to find patterns in the vast quantities of data supplied by a system’s sensors.

This pattern finding demands a huge amount of processing power and must be done at very high speed. Once the data has been processed, the computer can decide whether any action needs to be taken, and when. The processing elements are usually highly parallel and need a steady supply of data to operate at peak performance. Traditionally, the challenge has been providing enough bandwidth for data to flow from memory to the processing elements in a timely fashion.

IBM Research has been working on this problem and has developed a method of both storing and processing data in memory. Pre-processing the data while it sits in memory reduces the need for a high-bandwidth link to the processing elements and also yields a higher overall throughput of information.

To achieve the in-memory processing capability, the IBM team used phase-change devices fabricated from a germanium antimony telluride alloy sandwiched between two electrodes. Phase-change memory is well suited to this type of application because of its fast write times: single bits can be changed without first erasing an entire block of cells.

When a small current is applied to a phase-change material, it heats up, which in turn alters its state from a disordered amorphous structure to an ordered crystalline one. It is during this crystallisation process that the IBM researchers managed to perform computational tasks.
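The key idea is that crystallisation is gradual: each heating pulse nudges the cell a little further toward the crystalline state, so the cell's conductance effectively counts the pulses it has received. The toy model below is a minimal sketch of that accumulation behaviour, not IBM's actual device physics; the class name, the linear conductance map, and the fixed per-pulse step are all illustrative assumptions.

```python
# Toy model of accumulation-based computing in a phase-change cell.
# Assumption: each sub-threshold heating pulse advances crystallisation
# by a fixed fraction; real devices follow nonlinear kinetics.

class PCMCell:
    def __init__(self, step=0.1):
        self.crystal_fraction = 0.0  # 0 = amorphous, 1 = fully crystalline
        self.step = step

    def pulse(self):
        """Apply one heating pulse, nudging the cell toward crystalline."""
        self.crystal_fraction = min(1.0, self.crystal_fraction + self.step)

    def conductance(self):
        # Conductance grows with crystalline fraction (simplified linear map).
        return self.crystal_fraction

    def reset(self):
        """A strong melt-quench pulse returns the cell to amorphous."""
        self.crystal_fraction = 0.0

cell = PCMCell(step=0.1)
for _ in range(4):
    cell.pulse()
print(round(cell.conductance(), 2))  # → 0.4: the cell has "counted" 4 events
```

Because the state is read out as an analogue conductance, a single cell can stand in for a counter or an accumulator without any data ever leaving the memory array.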

The researchers demonstrated their findings with an unsupervised learning algorithm running on one million phase-change memory devices, which was able to find temporal correlations in incoming data streams. They expect the process to provide a 200x improvement in speed and energy efficiency over traditional systems. This vast improvement makes PCM-based in-memory computation ideal for dense, low-power and massively parallel systems for AI applications.
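To make the demonstration concrete, the sketch below shows one plausible way such a scheme can work, using the toy accumulation idea rather than IBM's published algorithm: one device per event stream, and a device is pulsed only when its stream fires during a burst of high collective activity. Streams that fire together coincide with those bursts more often, so their devices crystallise faster, and a simple conductance comparison separates them. All constants, thresholds and the common-driver stream model are illustrative assumptions.

```python
# Sketch: detecting temporally correlated event streams with accumulating
# "devices" (plain floats standing in for PCM conductances).
import random

random.seed(0)
T, N_CORR, N_UNCORR = 2000, 10, 10
p = 0.1  # per-step event probability

# Correlated streams share a hidden common driver; the rest are independent.
driver = [random.random() < p for _ in range(T)]
streams = [[d and random.random() < 0.8 for d in driver] for _ in range(N_CORR)]
streams += [[random.random() < p for _ in range(T)] for _ in range(N_UNCORR)]

conductance = [0.0] * (N_CORR + N_UNCORR)
step = 0.01

for t in range(T):
    events = [s[t] for s in streams]
    # Pulse a device only when its stream fires during high collective activity.
    if sum(events) > p * len(streams):
        for i, fired in enumerate(events):
            if fired:
                conductance[i] = min(1.0, conductance[i] + step)

corr_mean = sum(conductance[:N_CORR]) / N_CORR
uncorr_mean = sum(conductance[N_CORR:]) / N_UNCORR
print(corr_mean > uncorr_mean)  # → True: correlated devices end up more crystalline
```

The appeal of doing this in memory is that the accumulate-on-coincidence step happens inside each cell as a physical side effect of the write pulse, so no per-event data needs to be shuttled to a separate processor.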

More details on the demonstration will appear in the peer-reviewed journal Nature Communications. The IBM scientists will also present another application of in-memory computing at the IEDM conference in December this year.