Greetings, dear readers!

In our recent articles, we discussed SpiNNaker—a unique solution for simulating the operation of spiking neural networks. Today, we turn to another innovative approach: the Tianjic architecture, which brings together engineering solutions for both artificial neural networks (ANN) and spiking neural networks (SNN). Why focus on Tianjic? Unlike specialized solutions such as TrueNorth, Tianjic offers a universal hybrid approach capable of running diverse neural models on a single chip. This opens up new frontiers in the development of artificial general intelligence (AGI).

Modern artificial intelligence (AI) has achieved impressive success in solving specialized tasks—from image and voice recognition to playing chess and managing autonomous vehicles. However, most existing AI systems remain “narrow”—they perform well only within predefined scenarios and require massive computational resources. Artificial general intelligence (AGI), capable of universal learning, adaptation, and multitasking, is still only an ambitious goal.

One of the main reasons for this gap lies in the fundamental differences between the two primary approaches to AI:

  1. Computer science-based methods leveraging artificial neural networks (ANN)—for example, deep convolutional or recurrent networks. These demonstrate high accuracy in image, speech, and text processing, but they demand substantial computational power and struggle with real-time processing or incomplete data.

  2. Neuromorphic approaches, inspired by the biology of the brain, using spiking neural networks (SNN). These networks mimic the way biological neurons operate by processing data via event-based impulses (spikes), achieving high energy efficiency. However, training and real-world application of these networks remain challenging.

For many years, these two directions have evolved separately, requiring different algorithms, models, and—most importantly—incompatible hardware platforms. ANNs run on GPUs and TPUs optimized for floating-point operations, while SNNs require specialized neuromorphic chips such as IBM TrueNorth or SpiNNaker. Tianjic changes this paradigm. It is the first chip of its kind to unite both approaches in a single hardware platform. Tianjic not only accelerates AI algorithms but also introduces a new architectural philosophy: a hybrid integration of ANN and SNN. As a result, Tianjic moves us closer to hardware solutions for AGI, where different types of neural networks can seamlessly interact, complementing each other’s strengths.

The History of Tianjic: From Idea to First Prototypes

Prerequisites

In the early 2010s, the scientific community faced a pressing need to merge two AI paradigms: neuromorphic computing based on spiking neural networks (SNN) and deep learning based on artificial neural networks (ANN). Despite each field's successes, their implementation on separate hardware platforms limited opportunities for interdisciplinary research and flexibility in applications. This challenge became the starting point for developing an architecture capable of uniting both approaches within a single device.

Ideas and Objectives

The notion of creating a hybrid platform that could efficiently handle both spiking (SNN) and continuous (ANN) networks emerged at Tsinghua University in Beijing. There, under the leadership of Professor Luping Shi, a group of scientists came together at the Center for Brain Inspired Computing Research (CBICR). They observed that separate hardware solutions for spiking and classical neural models were significantly impeding progress in hybrid systems. Their ambitious goal was to move closer to the hardware foundations of artificial general intelligence (AGI) by integrating the biological plausibility of neuromorphic networks with the computational power of traditional deep learning on a single chip.

Concept Development and Early Prototypes

Active work on the architecture—later named Tianjic—took place from 2015 to 2017 and resulted in the first generations of the chip. The researchers established a set of technical requirements for what they called a “Unified Functional Core” (UFC). Within each UFC, a set of hardware blocks could switch between a spiking mode and a continuous mode, providing both threshold-based signal generation and standard operations such as vector-matrix multiplication (for instance, for ReLU activation). To ensure scalability, each UFC was equipped with local memory (LSM), and data routing was managed by a Network Connector (NC) that supported the transmission of both spiking events and numeric weights.
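The dual-mode behavior of such a core can be illustrated with a short sketch: one shared vector-matrix multiplication, followed by either a ReLU activation (continuous mode) or leaky integration with a firing threshold (spiking mode). The function name, the leak factor, and the threshold value below are illustrative assumptions, not details from the Tianjic papers.

```python
import numpy as np

def unified_core_step(weights, inputs, state, mode, v_th=1.0, leak=0.9):
    """Sketch of a dual-mode core: one shared vector-matrix multiply,
    two output paths (names and parameters are illustrative)."""
    dendrite = weights @ inputs                   # shared synaptic integration
    if mode == "ann":
        return np.maximum(dendrite, 0.0), state   # ReLU activation, state unused
    # SNN mode: leaky integration, threshold comparison, reset on fire
    state = leak * state + dendrite
    spikes = (state >= v_th).astype(float)
    state = np.where(spikes > 0, 0.0, state)      # reset neurons that fired
    return spikes, state
```

The point of the sketch is that both modes reuse the same multiply-accumulate hardware; only the output stage differs, which is what makes a unified core economical.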

Hardware Implementation

By 2018, the chip layout was complete, and a test version of Tianjic was manufactured. A 28-nm fabrication process was used, allowing more than a hundred UFC cores to be integrated on a single chip. According to the authors, the early Tianjic version contained 156 such "universal blocks."

Scientific Impact

In 2019, the journal Nature published an article, "Towards artificial general intelligence with hybrid Tianjic chip architecture", featuring Tianjic on its cover. The paper garnered significant attention in both the scientific community and the media. The authors vividly demonstrated that the same chip could run classical operations of convolutional neural networks (CNNs) and recurrent neural networks with long short-term memory (LSTM) mechanisms, as well as process spiking signals characteristic of biologically plausible models. Moreover, energy consumption remained at or below the level of certain specialized GPUs and ASIC solutions.


Fig. 1. The topology of the Tianjic microchip and its integration into a multi-chip computing system

Further Development

Since the publication, the Tianjic team has continued refining the chip, releasing new versions built on more advanced manufacturing processes and adding modes for biologically inspired dynamic networks (BDyNN). In parallel, software tools, including compilers and frameworks, have been developed to automate the mapping of various types of neural networks onto UFC cores, with optimizations such as skipping rows and columns of negligible weights. In the AI world, Tianjic is no longer seen as a curiosity but as an important step toward a "general" neuro-computational approach, in which classic deep learning can coexist with spiking algorithms on the same chip, forming a foundation for future AGI hardware platforms.
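The row/column-skipping optimization mentioned above can be sketched in a few lines: if an input element is zero (common with sparse spiking activity) or a weight row is entirely zero, the corresponding multiply-accumulate work can be skipped at compile time. The function and parameter names are illustrative, not part of Tianjic's actual toolchain.

```python
import numpy as np

def vmm_with_skipping(weights, x, eps=1e-6):
    """Sketch of row/column skipping in a vector-matrix multiply:
    columns whose input is (near) zero and rows whose weights are all
    (near) zero contribute nothing and are not computed."""
    active_cols = np.flatnonzero(np.abs(x) > eps)               # skip zero inputs
    active_rows = np.flatnonzero(np.abs(weights).sum(axis=1) > eps)  # skip empty rows
    y = np.zeros(weights.shape[0])
    y[active_rows] = weights[np.ix_(active_rows, active_cols)] @ x[active_cols]
    return y
```

The result matches a dense multiply, but the inner product only touches the active rows and columns, which is where the savings come from on sparse workloads.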

Tianjic Architecture

The key component of the architecture is a set of specialized computational cores called FCore. Each core implements a module that simulates a basic unit of neural processing and includes the following structural components:

  • Axon (input interface): Receives incoming signals, which can be either analog values (for ANN) or discrete impulses (for SNN).

  • Dendrite (with local memory): Each FCore contains built-in SRAM arranged in a matrix (for example, 256×256) to store weights for synaptic connections. This local memory significantly reduces access latency compared to the traditional “chip–main memory” architecture because most operations take place without resorting to external memory.

  • Soma (nonlinear processing block): Depending on the operating mode, the soma block either performs an arbitrary nonlinear function (implemented via a programmable LUT) for ANN mode, or models spiking neuron dynamics (e.g., a Leaky Integrate-and-Fire, LIF, model) for SNN mode.

  • Router (data router): Responsible for data transfer between cores using a unified protocol for both computing types. It ensures high-speed, asynchronous interaction among FCores, which is critical for maintaining parallelism and distributed data processing.
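To make the "programmable LUT" idea in the soma block concrete, here is a small sketch of how an arbitrary nonlinearity can be approximated by table lookup: the input range is discretized, the function values are precomputed into a table, and at run time the block only indexes into it. The table size, input range, and the choice of ReLU as the stored function are illustrative assumptions.

```python
import numpy as np

# Sketch of a LUT-based activation, in the spirit of a programmable soma
# block. Table size and input range are illustrative, not Tianjic's specs.
LUT_SIZE, LO, HI = 256, -4.0, 4.0
lut = np.maximum(np.linspace(LO, HI, LUT_SIZE), 0.0)  # precomputed ReLU values

def lut_activation(x):
    """Map inputs to table indices and read the precomputed values."""
    idx = ((x - LO) / (HI - LO) * (LUT_SIZE - 1)).round().astype(int)
    idx = np.clip(idx, 0, LUT_SIZE - 1)
    return lut[idx]
```

Swapping the stored values (e.g., for a sigmoid or tanh) changes the activation function without changing the hardware datapath, which is exactly what makes the LUT approach flexible.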


Fig. 2. Overview of the FCore architecture

Each Tianjic core is designed to be dynamically configurable for either ANN or SNN operation. This versatility allows the same hardware block to handle heterogeneous computational tasks, which is especially important for hybrid neural networks.

A Hybrid Computing Model

One of Tianjic’s standout features is its ability to simultaneously perform continuous computations and process spiking signals. This is achieved through:

  • Mode Switching: Each FCore’s configuration is set by a compiler that determines how a particular core will operate. Within a single neural network, some layers can perform traditional operations (convolutions, fully connected layers), while others handle biologically plausible spiking.

  • Unified Packet Format: A single protocol is used for data exchange between cores, encoding both continuous values and event-based signals (Address-Event Representation, AER). This simplifies the integration of heterogeneous computing models into one system and enables efficient handling of asynchronous events.
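The idea of a unified packet format can be illustrated with a minimal sketch in the spirit of Address-Event Representation: a bare spike needs only its destination address, while an ANN activation additionally carries a payload. The field names and the 8-bit payload width here are assumptions for illustration, not Tianjic's actual packet layout.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    """Sketch of a unified ANN/SNN packet (fields are illustrative)."""
    dst_core: tuple      # (x, y) of the destination core in the 2D mesh
    axon_addr: int       # target axon index within that core
    payload: int = 1     # 1 for a bare spike; an 8-bit activation otherwise

def make_spike(dst_core, axon_addr):
    return Packet(dst_core, axon_addr)            # event only, no data field

def make_activation(dst_core, axon_addr, value):
    assert 0 <= value < 256, "8-bit payload assumed"
    return Packet(dst_core, axon_addr, value)
```

Because both packet kinds share one structure, the routers need no knowledge of whether a core is running in ANN or SNN mode; only the receiving core interprets the payload.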


Fig. 3. Example of a hybrid ANN/SNN neural network

Communication Network

Interaction among FCores is implemented through a two-dimensional (2D) mesh network, which provides several advantages:

  • High Throughput: Local connections enable data transfer between cores with minimal latency, achieving internal data rates on the order of hundreds of gigabytes per second.

  • Asynchronous Exchange: By using AER, cores transmit information only when an event occurs, saving energy and reducing unnecessary data traffic.

  • Scalability: The mesh network architecture allows multiple chips to be integrated into one system, expanding the number of neurons and synapses when tackling more complex models.
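For intuition about how packets traverse such a mesh, here is a sketch of dimension-order (XY) routing, a common scheme for 2D mesh networks-on-chip: a packet first travels along the X axis to the destination column, then along Y. The source above does not state which routing algorithm Tianjic actually uses, so treat this as a generic illustration.

```python
def xy_route(src, dst):
    """Sketch of dimension-order (XY) routing on a 2D mesh.
    Returns the list of intermediate and final hops from src to dst."""
    x, y = src
    hops = []
    while x != dst[0]:                  # travel along X first
        x += 1 if dst[0] > x else -1
        hops.append((x, y))
    while y != dst[1]:                  # then along Y
        y += 1 if dst[1] > y else -1
        hops.append((x, y))
    return hops
```

Dimension-order routing is deterministic and deadlock-free on a mesh, which is one reason it is popular in on-chip networks where every hop costs latency and energy.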


Fig. 4. Scalable routing infrastructure

Energy Efficiency Optimization

Tianjic is designed with minimal energy consumption in mind. Local data storage, asynchronous data exchange, and specialized computing blocks significantly reduce energy costs during neural network operations. The use of 8-bit arithmetic for weights and activations has only a minor impact on computational accuracy, making the architecture ideal for applications where speed and energy efficiency are paramount.
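The 8-bit arithmetic mentioned above can be made concrete with a sketch of symmetric int8 quantization: weights are scaled into the [-127, 127] range, stored as 8-bit integers, and rescaled on use. The exact quantization scheme Tianjic employs is not specified in the text, so this is a generic illustration of the accuracy/efficiency trade-off.

```python
import numpy as np

def quantize_int8(w):
    """Sketch of symmetric 8-bit quantization (scheme is illustrative)."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return q.astype(np.float32) * scale
```

Storing a weight in 8 bits instead of 32 cuts memory traffic by 4x, while the round-trip error stays bounded by half a quantization step, which is why accuracy typically degrades only slightly.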

Role of Specialized Hardware Optimizations

The technical solutions implemented in Tianjic reflect a targeted optimization for hybrid computational tasks:

  • Localizing Computations: Direct storage of weights and intermediate data in each FCore minimizes the need for slow external memory calls.

  • Programmable Flexibility: The ability to switch FCore operation modes allows the hardware to be dynamically adapted to specific computational models, which is critical when implementing hybrid neural networks.

  • Synchronization and Routing: The routing network, based on asynchronous packet exchange, ensures that even under a high degree of parallelism, data is transferred with minimal latency, critical for real-time processing.


Fig. 5. Topological diagram of UFC cores

Tianjic’s architecture is a comprehensive, innovative solution combining the strengths of traditional neural network models (ANN) with those of biologically inspired spiking neural networks (SNN). This hybrid approach not only significantly boosts inference speed and reduces power consumption but also provides a platform for further research into artificial general intelligence. Its multi-layered design—from specialized computing cores to a high-speed communication network—demonstrates how a deep integration of hardware and software can tackle challenges once considered incompatible on a single device.

We now conclude the first part of our article, where we explored Tianjic’s key ideas, architectural features, and the history of its development. In the second part, we will continue our discussion by looking at how to program the system, examining its practical demonstrations, and comparing Tianjic with other neuromorphic solutions—SpiNNaker and TrueNorth. We will also address the limitations and challenges developers face on the path to creating hardware platforms for AGI. Stay tuned, and see you soon!

Thank you for being with us!
Regards, the MemriLab Team!

Sources:

  1. Towards artificial general intelligence with hybrid Tianjic chip architecture

  2. Tianjic: A Unified and Scalable Chip Bridging Spike-Based and Continuous Neural Computation

  3. Neuromorphic artificial intelligence systems

  4. News-Tsinghua University