Why does coding exist in the nervous system

Furthermore, we studied the impact of input noise variation on the different coding schemes. Additive white Gaussian noise (AWGN) with different standard deviations was added separately to the training and to the inference dataset images. Figure 8A shows the accuracy loss after noise was added to the training images: as the standard deviation increases, the noise has the largest impact on the burst coding and TTFS coding schemes.
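As a rough illustration of this setup (not the authors' exact pipeline; the noise levels, image shapes, and clipping are assumptions), the AWGN injection can be sketched as follows:

```python
import numpy as np

def add_awgn(images, std, clip=True, seed=None):
    """Add zero-mean additive white Gaussian noise to images in [0, 1]."""
    rng = np.random.default_rng(seed)
    noisy = images + rng.normal(0.0, std, size=images.shape)
    return np.clip(noisy, 0.0, 1.0) if clip else noisy

# Example: corrupt MNIST-like images with increasing noise levels.
images = np.random.rand(100, 28, 28)            # stand-in for normalized MNIST images
for std in (0.1, 0.2, 0.3):
    noisy_train = add_awgn(images, std, seed=0)  # noise added to training images (Figure 8A setting)
    noisy_test = add_awgn(images, std, seed=1)   # noise added to inference images (Figure 8B setting)
```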

Phase coding shows the highest resilience. When the noise was instead introduced during inference (Figure 8B), its impact on all coding schemes was reduced. Phase coding again shows the highest resilience, while TTFS coding and rate coding are the worst affected. In the phase coding scheme, the noise effect is attenuated by the spike weight, which decreases with increasing phase: because most noise values are small, the resulting noisy spikes fall in the later phases, where they carry little weight.
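The weighted-spike mechanism can be made concrete with a small sketch. Here we assume a common phase-coding convention in which phase k carries weight 2^-(k+1) and the pixel's binary expansion decides where spikes occur; the exact weighting used in this work may differ:

```python
def phase_encode(pixel, n_phases=8):
    """Encode a pixel in [0, 1] as one spike per phase window.

    Phase k carries weight 2**-(k+1); the pixel's n_phases-bit binary
    expansion determines whether a spike is emitted in each phase.
    """
    level = int(round(pixel * (2 ** n_phases - 1)))
    bits = [(level >> (n_phases - 1 - k)) & 1 for k in range(n_phases)]
    weights = [2.0 ** -(k + 1) for k in range(n_phases)]
    return bits, weights

# A small input perturbation mostly flips low-order bits, i.e. spikes in the
# late phases whose weights are tiny, so the decoded value barely changes.
bits, w = phase_encode(0.53)
decoded = sum(b * wk for b, wk in zip(bits, w))
```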

In the TTFS coding scheme, the input information is carried by the times of the first spikes, and noise can easily disturb these timings; even small noise values can cause large errors in the representation. Therefore, phase coding is the most resilient scheme to both training and inference input noise, while TTFS coding has the worst overall resilience.

Figure 8. (A) The noise was added to the training images; (B) the noise was added to the inference images.
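For contrast, a minimal latency (TTFS) encoder shows why timing-based codes are fragile: any perturbation of the pixel value shifts the single informative spike. The linear intensity-to-latency mapping and the time scale below are assumptions:

```python
import numpy as np

def ttfs_encode(pixels, t_max=100):
    """Map pixel intensity in [0, 1] to a single first-spike time:
    brighter pixels fire earlier; zero-intensity pixels never fire."""
    pixels = np.asarray(pixels, dtype=float)
    return np.where(pixels > 0, (1.0 - pixels) * t_max, np.inf)

clean = np.array([0.9, 0.5, 0.1])
noisy = clean + np.random.default_rng(0).normal(0, 0.05, size=3)
print(ttfs_encode(clean))   # e.g. [10. 50. 90.]
print(ttfs_encode(noisy))   # every spike time shifts, directly distorting the code
```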

Computations in neural networks involve a massive number of parameters that scale up drastically with the number of neurons and layers, which imposes a huge burden on hardware resources, limits processing speed, and consumes a large amount of energy. Network compression techniques have been proposed to tackle these challenges. Pruning and quantization are the most widely favored compression techniques because of their simple implementation and high effectiveness. In this section, we study the impact of weight pruning on the coding schemes and evaluate the capability of each coding scheme to achieve efficient network compression.

In this work, we considered two weight pruning methods: an online pruning method and a post-training pruning method. In the online pruning method, a constant weight pruning threshold was applied while training was in progress. The pruning process started after a short training phase of 30, training images and continued for 10 training epochs. This pre-pruning training phase was enforced to ensure that the network first learned the major input features (Guo et al.). In the post-training pruning method, the SNN was trained for 10 epochs, and the pruning was performed before inference.
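A minimal sketch of the online pruning rule, assuming a dense weight matrix, a constant magnitude threshold, and a mask that keeps pruned synapses at zero for the rest of training (layer sizes and threshold values are placeholders):

```python
import numpy as np

def prune_online(weights, threshold):
    """Zero out weights whose magnitude falls below a constant threshold
    and return a binary mask that keeps them at zero afterwards."""
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

def connectivity(mask):
    """Percentage of unpruned weights over the total number of weights."""
    return 100.0 * mask.sum() / mask.size

# Inside the training loop (after the short pre-pruning phase):
W = np.random.randn(784, 400) * 0.1
W, mask = prune_online(W, threshold=0.05)
# Subsequent weight updates are masked so pruned synapses stay removed.
dW = np.random.randn(*W.shape) * 1e-3
W += dW * mask
print(f"connectivity: {connectivity(mask):.1f}%")
```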

Various pruning thresholds were used. Both network connectivity, defined as the percentage of unpruned weights over the total number of weights, and accuracy decreased as the pruning threshold increased. Figure 9A displays the results for the online pruning method; error bars denote the standard error of the mean. The difference between TTFS coding and the other schemes during training is clearly distinguishable. Figure 9B shows the results for the post-training pruning method.

Here the difference between the coding schemes is negligible. Therefore, TTFS coding is the least capable of achieving efficient network compression through pruning during training.

Figure 9. (A) An online weight pruning method and (B) a post-training weight pruning method were considered.

Weight quantization brings a large benefit in reducing memory size and energy consumption.

However, quantization induces numerical errors and limits the precision of arithmetic computation. For the implementation of weight quantization, we used the stochastic rounding (SR) method, because it ensures a non-zero probability that a small weight update will not be rounded to 0 (Gupta et al.). The SR method rounds a number x to one of its two neighboring fixed-point values with probability proportional to proximity: with quantization resolution ε, x is rounded down to ⌊x⌋ with probability 1 − (x − ⌊x⌋)/ε and rounded up to ⌊x⌋ + ε with probability (x − ⌊x⌋)/ε, so the rounding is unbiased in expectation.
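A compact implementation of stochastic rounding to a fixed-point grid, following the scheme attributed to Gupta et al. above; the mapping from bit width to integer and fractional bits below is an assumption:

```python
import numpy as np

def stochastic_round(x, n_bits, frac_bits):
    """Stochastically round x to a fixed-point grid with resolution eps = 2**-frac_bits.

    Rounds down with probability 1 - (x - floor)/eps and up otherwise, so the
    expected value of the rounded number equals x."""
    eps = 2.0 ** -frac_bits
    floor = np.floor(x / eps) * eps
    p_up = (x - floor) / eps
    rounded = floor + eps * (np.random.random_sample(np.shape(x)) < p_up)
    limit = 2.0 ** (n_bits - frac_bits - 1)      # symmetric saturation range
    return np.clip(rounded, -limit, limit - eps)

w = np.array([0.1234, -0.0005, 0.5001])
print(stochastic_round(w, n_bits=8, frac_bits=6))
```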

The simulation results of the accuracy loss caused by quantization for the different coding schemes are shown in Figure 10. The results for the training phase were obtained after 10 epochs, and error bars were computed over 10 simulations for each scheme. As Figure 10A shows, a severe accuracy drop occurs when the bit width is reduced below six bits for quantization during training.

For post-training quantization, significant accuracy loss is observed when the bit width is less than two bits (Figure 10B). Rate coding shows the worst accuracy loss in both cases, while burst coding has the smallest loss when the bit width is below four bits.

Although digital implementations are widely used for neuromorphic systems, their performance is limited in current machine learning applications.

Analog computation paves the way to tera-operations-per-second-per-watt efficiency, which is × compared to digital implementations (Gil and Green). Various types of analog devices have been used to implement neural networks, such as CMOS transistors (Indiveri et al.). Despite their potential for analog computation, these devices suffer from many non-idealities that can limit performance, such as limited precision, programming variability, stuck-at-fault (SAF) defects, retention issues, and others (Fouda et al.).

In this work, we study the impact of two main types of synaptic variation induced by device non-idealities in analog hardware: synaptic noise and synaptic SAF defects. Synaptic noise is mainly induced when programming synaptic analog devices (Sheng et al.).

To test the noise resilience of the coding schemes, we assume that all synaptic weights are stored in synaptic devices, so the synaptic noise mostly results from programming noise when weight writes are performed. During training, we added Gaussian noise to each quantized weight update to model the effect of programming noise on the weights stored in synaptic devices; that is, each applied update is Δw̃ = Q(Δw) + η with η ~ N(0, σ²), where Q(·) is the quantization function and σ sets the noise level. The accuracy loss results on the MNIST dataset were obtained after 10 training epochs and are shown in Figure 11 for the different coding schemes.
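This noise model can be sketched as follows, reusing the stochastic-rounding sketch from earlier; the bit-width split and the value of σ are placeholders:

```python
import numpy as np

def noisy_update(delta_w, sigma, n_bits=8, frac_bits=6, rng=None):
    """Quantize a weight update with stochastic rounding, then add Gaussian
    programming noise to model imperfect writes to analog synaptic devices."""
    rng = np.random.default_rng() if rng is None else rng
    dw_q = stochastic_round(delta_w, n_bits, frac_bits)  # from the earlier sketch
    return dw_q + rng.normal(0.0, sigma, size=np.shape(delta_w))

W = np.zeros((784, 400))
dW = np.random.randn(*W.shape) * 1e-3
W += noisy_update(dW, sigma=0.002)   # each write injects fresh programming noise
```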

In the case of 12 bits, the noise has no adverse effect on the accuracy (Figure 11A): because the quantization resolution is fine at this precision, the added noise is correspondingly small. When the bit width is decreased from 12 bits to eight bits, the accuracy loss starts to grow with the noise level, as shown in Figures 11B–D.

In particular, at a bit width of eight bits, the network fails to learn the input features even when only a small amount of noise is added. Phase coding suffers the most severe accuracy loss, while TTFS coding shows the best resilience. This phenomenon can be explained by the number of weight updates each coding scheme generates.

Because of its high spike activity, phase coding causes the most updates during training, whereas TTFS coding causes the fewest. Since noise accompanies each update, TTFS coding is the least affected.

There is only a small overall performance difference between burst coding and rate coding because they require similar numbers of updates. Moreover, the impact of post-training programming noise was considered. In this case, the SNN was trained offline without any synaptic noise; the well-trained weights were then quantized with the SR method and mapped onto synaptic devices, with the programming noise added to the quantized weights during mapping. As expected, the impact of post-training synaptic noise is much smaller than that of synaptic noise during training.

When the bit width decreases from eight bits to one bit, the loss increases with the noise variation, as shown in Figures 12B–D. The error bars help to distinguish burst coding from the other coding schemes: burst coding shows the best resilience to the added post-training synaptic noise, while the differences among the other coding schemes are negligible. The largest loss difference increases from 1.

Figure 11. Accuracy loss on the MNIST dataset in the SNN with different coding schemes after adding programming noise to the quantized weight updates during training. The quantized bit width was varied from (A) 12 bits, (B) 11 bits, (C) 10 bits, to (D) eight bits.

Faulty devices are commonly encountered in analog computing systems for many reasons, such as fabrication process variations, spot defects, aging, mechanical stress, heavy device testing and utilization, etc. (Lewyn et al.; Vatajelu E.). We chose the synaptic SAF model in this study because it occurs very often in hardware, especially in the promising and newly emerging analog devices, and it has a profound impact on hardware performance (El-Sayed et al.).

A SAF device has its conductance fixed at either a high or a low conductance state. We applied the SAF defect model in our simulation to assess the degree of fault tolerance each coding scheme can provide.
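One way to inject such defects in simulation is sketched below; the defect rate, the 50/50 split between stuck-at-high and stuck-at-low devices, and the weight range are assumptions:

```python
import numpy as np

def inject_saf(weights, fault_rate, w_min, w_max, p_high=0.5, seed=None):
    """Randomly select a fraction of synapses and pin them to the low or high
    conductance extreme, modelling stuck-at-fault devices."""
    rng = np.random.default_rng(seed)
    faulty = rng.random(weights.shape) < fault_rate
    stuck_high = rng.random(weights.shape) < p_high
    out = weights.copy()
    out[faulty & stuck_high] = w_max
    out[faulty & ~stuck_high] = w_min
    return out, faulty

W = np.random.uniform(-1, 1, size=(784, 400))
W_faulty, fault_mask = inject_saf(W, fault_rate=0.05, w_min=-1.0, w_max=1.0, seed=0)
```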

This is mainly because the input patterns are very sparse, filled mostly with zeros. Rate coding has the worst synaptic fault tolerance during training. We also investigated the impact of the SAF defect on the coding schemes during inference. Since the input patterns in the Fashion dataset are more complex than those in the MNIST dataset, the impact of synaptic faults on the coding schemes becomes more obvious, and the difference among the schemes becomes more significant, as shown in Figure . These results further confirm that rate coding is the most susceptible to synaptic faults.

This could be because an SAF defect can occur in any device, so the faulty devices were selected at random before training started; this induces randomness in the weights and adds further uncertainty to rate coding, which encodes information in stochastic spike trains. Therefore, we can conclude that rate coding has the worst synaptic fault tolerance during training, while the other coding schemes perform similarly.
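For reference, a minimal rate-coding encoder of the kind described here, generating stochastic spike trains whose per-timestep firing probability tracks pixel intensity (a Bernoulli approximation of a Poisson train; the time window is arbitrary):

```python
import numpy as np

def rate_encode(pixel, n_steps=100, max_rate=1.0, rng=None):
    """Bernoulli/Poisson-style rate coding: a pixel in [0, 1] sets the
    per-timestep firing probability, producing a stochastic spike train."""
    rng = np.random.default_rng() if rng is None else rng
    return (rng.random(n_steps) < pixel * max_rate).astype(np.uint8)

spikes = rate_encode(0.7, n_steps=100)
print(spikes.sum(), "spikes out of 100 timesteps (expected ~70)")
```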

To provide a comprehensive comparison among the different coding schemes, we summarize their performance in 10 aspects for both the training and inference phases, as shown in Figures 15A and B. In Figure 15A, the latency refers to the effective training latency.

In the cases of pruning, quantization, input noise, synaptic noise, and synaptic fault, the average accuracy loss for each coding scheme was normalized and plotted for comparison.

A greater value in each dimension indicates better performance. For pruning and quantization, the average accuracy loss across the whole range was computed and normalized for each coding scheme. For synaptic noise during training, the average accuracy loss at a given bit width was computed and normalized; the results for eight bits and 11 bits could equally be used, since they show the same performance order among the schemes. For synaptic noise during inference, the average accuracy loss at the 1-bit width was used.

In the case of input noise, both noise type and noise variations were considered. To evaluate the overall resilience to different noise types, we used the average accuracy loss on all the noisy datasets for each coding scheme. For noise variations, the average loss was computed.

Then, the normalized loss for each coding scheme was obtained by averaging the two normalized loss values.

Figure 15. Quantitative comparisons among the different coding schemes from various aspects for (A) training and (B) inference.

In each dimension, the data were normalized with the min–max normalization method. In the cases of pruning, quantization, input noise, synaptic noise, and synaptic fault, the average accuracy loss for each coding scheme was used; the greater the value, the better. Tables 8 and 9 summarize the qualitative comparisons among the different coding schemes according to the results in Figure 15. The more check marks a coding scheme has in a category, the better its performance in that category.
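The min–max normalization and the "greater is better" orientation can be sketched as follows; the loss values in the example are purely illustrative, not the measured results:

```python
import numpy as np

def min_max_score(values, lower_is_better=True):
    """Min-max normalize one comparison dimension across coding schemes.

    If lower_is_better (e.g. accuracy loss, latency), invert the scale so
    that the best scheme gets 1 and the worst gets 0."""
    v = np.asarray(values, dtype=float)
    scaled = (v - v.min()) / (v.max() - v.min())
    return 1.0 - scaled if lower_is_better else scaled

# Hypothetical average accuracy losses (%) for rate, phase, burst, TTFS coding:
loss = [4.2, 1.1, 0.8, 2.5]
print(min_max_score(loss))   # burst -> 1.0 (best), rate -> 0.0 (worst)
```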

During the training phase, rate coding has good compression performance under pruning and good resilience to input noise, but it suffers from the lowest accuracy, the highest latency, the worst compression performance under quantization, and the worst fault tolerance. TTFS coding, in turn, performs well in terms of computational and hardware performance, but it has a large area, the worst compression effectiveness under pruning, the lowest resilience to input noise, and poor tolerance to synaptic faults. Phase coding shows the smallest area, good compression performance under quantization, the highest input-noise resilience, and good synaptic fault tolerance.

But it has the largest number of synaptic operations (SOPs), the highest power consumption, and the worst synaptic noise resilience. Burst coding shows the shortest latency, a small number of SOPs and low power consumption, the best compression performance under both pruning and quantization, and the best synaptic fault tolerance.

But it has the disadvantages of the largest area and poor noise resilience.

Table 8. Qualitative comparisons among the different coding schemes from various aspects for training.

Table 9. Qualitative comparisons among the different coding schemes from various aspects for inference.

During the inference phase, the differences among the schemes in all the dimensions on the left half of the chart become less significant; in the cases of pruning and synaptic faults, there is no difference at all. Area and power consumption are the same as in the training phase. Rate coding has good resilience to input noise but suffers from the highest latency and the worst compression performance under quantization.

TTFS coding still holds the same advantages, except that it now shows the worst synaptic noise resilience. The resilience of phase coding to synaptic noise becomes better than that of rate coding and TTFS coding, and for burst coding the compression performance under quantization becomes the best. Clearly, no coding scheme is perfect in all aspects; each has its advantages and drawbacks, and the choice of neural coding scheme depends on the constraints and considerations of the design.

This comparative analysis of different neural coding schemes shows how to select the best coding scheme in different scenarios. For example, if computational performance and hardware performance are the primary concern in the design, the best choice would be TTFS coding. If network performance is largely affected by input noise, the best choice would be phase coding. If network compression is the main consideration, the best choice would be burst coding.

If the network performance is largely limited by hardware non-idealities, the best choice would be burst coding. It is worth mentioning that, owing to the simplicity of rate coding, SNNs and neuromorphic hardware have mainly relied on it without investigating other coding techniques. Our study shows that the other coding schemes can outperform rate coding in many respects, which proves that rate coding is not always the best choice.

In previous work, different neural coding schemes were compared in terms of classification accuracy, latency, number of spikes, and energy during inference (Park et al.). That comparison revealed that TTFS coding outperformed the other coding schemes in classification and computational performance, an advantage that was expected because it uses precise timing and only one spike. Our work also demonstrates the excellent performance of TTFS coding during inference in terms of classification performance, computational performance, and power consumption.

Most importantly, we looked into the real-time applications of the neural coding schemes in neuromorphic systems and investigated their performance in various aspects, including the hardware implementation, effectiveness of network compression, noise resilience, and fault tolerance.

We discuss two examples of processing the temporal order of external events: the auditory location detection system in birds and the visual direction detection system in flies. We then discuss how somatosensory stimulus intensities are translated into a temporal order code in the human peripheral nervous system.

We next turn our attention to input order coding in the mammalian cortex. We review work demonstrating the capabilities of cortical neurons for detecting input order. We then discuss research refuting and demonstrating the representation of stimulus features in the cortex by means of input order. After some general theoretical considerations on input order detection and coding, we conclude by discussing the existing and potential use of input order coding in neuromorphic engineering.

The British empiricists John Locke, George Berkeley and David Hume proposed that all knowledge is based on sensory experience: vision, hearing, taste, smell and touch.

This led French philosopher Auguste Comte to argue that the study of behavior should be a subdiscipline within biology, and that the rules of operation of the mind should be derived from objective observation. They showed that the senses differ in their modes of reception, but involve the same three stages of processing: a physical stimulus, transformation of the stimulus into nerve impulses, and a response to this signal after perception or conscious experience.

Early sensory psychophysics studies by Weber and Fechner showed that sensory systems always transmit four basic types of information: modality, location, intensity and timing. Along with Helmholtz and von Frey, these psychophysicists used experimental data about the sensitivity of sensory systems to formulate mathematical laws that predict the relationship between stimulus magnitude and sensory discrimination.
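Two of the laws referred to here are Weber's law (the just-noticeable difference ΔI grows in proportion to the baseline intensity, ΔI = kI) and Fechner's law (perceived sensation grows with the logarithm of intensity). A small numerical illustration, with an arbitrarily chosen Weber fraction k:

```python
import numpy as np

WEBER_FRACTION = 0.1   # assumed value of k; it varies by modality and observer

def just_noticeable_difference(intensity, k=WEBER_FRACTION):
    """Weber's law: the smallest detectable change scales with intensity."""
    return k * intensity

def fechner_sensation(intensity, i0=1.0, k=WEBER_FRACTION):
    """Fechner's law: sensation grows with the log of intensity above threshold i0."""
    return np.log(intensity / i0) / k

for I in (10.0, 100.0, 1000.0):
    print(f"I={I:6.0f}  JND={just_noticeable_difference(I):6.1f}  "
          f"sensation={fechner_sensation(I):5.1f}")
```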

Sensory experience has four fundamental attributes that are encoded by specialized subcategories of neurons within the nervous system. Information regarding the site of stimulation on the body or the location of a stimulus in space, the size and shape of objects, and the fine detail of a stimulus or of the environment are crucially represented by the spatial arrangement of stimulated receptors in a sense organ.

The spatial area within which stimulation excites a sensory neuron is referred to as its receptive field. In vision and somatic sensation, this receptive field confers a specific topographic location to the sensory output of the corresponding sensory neuron. A receptor will only respond to stimulation within its receptive field, and a stimulus larger than a single receptive field will activate neighboring receptors.

Thus stimulus size affects the number of receptors that are stimulated. The resolution of a sensory system can be a function of its receptor density. Finer resolution of spatial detail is possible with denser receptor populations since each receptor will have a more restricted receptive field.

Receptor density, however, is not uniform throughout a sensory sheet. The resulting differences in the density of afferent inputs to the central nervous system lead to discrepancies in the topographic representation of the various body parts in central nervous system maps: densely innervated body parts occupy the largest areas, while sparsely innervated parts occupy the smallest. The receptors for hearing, taste and smell are spatially distributed according to the sensitivity, or energy spectrum, of these modalities.

Auditory receptors are organized according to the sound frequency to which they respond, while chemoreceptors on receptive surfaces in the nose and on the tongue are distributed based on their particular chemical sensitivity.

The intensity or amount of sensation experienced is a function of stimulus strength. The sensory threshold is the lowest stimulus strength a subject can detect, and it is usually determined statistically by exposing a subject to a series of stimuli of random amplitude. The percentage of trials in which the subject reports detecting the stimulus is plotted against stimulus amplitude, yielding a psychometric function. The sensory threshold is conventionally defined as the stimulus amplitude detected in half of the trials.
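This procedure can be sketched by fitting a sigmoid psychometric function to detection data and reading off the amplitude detected on half of the trials; the logistic form and the sample data below are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(amplitude, threshold, slope):
    """Logistic psychometric function: probability of detection vs amplitude."""
    return 1.0 / (1.0 + np.exp(-slope * (amplitude - threshold)))

# Hypothetical detection data: stimulus amplitudes and fraction of trials detected.
amps = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
detected = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.98])

(threshold, slope), _ = curve_fit(psychometric, amps, detected, p0=[1.5, 2.0])
print(f"sensory threshold (50% detection) ~ {threshold:.2f}")
```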

The sensitivity of receptors for a modality limits the sensory threshold; threshold energy is tied to the minimum stimulus amplitude that generates action potentials in a sensory nerve.


