Issue No. 11 - Nov. 2013 (vol. 35)
pp. 2706-2719
J. A. Perez-Carrasco, Dept. of Signal Theory and Communications, Univ. of Sevilla, Sevilla, Spain
Bo Zhao, Sch. of Electr. & Electron. Eng., Nanyang Technol. Univ., Singapore
C. Serrano, Dept. of Signal Theory and Communications, Univ. of Sevilla, Sevilla, Spain
B. Acha, Dept. of Signal Theory and Communications, Univ. of Sevilla, Sevilla, Spain
T. Serrano-Gotarredona, Inst. de Microelectron. de Sevilla, Sevilla, Spain
Shoushun Chen, Sch. of Electr. & Electron. Eng., Nanyang Technol. Univ., Singapore
B. Linares-Barranco, Inst. de Microelectron. de Sevilla, Sevilla, Spain
ABSTRACT
Event-driven visual sensors have attracted interest from a number of different research communities. They provide visual information in quite a different way from conventional video systems consisting of sequences of still images rendered at a given "frame rate." Event-driven vision sensors take inspiration from biology. Each pixel sends out an event (spike) when it senses something meaningful is happening, without any notion of a frame. A special type of event-driven sensor is the so-called dynamic vision sensor (DVS) where each pixel computes relative changes of light or "temporal contrast." The sensor output consists of a continuous flow of pixel events that represent the moving objects in the scene. Pixel events become available with microsecond delays with respect to "reality." These events can be processed "as they flow" by a cascade of event (convolution) processors. As a result, input and output event flows are practically coincident in time, and objects can be recognized as soon as the sensor provides enough meaningful events. In this paper, we present a methodology for mapping from a properly trained neural network in a conventional frame-driven representation to an event-driven representation. The method is illustrated by studying event-driven convolutional neural networks (ConvNet) trained to recognize rotating human silhouettes or high speed poker card symbols. The event-driven ConvNet is fed with recordings obtained from a real DVS camera. The event-driven ConvNet is simulated with a dedicated event-driven simulator and consists of a number of event-driven processing modules, the characteristics of which are obtained from individually manufactured hardware modules.
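To make the event-driven processing idea concrete, the following is a minimal sketch (not the authors' implementation) of one event-driven convolution stage: each incoming DVS event projects a kernel onto a grid of integrate-and-fire neuron states, and any neuron crossing threshold immediately emits an output event and resets. The class name, the linear leak, the single positive threshold, and the reset-to-zero scheme are illustrative assumptions; the paper's hardware modules have their own characterized dynamics.

import numpy as np

class EventConvModule:
    # Sketch of an event-driven convolution stage with integrate-and-fire neurons.
    # Parameters (threshold, leak_rate) are assumed values for illustration only.
    def __init__(self, shape, kernel, threshold=1.0, leak_rate=0.1):
        self.state = np.zeros(shape)      # membrane potentials, one per output pixel
        self.kernel = kernel              # 2D convolution kernel
        self.threshold = threshold        # firing threshold (assumed)
        self.leak_rate = leak_rate        # linear leak toward zero, units per second (assumed)
        self.last_t = 0.0                 # timestamp of the last processed event (seconds)

    def process_event(self, x, y, t, polarity):
        # Leak all neurons toward zero for the time elapsed since the last event.
        dt = t - self.last_t
        decay = np.minimum(np.abs(self.state), self.leak_rate * dt)
        self.state -= np.sign(self.state) * decay
        self.last_t = t

        # Add the kernel, centered on the event address, to the neuron states.
        kh, kw = self.kernel.shape
        h, w = self.state.shape
        x0, y0 = x - kh // 2, y - kw // 2
        for i in range(kh):
            for j in range(kw):
                xi, yj = x0 + i, y0 + j
                if 0 <= xi < h and 0 <= yj < w:
                    self.state[xi, yj] += polarity * self.kernel[i, j]

        # Neurons above threshold emit an output event immediately and reset.
        out_events = []
        for xi, yj in np.argwhere(self.state >= self.threshold):
            out_events.append((int(xi), int(yj), t))
            self.state[xi, yj] = 0.0
        return out_events

# Example use (hypothetical event stream): feed DVS events (x, y, t, polarity)
# through one stage; output events can be chained into further stages "as they flow".
#   module = EventConvModule((128, 128), kernel=np.ones((3, 3)) / 9.0)
#   for (x, y, t, p) in dvs_events:
#       out = module.process_event(x, y, t, p)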
INDEX TERMS
Neurons, sensors, voltage control, visualization, feature extraction, neural networks, dynamic range, high-speed vision, convolutional neural networks, object recognition, spiking neural networks, event-driven neural networks, bioinspired vision
CITATION
J. A. Perez-Carrasco, Bo Zhao, C. Serrano, B. Acha, T. Serrano-Gotarredona, Shoushun Chen, B. Linares-Barranco, "Mapping from Frame-Driven to Frame-Free Event-Driven Vision Systems by Low-Rate Rate Coding and Coincidence Processing--Application to Feedforward ConvNets", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 35, no. 11, pp. 2706-2719, Nov. 2013, doi:10.1109/TPAMI.2013.71