New data processing engine makes deep neural networks smarter

Artificial intelligence researchers at North Carolina State University (NC State) have improved the performance of deep neural networks by combining feature normalization and feature attention modules into a single module they call Attentive Normalization (AN). The hybrid module improves the accuracy of the networks significantly while using negligible additional processing power. Attentive Normalization was presented at the European Conference on Computer Vision (ECCV). The paper was co-authored by Xilai Li, a Ph.D. graduate of NC State, and Wei Sun, a Ph.D. student at NC State.

“Feature normalization is a crucial element of training deep neural networks, and feature attention is equally important for helping networks highlight which features learned from the raw data are most important for accomplishing a given task,” explains Tianfu Wu, assistant professor of electrical and computer engineering at NC State. “But they have mostly been treated separately. We found that combining them made them more effective and efficient.”
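To make the combination concrete, here is a minimal sketch in PyTorch of how such a hybrid module could be structured: a standard batch normalization step followed by a lightweight attention head that blends several learned scale-and-shift (affine) transformations per input. This is an illustration based on the description above, not the authors' released code; the class name AttentiveNorm2d and hyperparameters such as num_mixtures are assumptions.

```python
import torch
import torch.nn as nn

class AttentiveNorm2d(nn.Module):
    """Sketch: batch-normalize features, then apply an instance-specific
    blend of K learned affine (scale/shift) transformations."""
    def __init__(self, num_channels: int, num_mixtures: int = 5):
        super().__init__()
        # Standardization step: plain BN without its own affine parameters.
        self.bn = nn.BatchNorm2d(num_channels, affine=False)
        # K candidate scale/shift pairs (the mixture of affine transforms).
        self.gamma = nn.Parameter(torch.ones(num_mixtures, num_channels))
        self.beta = nn.Parameter(torch.zeros(num_mixtures, num_channels))
        # Lightweight attention head: global pooling plus a linear layer
        # produces per-example weights over the K mixtures.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(num_channels, num_mixtures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        normalized = self.bn(x)                                     # (N, C, H, W)
        weights = torch.sigmoid(self.fc(self.pool(x).flatten(1)))   # (N, K)
        # Blend the K affine transforms into one per-example scale and shift.
        gamma = (weights @ self.gamma).unsqueeze(-1).unsqueeze(-1)  # (N, C, 1, 1)
        beta = (weights @ self.beta).unsqueeze(-1).unsqueeze(-1)
        return gamma * normalized + beta
```

Because the attention weights depend on the input, each example receives its own affine re-calibration of the normalized features, which is what ties the normalization and attention steps into a single module.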

To test their AN module, the researchers plugged it into four of the most widely used neural network architectures: ResNets, DenseNets, MobileNetsV2, and AOGNets. They then tested the networks against two industry-standard benchmarks: the ImageNet-1000 classification benchmark and the MS-COCO 2017 object detection and instance segmentation benchmark.
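As a rough illustration of what plugging the module into an existing architecture might look like, the snippet below recursively swaps every BatchNorm2d layer of a torchvision ResNet-50 for the AttentiveNorm2d sketch above. The helper swap_bn_for_an is hypothetical, and the authors' actual integration may differ.

```python
import torch.nn as nn
from torchvision.models import resnet50

def swap_bn_for_an(module: nn.Module) -> None:
    # Walk the model tree and replace each BatchNorm2d in place.
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, AttentiveNorm2d(child.num_features))
        else:
            swap_bn_for_an(child)

model = resnet50()
swap_bn_for_an(model)  # the modified network can then be trained as usual
```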

“We found that AN improved performance for all four architectures on both benchmarks,” Wu said. “For example, Top-1 accuracy on ImageNet-1000 improved by 0.5% to 2.7%. Average Precision (AP) accuracy increased by up to 1.8% for the bounding box and 2.2% for the semantic mask in MS-COCO. Another advantage of AN is that it facilitates better transfer learning between different domains. For example, from image classification on ImageNet to object detection and semantic segmentation on MS-COCO. This is illustrated by the performance improvement on the MS-COCO benchmark, which was obtained by fine-tuning deep neural networks pretrained on ImageNet and then evaluating them on MS-COCO.”
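The transfer-learning recipe Wu describes, pretraining a backbone on ImageNet classification and then fine-tuning on MS-COCO, might look roughly like the following with torchvision's Mask R-CNN. The weight identifier and hyperparameters are placeholders, not values from the paper.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# The backbone is initialized from ImageNet classification pretraining;
# the detection and mask heads start fresh and are fine-tuned on MS-COCO.
model = maskrcnn_resnet50_fpn(weights_backbone="IMAGENET1K_V1", num_classes=91)
optimizer = torch.optim.SGD(model.parameters(), lr=0.02,
                            momentum=0.9, weight_decay=1e-4)
# A standard fine-tuning loop would then iterate over MS-COCO batches,
# sum the detector's loss dict, and step the optimizer.
```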

“We have released the source code, and we hope our AN will lead to better integrative design of deep neural networks,” the researchers conclude.
