Reducing Computational Complexity of Training Algorithms for Artificial Neural Networks

Researchers in UCLA's Department of Chemistry and Biochemistry have developed a novel mathematical theorem for rapidly training large-scale artificial neural networks (ANNs). Their algorithm prevents the exponential growth of computational cost as the ANN's size increases. As a proof of concept, ANNs were trained on a variety of benchmark applications using steepest descent, standard second-order methods, other state-of-the-art methods, and this novel method.
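For readers unfamiliar with the baselines named above, the sketch below illustrates plain steepest descent, the simplest of the comparison methods: repeatedly stepping a parameter opposite its loss gradient. It fits a single linear neuron to toy data and is purely illustrative; it is not the UCLA algorithm, whose details are not given in this summary.

```python
# Illustrative sketch of steepest descent, one of the baseline training
# methods mentioned above (NOT the novel UCLA algorithm). A single linear
# neuron y = w * x is fit to toy data by following the negative gradient
# of the mean squared error.

def loss(w, data):
    # Mean squared error of the neuron's prediction w * x.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(w, data):
    # Analytic gradient of the loss with respect to w.
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def steepest_descent(data, w=0.0, lr=0.1, steps=100):
    # Each iteration steps opposite the gradient, the direction of
    # steepest decrease of the loss.
    for _ in range(steps):
        w -= lr * grad(w, data)
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # generated from y = 2x
w_fit = steepest_descent(data)
print(round(w_fit, 3))  # converges toward 2.0
```

Second-order methods replace the fixed learning rate with curvature information (e.g. a Hessian or an approximation of it), which converges in fewer steps but at a much higher per-step cost on large networks; that trade-off is the motivation for faster training algorithms like the one described here.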

The algorithm consistently performed training at least 10x faster than the other methods. This increased efficiency enables the training of networks with more neurons and greater complexity than is currently possible with existing training algorithms and computational technology.

Summary

Researchers at UCLA have developed a novel mathematical theorem to revolutionize the training of large-scale artificial neural networks (ANNs).

https://techtransfer.universityofcalifornia.edu/NCD/28860.html?utm_source=AUTMGTP&utm_medium=webpage&utm_term=ncdid_28860&utm_campaign=TechWebsites

Advantages

  • Faster training times (10x or more)
  • Reduced computational cost  
  • Can train larger and more complex artificial neural networks  
  • Can be used with current typical computing technology
  • Allows ANNs to become more widely applicable

Potential Applications

Any application that uses artificial neural networks:

  • Face & speech recognition
  • Autonomous driving
  • Diagnostics using electronic biomedical data
  • Data mining
  • Biometric security
  • Financial forecasting
  • Predictive coding

Contact Information

Name: UCLA Technology Development Group

Email: ncd@tdg.ucla.edu

Phone: 310.794.0558