Rapid Neural Network-based Autofocus Control for High-precision Imaging Systems

END-USE APPLICATIONS

  • High-speed microscope image acquisition
  • Real-time in-line inspection of manufactured micron- and nano-scale products
  • Tracking of fast-moving micron- and nano-scale targets

ADVANTAGES

  • Ultrafast high-precision autofocus
  • Reduced control complexity for high-precision imaging and vision systems
  • Lower hardware costs
  • Removes need for closed-loop controllers

TECHNOLOGY DESCRIPTION

As Industry 4.0 pushes the limits of micro- and nano-scale technologies, semiconductor, GPU, and robotics manufacturers are searching for ways to optimize their production lines while maintaining the highest level of quality. Visual inspection of these advanced micro- and nano-scale products requires remarkably high precision and control. The piezoelectric actuators used for metrology are burdened by nonlinearities that require slow and expensive internal closed-loop controllers to deliver sufficient precision to the imaging system. A UMass Amherst research team has developed a new control method that reduces the cost and complexity of high-precision imaging systems while still delivering rapid acquisition of clear, crisp images. The new method integrates the focus measurement and the troublesome nonlinear effects into a single learning-based model: a deep-learning control model evaluates focus from a short sequence of images to determine the optimal lens position. The technology leverages Long Short-Term Memory (LSTM) networks for their superior ability to draw inferences from learned time-sequence data. The method also employs an optimized backpropagation algorithm for efficiency, as well as a unique S-curve control input profile that minimizes motor jerk and the resulting image disturbance. This supports both rapid and stable dynamic lens transitions for a wide variety of imaging applications. Compared with leading autofocus technologies, the method demonstrates significant advantages in autofocus time.
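The S-curve control input profile mentioned above can be illustrated with a standard minimum-jerk trajectory. The sketch below is an assumption-laden illustration of the general idea, not the team's actual controller: it uses the classic normalized quintic polynomial, whose velocity and acceleration are zero at both endpoints, so the lens starts and stops smoothly instead of jerking.

```python
# Minimal sketch of an S-curve lens trajectory (hypothetical illustration,
# not the disclosed method). Uses the classic minimum-jerk quintic
# 10*tau^3 - 15*tau^4 + 6*tau^5, which has zero velocity and zero
# acceleration at both endpoints -- the smooth start/stop behavior that
# reduces motor jerk and motion blur in captured frames.

def s_curve(t: float, t_total: float, x_start: float, x_end: float) -> float:
    """Lens position at time t along a minimum-jerk S-curve from x_start to x_end."""
    tau = min(max(t / t_total, 0.0), 1.0)        # normalized time in [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5   # S-shaped blend, s(0)=0, s(1)=1
    return x_start + (x_end - x_start) * s

# Example: drive a lens from 0 um to 50 um over 10 ms, sampled every 1 ms.
profile = [s_curve(k * 0.001, 0.010, 0.0, 50.0) for k in range(11)]
```

Because the profile's acceleration vanishes at both ends, the commanded motion ramps up gradually rather than stepping, which is what suppresses the vibration and image artifacts a step input would excite.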

ABOUT THE LEAD INVENTOR

Dr. Xian Du is an Assistant Professor in the Department of Mechanical and Industrial Engineering and the Institute for Applied Life Sciences at the University of Massachusetts Amherst. His current research focuses on innovating high-resolution, large-area, high-speed machine vision and pattern recognition technologies for manufacturing and medical devices. His research interests include pattern recognition, intelligent imaging and vision, flexible electronics manufacturing, robotics, and medical device realization. Dr. Du received an NSF CAREER award in 2020. He is a member of the Optical Society of America (OSA).

AVAILABILITY:

Available for Licensing and/or Sponsored Research

DOCKET:

UMA 22-056

PATENT STATUS:

Patent Pending

NON-CONFIDENTIAL INVENTION DISCLOSURE

CONTACT:

Website

http://tto-umass-amherst.technologypublisher.com/tech/Rapid_Neural_Network-based_Autofocus_Control_for_High-precision_Imaging_Systems

Contact Information

TTO Home Page: http://tto-umass-amherst.technologypublisher.com

Name: Ling Shen

Title: Senior Licensing Officer

Department: Technology Transfer Office

Email: lxshen@research.umass.edu

Phone: 413-545-5276