Researchers at the University of California, Santa Barbara have developed an end-to-end tensor-compressed technique for training physics-informed neural networks (PINNs) on edge devices, combining a tensor-train (TT) compressed model representation with a PINN to approximate the solutions of partial differential equations (PDEs). This work marks the first application of TT decomposition to PINNs: the large weight matrix in each hidden layer is replaced by several small 3-way tensors (TT-cores), drastically reducing the number of trainable parameters and enabling PINN training on edge devices. The technology, called TT-PINN, not only makes training affordable on edge devices but also retains the expressive power of a larger PINN. Experimental results show that TT-PINNs significantly outperform PINNs of similar or larger size while using far fewer parameters, and achieve comparable prediction accuracy with models 15x smaller. The reduction in network size is expected to be even greater for large PINNs.
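The parameter reduction described above can be illustrated with a minimal NumPy sketch: a hidden layer's weight matrix is reshaped into a higher-order tensor and represented by small 3-way TT-cores, which are contracted back only for reference. The mode shapes, TT-rank of 2, and function names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def make_tt_cores(modes, rank, seed=0):
    """Random 3-way TT-cores G_k of shape (r_{k-1}, m_k, r_k),
    with boundary ranks r_0 = r_d = 1 (illustrative initialization)."""
    rng = np.random.default_rng(seed)
    ranks = [1] + [rank] * (len(modes) - 1) + [1]
    return [rng.standard_normal((ranks[k], m, ranks[k + 1])) * 0.1
            for k, m in enumerate(modes)]

def tt_contract(cores):
    """Contract the TT-cores back into the full tensor (for reference only;
    training would keep the weights in factored TT form)."""
    out = cores[0]
    for G in cores[1:]:
        # Absorb each core over the shared TT-rank dimension.
        out = np.tensordot(out, G, axes=([out.ndim - 1], [0]))
    # Drop the size-1 boundary rank dimensions.
    return out.squeeze(axis=(0, out.ndim - 1))

# A 128x128 hidden-layer weight matrix, viewed as a 4-way tensor with
# modes (8, 16, 16, 8) -- an assumed, illustrative factorization.
modes = (8, 16, 16, 8)
cores = make_tt_cores(modes, rank=2)
W = tt_contract(cores).reshape(128, 128)

tt_params = sum(c.size for c in cores)   # 16 + 64 + 64 + 16 = 160
full_params = 128 * 128                  # 16384 in the dense layer

x = np.ones((1, 128))
y = x @ W                                # forward pass through the layer
```

Even at this small layer size, the TT representation stores roughly 100x fewer trainable parameters than the dense matrix; the gap widens as layers grow, which is consistent with the larger savings expected for bigger PINNs.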
- Enables PINN training on edge devices by drastically reducing training requirements
- Matches the expressive power of a larger PINN
- Outperforms PINNs of similar or larger sizes and achieves comparably accurate predictions with 15x smaller models
- AI and Neural Networks
- Medical Imaging
- Electronic Design Automation
- Fluid Dynamics
- Digital Twins
- Autonomous Systems
- Multi-Agent Robots
- Safety-Aware Learning-Based Verification
Name: Mary Raven