Unraveling the Design Pattern of Physics-Informed Neural Networks: Part 07


Active learning for efficiently training parametric PINNs

Shuai Guo

Towards Data Science


Welcome to the 7th blog post of this series, where we continue our exciting journey of exploring design patterns of physics-informed neural networks (PINNs) 🙌

In this blog, we will take a closer look at a paper that introduces active learning to PINNs. As usual, we will examine the paper through the lens of design patterns: we will start with the target problem, followed by the proposed method. After that, we will discuss the evaluation procedure and the advantages and disadvantages of the proposed method. Finally, we will conclude the blog by exploring future opportunities.

As this series continues to expand, the collection of PINN design patterns grows even richer! Here’s a sneak peek at what awaits you:

PINN design pattern 01: Optimizing the residual point distribution

PINN design pattern 02: Dynamic solution interval expansion

PINN design pattern 03: Training PINN with gradient boosting

PINN design pattern 04: Gradient-enhanced PINN learning

PINN design pattern 05: Automated hyperparameter tuning

PINN design pattern 06: Causal PINN training

Let’s dive in!

  • Title: Active training of physics-informed neural networks to aggregate and interpolate parametric solutions to the Navier-Stokes equations
  • Authors: C. A. Arthurs, A. P. King
  • Institutes: King’s College London
  • Link: Journal of Computational Physics

2.1 Problem 🎯

One of the prime uses of PINNs is to serve as surrogates for high-fidelity, time-consuming numerical simulations (e.g., FEM simulations for structural dynamics). Thanks to the strong regularization enforced by the known governing differential equations (represented as an extra loss term), training a PINN typically requires only a minimal amount of data gathered from a handful of simulation runs.
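To make this concrete, below is a minimal PyTorch sketch of such a composite loss: a small data-fitting term on a handful of "simulation" samples plus a PDE residual term evaluated at collocation points. This is purely illustrative and not the paper's code; the 1D Poisson equation, the source term, and the network architecture are all assumptions chosen for brevity (the paper itself targets the parametric Navier-Stokes equations).

```python
import torch
import torch.nn as nn

# Illustrative PDE (an assumption, not the paper's): u''(x) = f(x) on [0, 1],
# with f chosen so that u(x) = sin(pi * x) is the exact solution.

class PINN(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return self.net(x)

def physics_residual(model, x):
    """PDE residual u''(x) - f(x) at collocation points x."""
    x = x.requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = -torch.pi**2 * torch.sin(torch.pi * x)  # assumed source term
    return d2u - f

model = PINN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# A handful of "simulation" samples (here, the analytical solution)
x_data = torch.rand(8, 1)
u_data = torch.sin(torch.pi * x_data)
x_colloc = torch.rand(128, 1)  # residual (collocation) points

for step in range(2000):
    optimizer.zero_grad()
    loss_data = ((model(x_data) - u_data) ** 2).mean()
    loss_pde = (physics_residual(model, x_colloc) ** 2).mean()
    # The physics term regularizes the fit, which is why so few
    # labeled samples suffice compared with a purely data-driven model.
    loss = loss_data + loss_pde
    loss.backward()
    optimizer.step()
```

The key design choice is that the residual term needs no labeled data at all, only collocation points where the PDE is enforced, which is precisely the property that makes each expensive simulation run go further.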


