KKT-HardNet Officially Released as Python Package

🔔 Exciting Updates on "KKT-HardNet" 🔔

KKT-HardNet, a general physics-constrained ML tool that combines data and domain knowledge for scientific machine learning (SciML), is now officially available as a Python package. We've significantly upgraded the framework with CUDA support and a more modular problem-construction pipeline, making it faster and easier to use than before.

Key improvements:
- 📈 Improved prediction accuracy and inference time
- ⏱️ Faster and more efficient training on both CPU and GPU
- 🧩 Modular design for flexible problem setup

If you've been using earlier versions, we strongly recommend switching to this optimized implementation.

📄 Paper: https://lnkd.in/gD7p7G6Z
💻 Code: https://lnkd.in/gzqrEVgf
📦 Package: https://lnkd.in/gNrDxF3t

⚙️ Install via pip:
CPU: pip install kkt-hardnet
GPU (CUDA 12): pip install "kkt-hardnet[cuda12]"

We'd love to hear your feedback. Feel free to reach out with questions or thoughts on the documentation and examples.

Bimol Nath Roy Rahul Golder Ashfaq Iftakher

#PhysicsConstrainedMachineLearning #PCML #ConstrainedLearning #MachineLearning #Optimization #DeepLearning #JAX #CUDA #Research #AI #Engineering #PSE


This is interesting. I developed something similar for SAMSUNG called PEGRANN (physics-enforced graph neural networks): low-fidelity surrogate models trained from high-fidelity physics-based simulators. These are mainly used for digital twins of thermo-fluid systems, where, despite the loss of fidelity, mass, energy, and momentum balances are still enforced by affinely projecting the graph NN outputs onto a linear system Ay = b. For example, the mass balance constraint for a single tee junction can be expressed as [1, 1, 1] x [port_a; port_b; port_c] = [0]. However, I found that for very large systems (A on the order of 50000 x 45000), solving the projection becomes very difficult: closed loops in the physical system translate to rank deficiency in A, and finding and addressing each singularity is simply not practical. Using larger Tikhonov regularization relaxes this somewhat, but as a consequence the physics constraints are no longer "strictly" enforced. I also got better results when I trained my PEGRANNs in two stages: first unconstrained, then with the affine projections. Stage-2 training usually converges to a lower loss than training in a single stage with the physics constraints. I would love to discuss more.
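For readers unfamiliar with the projection step described above, here is a minimal NumPy sketch of an affine projection of raw network outputs onto {y : Ay = b}, with an optional Tikhonov term for rank-deficient A. The function name and the toy tee-junction numbers are illustrative assumptions, not PEGRANN's actual implementation:

```python
import numpy as np

def affine_project(y, A, b, lam=0.0):
    """Project raw outputs y onto (or, if lam > 0, near) the set {y : A y = b}.

    Uses y* = y - A.T (A A.T + lam I)^{-1} (A y - b). With lam = 0 this is the
    exact Euclidean projection; lam > 0 is Tikhonov regularization, which keeps
    the solve well-posed for rank-deficient A but only approximately enforces
    the constraints (the trade-off noted in the comment above).
    """
    r = A @ y - b                              # constraint residual of raw prediction
    M = A @ A.T + lam * np.eye(A.shape[0])     # Gram matrix, optionally regularized
    return y - A.T @ np.linalg.solve(M, r)

# Mass balance at one tee junction: flows at the three ports must sum to zero.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([0.0])

y_raw = np.array([2.0, -0.7, -1.0])           # hypothetical unconstrained NN output
y_proj = affine_project(y_raw, A, b)          # exact: A @ y_proj == b
y_soft = affine_project(y_raw, A, b, lam=3.0) # regularized: residual shrinks but is nonzero
```

At the 50000 x 45000 scale mentioned above, one would of course replace the dense solve with a sparse or iterative one; the sketch only illustrates the algebra.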


