<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Physics | Mahyar's world 🌏</title><link>https://mahyar-osanlouy.com/tag/physics/</link><atom:link href="https://mahyar-osanlouy.com/tag/physics/index.xml" rel="self" type="application/rss+xml"/><description>Physics</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Tue, 22 Apr 2025 00:00:00 +0000</lastBuildDate><image><url>https://mahyar-osanlouy.com/media/icon_hu35e4e9c9135f02752aab27d124db531b_75212_512x512_fill_lanczos_center_3.png</url><title>Physics</title><link>https://mahyar-osanlouy.com/tag/physics/</link></image><item><title>The Limitations of Machine Learning in Replacing Physical Laws: Expanding the Critique</title><link>https://mahyar-osanlouy.com/post/fallacy-ml-physics/</link><pubDate>Tue, 22 Apr 2025 00:00:00 +0000</pubDate><guid>https://mahyar-osanlouy.com/post/fallacy-ml-physics/</guid><description>&lt;p>Before diving into my analysis, I want to acknowledge the insightful &lt;a href="https://science-memo.blogspot.com/2021/04/on-fallacy-of-replacing-physical-laws.html" target="_blank" rel="noopener">blog by Mehmet Süzen&lt;/a>
that inspired this post, which eloquently discusses the fallacy of replacing physical laws with machine-learned inference
systems. Having read it, I felt compelled to share my own perspective and expand on these critical arguments with
additional examples from recent literature and research.&lt;/p>
&lt;h2 id="the-fundamental-problem-of-circular-reasoning">The Fundamental Problem of Circular Reasoning&lt;/h2>
&lt;p>The original blog brilliantly identifies the circular reasoning inherent in claiming that machine learning systems can
discover or replace physical laws. This point deserves further emphasis: when a neural network is trained on data
generated by known physical principles, it cannot be said to &amp;ldquo;discover&amp;rdquo; those same principles through inference.&lt;/p>
&lt;p>Consider recent work in fluid dynamics, where physics-informed neural networks (PINNs) have gained popularity.
The paper &amp;ldquo;&lt;a href="https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2020.00025/full" target="_blank" rel="noopener">Discovery of Physics From Data: Universal Laws and Discrepancies&lt;/a>&amp;rdquo;
highlights that &amp;ldquo;the naive application of ML/AI will generally be insufficient to infer universal physical laws without
further modification&amp;rdquo;.
The authors demonstrate this by examining falling objects, showing that measurement noise and secondary mechanisms
(like fluid drag) obscure the underlying law of gravitation, leading to erroneous models that might suggest an
Aristotelian theory where objects fall at speeds related to their mass, rather than recovering the true law that
gravitational acceleration is independent of mass.&lt;/p>
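&lt;p>The failure mode is easy to reproduce. Below is a toy sketch of my own (not the paper&amp;rsquo;s actual experiment): simulate spheres of different masses falling through air with quadratic drag, add measurement noise, and naively regress average fall speed on mass. The regression confidently reports that heavier objects fall faster.&lt;/p>

```python
import numpy as np

# Hypothetical setup: drop spheres of different masses through air, record the
# (noisy) time each takes to fall h = 10 m, then naively regress speed on mass.
rng = np.random.default_rng(0)

def fall_time(mass, h=10.0, c=0.3, dt=1e-3):
    """Integrate m*v' = m*g - c*v**2 (quadratic drag) until the object has fallen h metres."""
    g, v, y, t = 9.81, 0.0, 0.0, 0.0
    while h > y:
        v += (g - (c / mass) * v**2) * dt
        y += v * dt
        t += dt
    return t

masses = np.linspace(0.1, 2.0, 30)
times = np.array([fall_time(m) for m in masses])
times += rng.normal(scale=0.01, size=times.shape)  # measurement noise
speeds = 10.0 / times

# A naive linear fit "discovers" that fall speed increases with mass:
# an Aristotelian artifact of drag, not a universal law.
slope, intercept = np.polyfit(masses, speeds, 1)
print(f"fitted: speed = {slope:.2f} * mass + {intercept:.2f}")
```

&lt;p>Nothing in the data or the fit is wrong; the model has simply learned the drag-contaminated relationship instead of abstracting away from it.&lt;/p>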
&lt;p>This illustrates perfectly how ML systems trained on physical data will incorporate all the complexities and noise
present in that data, rather than abstracting to the elegant, universal laws that human scientists have carefully
identified through theoretical reasoning and controlled experimentation.&lt;/p>
&lt;h2 id="beyond-narrow-applications-the-generalization-problem">Beyond Narrow Applications: The Generalization Problem&lt;/h2>
&lt;p>The original blog correctly identifies the problem of faulty generalization. Machine learning algorithms excel at
computational acceleration within narrowly defined parameter spaces, but struggle with broader generalization.&lt;/p>
&lt;p>A fascinating discussion on &lt;a href="https://www.reddit.com/r/MachineLearning/comments/lvwt3l/d_some_interesting_observations_about_machine/" target="_blank" rel="noopener">Reddit&lt;/a>
highlights this limitation: &amp;ldquo;In addition, the &amp;lsquo;marginally-better SOTA&amp;rsquo;-esque papers with no novel methods or aspects
besides some parameter tuning or adding extra layers to the DNN are also tiring to read. The wall of math then
exists only to provide a sense of rigor and novelty, obscuring the iterative nature lacking novelty&amp;rdquo;.
This reflects how ML approaches in physics often claim breakthroughs that are actually just incremental improvements
in limited domains.&lt;/p>
&lt;p>Another illustrative example comes from the field of symbolic regression.
While the &lt;a href="https://journals.aps.org/prd/abstract/10.1103/PhysRevD.111.015022" target="_blank" rel="noopener">AbdusSalam et al. paper&lt;/a>
in Physical Review D demonstrates how symbolic regression can help derive analytical expressions for physics beyond the
Standard Model, the authors position it as a tool to assist numerical studies, not as a replacement for physical theory.
The expressions derived still rely on the underlying physics-based model (the constrained minimal supersymmetric
Standard Model) and serve primarily to accelerate computation, not to discover new physical laws.&lt;/p>
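&lt;p>To make that distinction concrete, here is a deliberately tiny sketch of my own (far simpler than anything in the paper): enumerate small sets of candidate terms, fit coefficients by least squares, and keep the simplest combination that explains the data. The output is a compact analytical expression, which is why such tools accelerate computation; but notice that the candidate library itself encodes prior physical assumptions.&lt;/p>

```python
import itertools
import numpy as np

# A toy stand-in for symbolic regression (nothing like the paper's actual setup):
# enumerate small sets of candidate terms, fit each set by least squares, and
# keep the first set that is both small and accurate.
rng = np.random.default_rng(2)

x = rng.uniform(-2.0, 2.0, size=200)
y = 1.5 * x**2 - 0.5 * x + rng.normal(scale=0.05, size=x.shape)

# Candidate library: choosing it is where the physics knowledge comes in.
library = {
    "1": np.ones_like(x),
    "x": x,
    "x^2": x**2,
    "x^3": x**3,
    "sin(x)": np.sin(x),
}

best = None
for k in range(1, 4):  # prefer fewer terms: try size-1 sets first
    for names in itertools.combinations(library, k):
        A = np.column_stack([library[n] for n in names])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        mse = np.mean((A @ coef - y) ** 2)
        if best is None or best[0] > mse:
            best = (mse, names, coef)
    if 0.01 > best[0]:  # good enough at this complexity; stop searching
        break

expr = " + ".join(f"{c:.2f}*{n}" for c, n in zip(best[2], best[1]))
print(f"recovered: y = {expr}")
```

&lt;p>The search recovers the quadratic because the quadratic was in the library; a library without it would have produced a plausible-looking but wrong expression.&lt;/p>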
&lt;h2 id="the-irreplaceable-role-of-scientists-in-establishing-causality">The Irreplaceable Role of Scientists in Establishing Causality&lt;/h2>
&lt;p>Perhaps the most important point from the original blog is that causality still requires scientists. Machine learning
excels at finding correlations but struggles with identifying true causal relationships.&lt;/p>
&lt;p>The Amazon Science blog on physics-constrained machine learning notes that &amp;ldquo;the predictions of deep-learning models
trained on physical data typically ignore fundamental physical principles. Such models might, for instance, violate
system conservation laws&amp;rdquo; (see &lt;a href="https://www.amazon.science/blog/physics-constrained-machine-learning-for-scientific-computing" target="_blank" rel="noopener">here&lt;/a>).
This highlights why human scientists remain essential - they understand that physical laws must adhere to conservation principles,
symmetries, and other fundamental constraints that ML systems don&amp;rsquo;t inherently respect.&lt;/p>
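&lt;p>This failure is easy to demonstrate in a toy setting of my own devising (not the Amazon example): fit an unconstrained linear one-step model to noisy trajectories of a harmonic oscillator, whose exact dynamics conserve energy, and roll it out. Nothing in the least-squares fit knows about energy, so the rollout drifts.&lt;/p>

```python
import numpy as np

# Toy demonstration: learn a linear one-step model of a unit-frequency
# harmonic oscillator from noisy data, then check whether the learned
# dynamics conserve the energy E = (x**2 + v**2) / 2.
rng = np.random.default_rng(0)

dt = 0.1
R = np.array([[np.cos(dt), np.sin(dt)],
              [-np.sin(dt), np.cos(dt)]])  # exact, energy-preserving map

# Training pairs (state, next state) with measurement noise.
states = rng.normal(size=(200, 2))
nexts = states @ R.T + rng.normal(scale=0.05, size=(200, 2))

# Unconstrained least-squares "learned" dynamics.
sol, *_ = np.linalg.lstsq(states, nexts, rcond=None)
A = sol.T  # one-step map: s_next = A @ s

# Roll out from a fixed initial condition and track the worst energy drift.
s = np.array([1.0, 0.0])
E0 = 0.5 * s @ s
drift = 0.0
for _ in range(2000):
    s = A @ s
    drift = max(drift, abs(0.5 * s @ s - E0) / E0)
print(f"max relative energy drift over 2000 steps: {drift:.1%}")
```

&lt;p>The exact rotation map keeps the energy constant forever; the learned map, differing from it only by a small fit error, either pumps energy in or bleeds it away, compounding over the rollout.&lt;/p>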
&lt;p>A conversation on &lt;a href="https://www.reddit.com/r/MachineLearning/comments/18mnl9f/d_i_dont_understand_why_physics_informed_neural/" target="_blank" rel="noopener">Reddit&lt;/a> about Physics Informed Neural Networks (PINNs) further illuminates this issue.
One commenter puts it precisely: &amp;ldquo;The point of including a physical loss function, in addition to a data-driven loss,
is to impose inductive bias into the training process&amp;rdquo;. This human-guided approach to incorporating physics into ML
demonstrates that we&amp;rsquo;re not replacing physics with ML, but rather using our understanding of physics to guide
ML - the exact opposite of what some overenthusiastic claims suggest.&lt;/p>
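&lt;p>That point can be illustrated without a neural network at all. The sketch below is a hypothetical linear-basis stand-in for a PINN: the &amp;ldquo;physics loss&amp;rdquo; is a penalty on the residual of the ODE y'(t) + y(t) = 0 at collocation points, and with sparse, noisy data it is this term that makes the fit hold beyond the training interval.&lt;/p>

```python
import numpy as np

# Hypothetical linear-basis stand-in for a PINN: model y(t) as a cubic
# polynomial and fit sparse noisy samples of exponential decay, with and
# without a penalty on the ODE residual y'(t) + y(t) = 0.
rng = np.random.default_rng(1)

def basis(t):
    t = np.asarray(t)
    return np.stack([np.ones_like(t), t, t**2, t**3], axis=-1)

def dbasis(t):
    t = np.asarray(t)
    return np.stack([np.zeros_like(t), np.ones_like(t), 2 * t, 3 * t**2], axis=-1)

# Sparse, noisy data on [0, 1]; we want the model to hold on [0, 2].
t_data = np.linspace(0.0, 1.0, 6)
y_data = np.exp(-t_data) + rng.normal(scale=0.15, size=t_data.shape)

# Plain least-squares fit: data loss only.
w_plain, *_ = np.linalg.lstsq(basis(t_data), y_data, rcond=None)

# "Physics-informed" fit: stack sqrt(lam)-weighted rows enforcing
# (dbasis + basis) @ w = 0 at collocation points, then solve jointly.
t_col = np.linspace(0.0, 2.0, 50)
lam = 1.0
A = np.vstack([basis(t_data), np.sqrt(lam) * (dbasis(t_col) + basis(t_col))])
b = np.concatenate([y_data, np.zeros(t_col.shape[0])])
w_pinn, *_ = np.linalg.lstsq(A, b, rcond=None)

# Compare both fits with the true solution on the wider interval.
t_test = np.linspace(0.0, 2.0, 100)
truth = np.exp(-t_test)
err_plain = np.sqrt(np.mean((basis(t_test) @ w_plain - truth) ** 2))
err_pinn = np.sqrt(np.mean((basis(t_test) @ w_pinn - truth) ** 2))
print(f"test RMSE, data loss only: {err_plain:.3f}; with physics loss: {err_pinn:.3f}")
```

&lt;p>Both loss terms here are quadratic, so the joint minimization reduces to one stacked least-squares solve; in a real PINN the same two terms are minimized by gradient descent over network weights. Either way, the physics term is a human-chosen inductive bias, not something the model discovered.&lt;/p>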
&lt;h2 id="the-scientific-machine-learning-fallacy-a-deeper-look">The Scientific Machine Learning Fallacy: A Deeper Look&lt;/h2>
&lt;p>The term &amp;ldquo;Scientific Machine Learning Fallacy&amp;rdquo;, coined in the original blog, deserves broader recognition.
Claims of &amp;ldquo;machine scientists&amp;rdquo; or &amp;ldquo;automated scientific discovery&amp;rdquo; fundamentally misunderstand the nature of
scientific inquiry.&lt;/p>
&lt;p>A recent &lt;a href="https://arxiv.org/abs/2403.02913" target="_blank" rel="noopener">paper&lt;/a> on &amp;ldquo;Scientific machine learning for closure models in multiscale problems&amp;rdquo; acknowledges that
&amp;ldquo;the generalizability and interpretability of learned models is a major issue that needs to be addressed further&amp;rdquo;.
This admission from researchers in the field underscores the gap between current ML capabilities and true scientific
discovery.&lt;/p>
&lt;p>The Conversation &lt;a href="https://theconversation.com/a-new-ai-scientist-can-write-science-papers-without-any-human-input-heres-why-thats-a-problem-237029" target="_blank" rel="noopener">article&lt;/a> about an &amp;ldquo;AI scientist&amp;rdquo; further reveals the limits of these approaches.
While Sakana AI Labs claims their system can &amp;ldquo;make scientific discoveries in the area of machine learning in a
fully automated way,&amp;rdquo; the article questions whether such a system can produce truly &amp;ldquo;interesting&amp;rdquo; scientific papers,
noting that &amp;ldquo;good science requires novelty&amp;rdquo;. The ability to generate papers that look like scientific literature
doesn&amp;rsquo;t equate to generating novel scientific insights or laws.&lt;/p>
&lt;h2 id="the-automl-misnomer-and-meta-scientific-work">The AutoML Misnomer and Meta-Scientific Work&lt;/h2>
&lt;p>I strongly agree with the original blog that &amp;ldquo;AutoML&amp;rdquo; is a misnomer in scientific contexts.
These systems don&amp;rsquo;t replace scientists but rather change the nature of scientific work.&lt;/p>
&lt;p>The &lt;a href="https://www.semanticscholar.org/paper/Combining-physical-modeling-and-machine-learning-of-Brus/41e8a7335a0541ac1cd41333c97b347b51220070" target="_blank" rel="noopener">paper&lt;/a> on &amp;ldquo;Combining physical modeling and machine learning for micro-scale modeling of a fuel cell electrode&amp;rdquo;
demonstrates this well. It describes a &amp;ldquo;comprehensive transition from white-box models, characterized by their
reliance on physical laws, to black-box models exemplified by neural networks&amp;rdquo;. Yet the core contribution isn&amp;rsquo;t
replacing physics but creating a &amp;ldquo;synergistic integration&amp;rdquo; where neural networks complement physical modeling.&lt;/p>
&lt;p>This represents what the original blog aptly calls &amp;ldquo;MetaML&amp;rdquo; - a transformation of scientific workflows rather than
a replacement of scientific thinking.&lt;/p>
&lt;h2 id="the-proper-role-augmentation-not-replacement">The Proper Role: Augmentation, Not Replacement&lt;/h2>
&lt;p>To conclude, I believe the most productive path forward is viewing machine learning as an augmentation to physical
sciences, not a replacement. The &lt;a href="https://www.semanticscholar.org/paper/Learning-physical-laws%3A-the-case-of-micron-size-in-Matei-Zhenirovskyy/02fad00443cb7f13834f19b69c225478f00602b1" target="_blank" rel="noopener">paper&lt;/a>
on &amp;ldquo;Learning physical laws: the case of micron size particles in dielectric fluid&amp;rdquo;
demonstrates this approach well, noting that &amp;ldquo;representation structure is key in learning generalizable models&amp;rdquo;.
The authors use &amp;ldquo;the port-Hamiltonian formalism as a high level model structure&amp;rdquo; that is
&amp;ldquo;continuously refined based on our understanding of the physical process.&amp;rdquo;
This integration of physics understanding with machine learning represents the right approach.&lt;/p>
&lt;p>Similarly, the &lt;a href="https://arxiv.org/abs/2402.16517" target="_blank" rel="noopener">work&lt;/a> on &amp;ldquo;Discovering Artificial Viscosity Models for Discontinuous Galerkin Approximation of
Conservation Laws&amp;rdquo; shows how physics-informed machine learning can automate the discovery of models - but within
a physics-informed framework, not replacing it.&lt;/p>
&lt;p>In summary, while machine learning offers powerful tools for scientific research, the fallacy of replacing physical
laws with learned models deserves continued critical attention. True scientific progress will come from the thoughtful
integration of machine learning with physical understanding, not from claims that ML can autonomously discover or
replace the fundamental laws of nature. The original blog&amp;rsquo;s warning about circular reasoning, faulty generalization,
and the continued need for human scientists remains prescient and worthy of expansion as these technologies continue
to develop.&lt;/p>
&lt;h2 id="human-thinking-not-machine-imitation">Human Thinking, Not Machine Imitation&lt;/h2>
&lt;p>This perspective reminds us that true scientific thinking involves not just pattern recognition and prediction,
but deep conceptual understanding that may not be reducible to computational processes. When we forget this,
we risk confusing the map (our mathematical models and computational simulations) with the territory
(physical reality itself).&lt;/p></description></item><item><title>The Hitchhiker’s Guide to Rebranding Machine Learning (And a Shoutout to Geoff Hinton!)</title><link>https://mahyar-osanlouy.com/post/ml-physics-nobel/</link><pubDate>Thu, 12 Dec 2024 00:00:00 +0000</pubDate><guid>https://mahyar-osanlouy.com/post/ml-physics-nobel/</guid><description>&lt;p>Ever feel like the world of machine learning is just a little too… techy?
Well, let&amp;rsquo;s give that jargon a fresh coat of paint; one with a physics twist.&lt;/p>
&lt;h3 id="welcome-to-my-hitchhikers-guide-to-rebranding-machine-learning-">Welcome to my Hitchhiker’s Guide to Rebranding Machine Learning! 🚀&lt;/h3>
&lt;p>Let’s face it: sometimes, explaining ML to your friends sounds like you’re reading from a sci-fi manual.
So, why not embrace it? Here’s my tongue-in-cheek translation table for the next time you want to sound like a
quantum physicist at a data science meetup:&lt;/p>
&lt;ul>
&lt;li>
&lt;h4 id="machine-learning-nah-lets-call-it-statistical-mechanics">Machine learning? Nah, let’s call it statistical mechanics.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="loss-function-thats-just-an-energy-functional">Loss function? That’s just an energy functional.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="optimize-the-model-were-really-just-minimizing-free-energy">Optimize the model? We’re really just minimizing free energy.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="trained-model-youve-simply-reached-equilibrium-distribution">Trained model? You’ve simply reached equilibrium distribution.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="kl-divergence-thats-the-free-energy-difference-obviously">KL divergence? That’s the free energy difference, obviously.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="gaussian-noise-those-are-just-random-thermal-fluctuations">Gaussian noise? Those are just random thermal fluctuations.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="random-step-welcome-to-brownian-motion">Random step? Welcome to Brownian motion.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="sgd-think-of-it-as-directional-brownian-motion">SGD? Think of it as directional Brownian motion.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="gpu-basically-a-simulated-particle-accelerator">GPU? Basically a simulated particle accelerator.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="diffusion-models-thats-langevin-dynamics-for-you">Diffusion models? That’s Langevin dynamics for you.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="llm-large-language-model-try-high-order-discrete-markov-chain-on-for-size">LLM (Large Language Model)? Try high-order discrete Markov chain on for size.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="nlp-lets-call-it-string-theory-because-why-not">NLP? Let’s call it &amp;ldquo;string theory&amp;rdquo; (because why not?).&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="reinforcement-learning-thats-just-control-theory">Reinforcement learning? That’s just control theory.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="robotics-its-all-physical-computation">Robotics? It’s all physical computation.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="audio-learning-1d-signal-processing">Audio learning? 1D signal processing.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="image-learning-2d-signal-processing">Image learning? 2D signal processing.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="video-learning-you-guessed-it-3d-signal-processing">Video learning? You guessed it: 3D signal processing.&lt;/h4>
&lt;/li>
&lt;li>
&lt;h4 id="multimodal-models-were-talking-multidimensional-signal-processing">Multimodal models? We’re talking multidimensional signal processing.&lt;/h4>
&lt;/li>
&lt;/ul>
&lt;p>You’re welcome. 😉&lt;/p>
&lt;h3 id="a-quick-geek-out-congrats-to-geoff-hinton">A Quick Geek-Out: Congrats to Geoff Hinton!&lt;/h3>
&lt;p>Before I wrap up, I have to give a huge shoutout to Geoff Hinton, who just became the second person ever to
win both the Turing Award and a Nobel Prize! (Herbert A. Simon was the first, winning in Economics.) Legends, both of them.&lt;/p>
&lt;p>So, the next time someone asks what you do,
feel free to say you’re working on &lt;strong>“minimizing free energy in high-order discrete Markov chains using
simulated particle accelerators.”&lt;/strong> If nothing else, you’ll sound like you’re saving the universe.&lt;/p></description></item></channel></rss>