Zat Rana

    Why Science Is Wrong

    by Zat Rana

    https://medium.com/personal-growth/why-science-is-wrong-b710270cbb9c

    In 1894, Albert Michelson predicted that there were no discoveries left to be made in physics.

    He’s remembered as the first American to win the Nobel Prize in the field, and he wasn’t the only one to think so. This view wasn’t uncommon among scientists at the time.

    In the 500 years prior, spectacular advances had been made all around. Great minds like Copernicus, Kepler, Galileo, Newton, Faraday, and Maxwell had inspired new paradigms, and it appeared that, quite suddenly, we had a precise foundation for the laws of nature.

    There was no doubt that we would continue to make progress, but it did appear that our calculations and theories were accurate enough that nothing substantially new remained to be discovered.

    And then everything changed. About a decade after that prediction, in 1905, an unknown man working as a patent clerk in Switzerland published what we now know as the Annus mirabilis papers. These four papers are among the most influential scientific articles ever written.

    They answered questions we didn’t even realize we had, and they introduced many new ones.

    They completely warped our view of space, time, mass, and energy, and they would later go on to provide the foundation for many of the revolutionary ideas formulated during the next half-century. The seeds for the Theory of General Relativity and Quantum Mechanics — the two pillars of modern physics — were planted the day those papers made it into publication.

    Within a year, Albert Einstein had completely shifted our entire understanding of the universe.

    Everything Is an Approximation

    At any given point in history, the majority of people have thought that they had it figured out.

    By definition, when we label something a law or a theory, we are assigning a boundary to our knowledge. Once that boundary becomes part of our lives, and once it’s ingrained in us that this is what is true, it isn’t hard to see how we end up narrowing our assumptions.

    If you took somebody from the 17th century and told them that, one day, we would be able to fly, that space and time are deeply intertwined, and that a cell phone can do what it can do, they would almost certainly not take you seriously.

    The beauty and the curse of human knowledge are that it often doesn’t have to be completely right to be useful. That’s why, if it works, it’s hard for us to see why and how it might be wrong.

    For example, when Einstein finalized the Theory of General Relativity, it superseded much of Newton’s work by painting a more accurate picture of what was actually going on. Even so, Newton’s laws remain highly usable and relevant for most everyday purposes.

    Over time, we get closer and closer to the truth by being less wrong. Our understanding of the world will likely never be completely right; there is simply too much complexity.

    There is a chance that even the Theory of General Relativity and the theory of evolution will one day look as elementary to our successors as some of Newton’s work now looks to us.

    Science is always wrong, and assigning boundaries to what we think we know is how we limit the possibility of an advancing future. It’s worth being careful about how you define truth.

    The Limits of Laboratories

    Most of the time, the uncertainty of the scientific method is a strength. It’s how we self-correct.

    That said, outside of hard physics and chemistry, this same strength is also a vice. This is particularly the case when it comes to economics, psychology, and the behavioral sciences.

    These fields tend to observe behavior that is judged subjectively, and that leaves room for a lot of human error. In 2005, the Stanford professor John Ioannidis published a paper called Why Most Published Research Findings Are False, and one of its estimates was that about 80 percent of small, non-randomized studies turn out to be wrong.

    Given that most research falls into this category and that the media sensationalizes any study that produces a good headline, it’s pretty evident why this is a problem. More recently, a replication crisis has called many long-held findings into question.

    Researchers have their own self-interests to look out for, and even when they don’t, there are so many variables that can sway an observation one way or another that a single study is a shaky foundation for a worldview. Replicability matters.

    To add to that, there is another less-talked-about caveat that comes with most research.

    An experiment in a lab will never fully be able to recreate the conditions that arise in the complex and dynamic systems of the world. Reality is far messier than anything we can design.

    Many experiments are either conducted in closed systems that don’t reflect the world or rely on faulty models of complex phenomena. Much of academia still underestimates how slight differences in initial conditions can lead to massive deviations in outcome.

    Contrary to popular belief, science has its limitations, and we should be aware of them.

    All You Need to Know

    The scientific method is one of the most powerful tools humankind has ever invented.

    It has directly and indirectly been responsible for guiding the advances we have seen in technology, and it has arguably saved more lives than any other human mechanism to date.

    It’s a self-correcting process that has given us abilities that would have been treated like something out of a science-fiction movie only a few decades ago. The future we live in today is one that, throughout history, would have been inconceivable. We have come a long way.

    That said, the scientific method is only as useful as our understanding of it. Like anything, if you apply it outside its proper domain, it loses its value.

    It’s essential, for example, to acknowledge that science is an approximation. Many of the laws and theories that we hold to be true could very well be proven quite wrong in the future. We are nowhere near the end of the road of discovery, and truth remains elusive.

    To add to that, outside of a few core sciences, a lot of the research is relatively weak. In psychology and the behavioral sciences it’s difficult to keep human bias out of our observations, and we also have to be careful about how we interpret results.

    Using science to support and guide our efforts to better understand the world and ourselves is critical. It’s the best we have. That said, it’s important to look at the whole picture.

    Science is indeed wrong, but if we know how and why, we can use it to its full potential.
