The Riemann Hypothesis, String Theory, and the Scientific/Mathematical obsession with absolute truth

If I tried to submit a paper like this to an actual scientific journal, I would be laughed out of the room, first for not following protocol, and second for not being one of them. Still, this is how it looks to a fledgling student of "science studies" watching the debates currently going on in physics and mathematics: I am dumbfounded that ideas and theories that are entirely plausible still receive skepticism because they aren't proven "indubitably and undeniably true". Let's enter into a discussion about scientific convention by analyzing two current scientific debates, one in physics and one in mathematics: string theory and the Riemann hypothesis.

Now I've already written about the string theory debate previously (see my earlier posts), so let's tackle the Riemann hypothesis. A brief synopsis:

The Riemann hypothesis, called by some the "most important unsolved problem in mathematics", concerns a function of complex numbers (numbers with real and imaginary parts) called the Riemann zeta function. Bernhard Riemann was one of the most famous mathematicians; he essentially invented the geometry of curved, higher-dimensional spaces (non-Euclidean geometry) that laid the foundation for Einstein's general relativity. If what Riemann conjectured about the function is correct, that all the non-trivial zeros of the function lie on a critical line 1/2 + it and not off that line, then the hypothesis is correct, and the function can be used to predict the distribution of prime numbers. It is the distribution of primes that makes this function useful and interesting, not that it is evaluated using real and imaginary numbers; that is a ubiquitous feature of complex analysis. The function is also interesting in itself, in that it is a convergent series (for s > 1). I'll show you what I mean:

\[
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} = \frac{1}{1^{s}} + \frac{1}{2^{s}} + \frac{1}{3^{s}} + \cdots
\]

This is the actual Riemann zeta function. If you plug in 2 for s, you get 1/1² + 1/2² + 1/3², etc., i.e. the perfectly understandable 1 + 1/4 + 1/9 + …, the sum of the reciprocals of the squares. Here's the interesting part (there are actually many interesting parts of the Riemann zeta function, and you can go down the rabbit hole with how interesting it is). At s = 2, the series converges to something astonishing:

\[
\zeta(2) = 1 + \frac{1}{2^{2}} + \frac{1}{3^{2}} + \cdots = \frac{\pi^{2}}{6} = 1.6449\ldots
\]

pi^2/6!! Where did pi come from? What does this function have to do with a circle? And that’s not all!
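You don't have to take this on faith. Here is a minimal sketch in Python (my own illustration, not from any particular library) showing the partial sums of the series creeping toward π²/6:

```python
import math

def zeta_partial(s, terms=100_000):
    """Partial sum of the zeta series 1/1^s + 1/2^s + ... for real s > 1."""
    return sum(1 / n**s for n in range(1, terms + 1))

print(zeta_partial(2))   # creeps up toward 1.6449...
print(math.pi**2 / 6)    # 1.6449340668...
```

With 100,000 terms the partial sum already agrees with π²/6 to four decimal places.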

So to recap: the function is just a function; it exists by definition. Riemann's hypothesis is that all the zeros of the function (except the "trivial" ones on the real axis) lie on a single critical vertical line when the function is plotted over the complex numbers, with the imaginary part given by the y dimension.
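Stated compactly (my own restatement, using the standard convention that the trivial zeros are the ones at the negative even integers), the hypothesis says:

\[
\zeta(s) = 0 \quad\text{with}\quad 0 < \operatorname{Re}(s) < 1 \;\implies\; \operatorname{Re}(s) = \tfrac{1}{2}.
\]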

Ok, so what does all this math mean? It means that Riemann came up with an astonishing hypothesis that shines a light on something fundamental in mathematics: prime numbers, which seem to be randomly distributed, but whose distribution, if the hypothesis holds, can be predicted with a formula. The only problem? The hypothesis isn't proven.
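The bridge between the zeta function and the primes is Euler's product formula: ζ(s) equals the product of 1/(1 − p⁻ˢ) over all primes p. A small Python sketch (again my own illustration, under the assumption that a truncated product is good enough to see the pattern) shows the product over primes closing in on the same π²/6 value at s = 2:

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def euler_product(s, limit=10_000):
    """Truncated Euler product of zeta over primes up to `limit`."""
    prod = 1.0
    for p in primes_up_to(limit):
        prod *= 1.0 / (1.0 - p**(-s))
    return prod

print(euler_product(2))   # also approaches pi^2 / 6 = 1.6449...
```

The primes alone, multiplied together this way, reproduce the same number the full series gives: that identity is exactly why the zeros of ζ encode information about the primes.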

Why? That's a good question I'm still trying to figure out. The best we've come up with is that we haven't been able to disprove it. There is as yet no logical/mathematical proof of the Riemann hypothesis. The hypothesis rests on the claim that there are no non-trivial zeros off a certain line, and despite having computed BILLIONS of zeros (actually over 10 trillion) using supercomputers, every zero found so far lies on the line!!

So mathematicians, even when there are "probabilistic" arguments for the hypothesis (such as Denjoy's), still have to say "the jury is out", because math is not a science of probability but requires, in the words of basically every mathematician, "absolute knowledge". Results like the weak Goldbach conjecture, which was "first proved using the generalized Riemann hypothesis" and only later proved unconditionally, are too indirect a line of evidence for mathematical minds.

My argument to the mathematicians: if we can't do brute calculations to infinity, no matter how many computers we have, maybe it's time to call a spade a spade. The "consensus of survey articles" is that it is probably true. That sounds good enough to me.

Maybe mathematics should take a page from quantum theory and accept imperfect knowledge, à la the uncertainty principle. I've watched many talks now about how the cutting edge of mathematics is coming from physics. It sounds like mathematics needs to import some of that physics "can-do" mindset and drop the Platonism.

When it comes to string theory, the "indirect" evidence is also very strong, almost implied by some observed phenomena in particle physics. It doesn't take a genius to understand this; even a layman can sift through the morass to find the answers. String theory is probably correct, and so is the Riemann hypothesis. I'd bet money on it.

A definitive proof of the Riemann hypothesis earns a prize of one million dollars from a certain institute. This is proof of the value, the actual monetary value, placed on absolute proof in the field of mathematics. In a sense, it is what everything in the field is based on: geometric proofs, for example, or just the simple fact that you get one answer to a math problem. 2 + 3 = 5, dammit, and nothing else! Now, you can get two values for a particular equation, but that equation still has One answer. What I'm saying doesn't contradict this grade-school logic. All I'm asking is that alternative lines of proof, including mathematically rigorous lines of evidence from a "probabilistic" perspective, be given credit. It seems that in these debates, something is always left unsaid to the general public. What is left out for the Riemann hypothesis is "it's basically been proven already". What's been left out for string theory is "we already have a theory of quantum gravity, and it doesn't even require string theory", etc. More on that later.

For now, just realize that these "definitive proofs" we lack for unsolved problems in physics and mathematics have many dimensions to them. To me it is more indicative of a cultural issue: an obsession with Absolute Truth, and a refusal to be satisfied with relative truth. Maybe we can go ahead and say that, relatively, the Riemann hypothesis should be assumed correct. We already know that primes aren't randomly distributed; they make spirals and diagonal lines when you chart them:

[Image: Ulam spiral of the primes]

The black dots represent the primes. If this doesn’t represent proof that they aren’t random, then call me Ishmael.
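One structural fact behind those diagonal lines is easy to check yourself: every prime greater than 3 leaves remainder 1 or 5 when divided by 6, since anything else would be divisible by 2 or 3. A quick sketch in Python (my own, for illustration):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

# Which remainders mod 6 do primes above 3 actually land on?
residues = {p % 6 for p in range(5, 10_000) if is_prime(p)}
print(residues)   # primes > 3 occupy only two of the six possible lanes
```

That restriction alone rules out four of the six "lanes" the primes could occupy, which is one reason the dots line up instead of scattering uniformly.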

Edit: sorry for the typos before, I wrote this late at night


Capitalism and Schizophrenia: a book for our century

THE text, the seminal text, written about our society, industrial and digital society, modern society, especially American society, in the 20th century, is not The Postmodern Condition. It is not Dialectic of Enlightenment. It is not even One-Dimensional Man, or Empire by Antonio Negri. The seminal text, the Das Kapital of the 20th century, is not Civilization and Its Discontents; that is THE text to be opposed. It is Capitalism and Schizophrenia by the French philosopher Gilles Deleuze. The philosopher Michel Foucault agreed, famously remarking that "perhaps one day, this century will be known as Deleuzian".

In an era of mass incarceration, of mass shootings, of one day of depressing news after another with no end in sight, why do we need to read a book written by an eccentric French philosopher with unkempt hair? Why shouldn't we instead, as Marxists argue, continue to comb the archives of a communism that failed? Why not Lenin? Well, we should read Lenin. We should read everything else I mentioned as well, and not be exclusive. Because at heart, Deleuze is the philosopher of addition, not of subtraction or opposition, which is the dialectical logic. Are "dialectics" (whatever that means) still useful? They are one tool in our arsenal, our assault against the forces of oppression, which have begun to tear away at the fabric of social reality itself, and to convince people that their servitude is their only chance at freedom. Deleuze offers us a theory for thinking the Whole, for rethinking what Being itself is, and he also offers a pragmatics of the possible.

Those who scoff at the idea of revolution will always tell you: "the banks are too big to fail", "not in my lifetime will that happen". They have accepted life as it is, for how can they not? They don't want to live in a dreamlike reality. Deleuze takes the schizophrenic, who already lives in a dreamlike reality, the crazy, as the model of the subversive, as the model of a person who immanently fights against the capitalist model of a lifetime of servitude to work, because they truly have no other choice. Deleuze proposes that going to therapy, or onto the psychoanalyst's couch, to "resolve your problems" will only put a band-aid on a problem that emanates fundamentally from the social field. This is what Freud missed, and this is why Deleuze titles the first book of Capitalism and Schizophrenia Anti-Oedipus.

Deleuze starts by talking about schizophrenia in the abstract, and moves on to conceive of how we could remake the whole "socius": we need a psychoanalysis of the social itself; society itself needs to go to therapy! Who would be the psychoanalyst for that? Well, we would be, for each other. Schizoanalysis is born: we realize we are all crazy together, and we can do something about it. Because before the father beats the son, the father is beaten by the system every day: at work, by the boss, by the checkout line, by the debt collectors, by his whole life. This is no justification of the way things are; it is a way out of the moralism that traditionally plagues psychology, the moralism attached to what are essentially just the norms of bourgeois society. And as Foucault and Deleuze realized, though Marx is the towering thinker of the 19th century, he still swims like a fish in 19th-century water. The 21st century is here, Mao is dead, Marx is dead, and we have to carry on, with only the record and ghostly trails left in the tracks.

How, Deleuze asks, do we make sense of reality when we are constantly fed information by way of digital feedback? When the advertisers know exactly what to show you at any given time on your computer screen, how do you escape from the digital prison? Deleuze not only shows, like Guy Debord in The Society of the Spectacle, that modern subjectivity is formed by the media, but also how person and media form an inseparable whole, a person-media-technology assemblage. Why is this important? Because it portrays reality the way it actually is, where two things that look separate actually are not. Deleuze's conclusions start out looking like common sense, but put together, they challenge the fundamental wisdom of Capitalism. Put together, like the main character in They Live, we are able to see through the prison of how we were conditioned to see the world, and a whole new world of potentiality emerges, almost visible, sometimes perceptible on the margins.

Deleuze continues his saga in A Thousand Plateaus, considered in postmodern philosophy a conceptual breakthrough and a work of towering genius. It must be said at this juncture that Deleuze co-wrote the book with the anti-psychiatry radical Félix Guattari, to whom I have not given enough credit. The only reason Deleuze has been marked here as the genius is because we are pre-programmed to prefer the work of One over many, even though there have been many productive intellectual pairs: Watson and Crick, Marx and Engels, Sartre and Beauvoir, and not enough pairs of women. Deleuze is the philosopher of addition, of multiplication: we need new assemblages, new becomings. It is here that Deleuze elaborates on the concept of becoming. This may be too poetic a concept for those hardened political and social thinkers who are concerned with the value of the GDP, with materialism. But Deleuze is also a materialist, and becoming is essentially material. There is materially a becoming that we experience concretely when we take a walk in the woods: we become more like the animals themselves. We don't want to be told anymore about logic and maxims and means of production; feeling matters to us. We don't want to be told our culture doesn't matter, our traditions don't matter. They do, and capitalism has been stripping them away from us, and we are continually alienated from ourselves. Of course there are material processes going on, and I could quote statistics about world hunger. But I could also tell you the story of one woman who contracted AIDS from a soldier in Haiti she thought loved her, and who before that worked as a maid in the capital city for pennies while her rich clients engorged themselves. That is what we aim to do! Our suffering is real, and so is our happiness!

This is why Deleuze and Guattari are important to me, and important to the world. Self-respecting "experts" on Deleuze should not point to his more "rigorous" books on Kant and Hume as examples of his "true genius", for they are perpetuating the logocentric model of thinking that has driven people away from academia, from thinking, as yet another example of "disconnected elitism". Philosophy can matter to our lives, so much so that it can shape history. D&G can shape history, and should, or we may face even darker times.

Lyotard, Science, and Kurt Gödel

In a passage in the Postmodern Condition on the way science legitimates itself, in the sense of obtaining the addressee’s assent, I came across this:

The following question is more pertinent to legitimation: By what criteria does the logician define the properties required of an axiomatic? Is there a model for scientific languages? If so, is there just one? Is it verifiable? The properties generally required of the syntax of a formal system are consistency (for example, a system inconsistent with respect to negation would admit both a proposition and its opposite), syntactic completeness (the system would lose its consistency if an axiom were added to it), decidability (there must be an effective procedure for deciding whether a given proposition belongs to the system or not), and the independence of the axioms in relation to one another. Now Gödel has effectively established the existence in the arithmetic system of a proposition that is neither demonstrable nor refutable within that system; this entails that the arithmetic system fails to satisfy the condition of completeness. Since it is possible to generalize this situation, it must be accepted that all formal systems have internal limitations. This applies to logic: the metalanguage it uses to describe an artificial (axiomatic) language is "natural" or "everyday" language; that language is universal, since all other languages can be translated into it, but it is not consistent with respect to negation: it allows the formation of paradoxes.

It seems I've been beaten to the punch by Lyotard in assessing how Gödel's incompleteness theorem could apply to philosophy. I generally agree with Lyotard, but I think I can offer something here. We have to clarify what Lyotard and Gödel mean in order to avoid mistakes of interpretation, namely to correct the reading of Lyotard on which all interpretations are equal and it's all just "language games and metanarratives". The term postmodernism in the title doesn't help at all in this regard… but let's start with Gödel.

If one were to apply the analogy directly (and Lyotard is here playing a bit of a language game himself), Gödel's incompleteness theorem says that any formal system rich enough to contain the rules of arithmetic is incomplete: within every such system one can construct a proposition that is consistent with the axioms, i.e. it isn't refutable, but that cannot be derived from those axioms either. Therefore, there are always missing rules; adding the undecidable proposition as a new axiom just produces a bigger system with its own undecidable proposition, so in that sense there might be an infinite number of basic arithmetic rules. (The axioms themselves are propositions like a = a, a + b = b + a, etc.)

This should NOT be confused with the idea that all metalanguages or systems are fundamentally INCONSISTENT. They are merely incomplete: as Lyotard says, a set of axioms can prove or entail a certain number of conclusions, but each theory has a "scope", a set of limits. This is so basic a proposition as to merit being common sense. For instance, general relativity is an accepted theory, viewed as legitimate science, and certain calculations can be done on its basis, but it has limitations at certain energy levels, in special physical situations, etc. In fact, science often progresses by questioning the limitations of an existing theory and then creating a new theory that explains the previously unexplained phenomena.

So my grade: does Lyotard pass? Exceptionally well. He does a good job of translating Gödel's theorem into basic language. But I'd give him a B, because we can derive more from Gödel; his incompleteness theorem tells us even more about systems of knowledge. Moreover, Lyotard switches registers when he moves into the realm of logic itself. Do you notice? "All other languages can be translated into it, but it is not consistent with respect to negation." To me this doesn't necessarily follow from Gödel. Even granting him that Gödel's theorem can be abstracted to all logical systems, incompleteness is NOT inconsistency!!! This may be part of the source of Lyotard's bad rap as a relativist.

It is true that science has to use the language of common sense, of discourse itself, to legitimate itself. Using the analogy of Gödel can also be profitable. But a precise reading of what Gödel's incompleteness theorem actually is requires scientific literacy; it requires (to make things even more complicated) not only an in-depth knowledge of a concept, but also the intellectual tools to understand it. How's that for an epistemological whopper!

Lyotard gets things three-quarters right, but opens himself up to the critique of pure relativism, instead of stating outright what he really is: a social constructivist. And one has to be an idiot not to be a social constructivist, because of course science is made by man. The question is whether the Knowledge (capital K) man has is fallible. And… it depends on the circumstance! And of course it's incomplete!

Of course scientific research plays the political game in this regard, and that shapes the kinds of pursuits and questions scientists are after. But the "paradox" is that the science produced in the meantime is not untrue: atomic physics is all too real. It would be better to take what we need from Lyotard et al. as social theorists, productive concepts like the postmodern informational society, and use them to enter into a dialogue on what exactly is going on now. Žižek makes the critique that the concept of postmodern digital society can actually obscure the dynamics of capitalism. Yes, but maybe only in appropriations of the idea, not within the confines of Lyotard's text, where he is definitely employing a Marxian analysis of social phenomena. Take this passage from the same chapter, on the "pragmatics of science":

"…what happened at the end of the eighteenth century, with the first industrial revolution, is that the reciprocal of this equation was discovered: no technology without wealth, but no wealth without technology. A technical apparatus requires an investment; but since it optimizes the efficiency of the task to which it is applied, it also optimizes the surplus-value derived from this improved performance. All that is needed is for the surplus-value to be realized, in other words, for the product of the task performed to be sold. And the system can be sealed in the following way: a portion of the sale is recycled into a research fund dedicated to further performance improvement. It is at this precise moment that science becomes a force of production, in other words, a moment in the circulation of capital.

It was more the desire for wealth than the desire for knowledge that initially forced upon technology the imperative of performance improvement and product realization. The "organic" connection between technology and profit preceded its union with science. Technology became important to contemporary knowledge only through the mediation of a generalized spirit of performativity. Even today, progress in knowledge is not totally subordinated to technological investment."

The obvious employment of terms like surplus-value shows that Lyotard is faithful to the Marxist cause of identifying science as a force of production, as Marx elaborates in The German Ideology. Lyotard takes this history of science and abstracts it to the development of technology and capitalism itself. He demonstrates that science was not always synonymous with technological advancement; today the two are intimately connected, due to the development of research institutions, which he goes into. Lyotard's critique here seems to be that even where there are "pure research institutions", the goal is not knowledge as such, but technological advancement for profit.

Of course, one could apply this critique with great effect to the field of medical research in pharmaceuticals, for example, though that would have to be done in detail and in the concrete. Some anthropologists have done work to that effect, looking at the Monsanto Corporation and how safety research is rushed in order to put out a product as quickly as possible, specifically for fertilizers, pesticides, etc., i.e. things that are at the moment necessary for the survival of the human species. But what does this level of abstraction miss?

It misses the fact that in the politics of Science in the abstract, there are many fields, many different social interactions. Yes, there is one principal axiomatic that has corrupted this field, correctly identified by Lyotard: the axiom of Capital, of gaining power and notoriety. But what often comes out of it is not only the commodification of knowledge, as Lyotard so presciently realized is happening, but also the creation of powerful new, real technologies. This is why I believe that instead of postmodern informational society, media society, even society of the spectacle or bureaucratic society of controlled consumption (which Jameson has identified as parallels to Lyotard's concepts), the society of today is the digital society of control. This is the immanent, concrete way we should talk about the world, because you know, in your heart, that these mechanisms of control operate. They will find you if you don't pay your credit card bill. Believe me, they will find you.

The knowledge of how that credit card operates is well known, well defined, almost perfect. There is a margin of error, things that can be exploited, but in objective, solid (or should I say fluid) reality. So is there such a thing as "objective reality"? Again, I want to make this perfectly clear: it depends on your definition!! If your definition of objective reality is something separate from human consciousness, something inherently stable, then no. If it means that reality simply exists, then yes. Of course there is reality! And as we are beginning to show in quantum physics, that reality is no different from virtual reality. It's one big show, and the advertisers know just how to manipulate it.

Have a happy Christmas season, and remember, Never Work, fuck Trump, and don’t buy beyond necessity.

P.S: Do I even need to mention how horrible Trump’s tax plan is? The bank robbery has begun

Physics in modern culture: more thoughts on the String Theory debate

It seems that every mathematician or physicist out there, whether they work in string theory or not, has an opinion about it. My review of the current controversy reveals an ongoing heated debate in the physics community that revolves around a philosophical question, often characterized as empiricism vs. rationalism. I will make the case here that regardless of what is actually true about this famous "Theory of Everything", this framing has not made things clearer, but has shrouded everything in a kind of conceptual haze that distracts from the details of the actual debate (of which I am NOT an expert).

Here is what I can say as a non-expert that I find fascinating:

The Large Hadron Collider, the famous particle accelerator, has continued to turn up nada as proof of supersymmetry, one of the necessary predictions of string theory, and has continued to make observations consistent with the Standard Model. This is fascinating because, on its face, the Standard Model has recognized theoretical weaknesses, most famously that it isn't compatible with general relativity. String theory emerged not as a makeshift candidate for patching these problems, but from physicists trying to work out the theory's problems on their own terms. So what does that mean? It means that despite all the headlines you read about how "string theory may finally be killed!", it actually hasn't been definitively killed or vindicated: no evidence has been found against it. Strangely enough, string theory has weaknesses as well, like preferring a cosmological constant that is negative or zero (we know that the cosmological constant is in fact positive).

So it seems like we have an answer: the Standard Model is incomplete, but string theory won't do the job. The strange thing is that we really don't have any other alternatives. Great theoretical physicists have looked at string theory and marveled at its elegance and at how it gets rid of certain problems with quantum field theory. So it seems like everywhere we turn, we find more contradictions.

What strikes me as odd in this whole debate is that pop scientists and professional mathematicians who aren’t string theorists love to hate on string theory because of lack of evidence, but there is never any questioning of the methodology of the experiments. It seems to me that there is more room for human error in an experiment as complex as the LHC.

Let’s take a closer look at the Large Hadron Collider, which has famously announced the existence of the “God Particle”, or the Higgs Boson/field that gives particles mass.

The Large Hadron Collider, the largest single machine in the world (and therefore the largest ever built), sits on the France-Switzerland border. Data from the LHC is analyzed by 170 computing centers in 42 countries, according to Wikipedia. Now, I'm not a math scientist [laughs], but that seems like a lot of room for potential error. At any given time, the beam pipe of the LHC needs to be kept at an almost complete vacuum, containing no more hydrogen than would fit in a grain of sand. With a total operating budget of about $1 billion per year, and a construction cost of around $7.5 billion, the LHC has been more an object of marvel than of scientific scrutiny in the popular press.

Famously, public concern over the safety of the LHC, whether it could produce a black hole, etc., was met with scoffs and simple dismissals. It can reasonably be assumed, after years of operation, that the real dangers of the LHC are not on the order of a "doomsday weapon" like in the Dan Brown novel Angels and Demons, but it is fascinating to me how the single largest machine ever constructed by mankind seems to escape scrutiny. The best demonstration of this I can find is how the LHC is described by "RationalWiki", the kind of site frequented by Sam Harris lovers and "sciencephiles": "the LHC is a kick-ass piece of scientific [equipment] designed to replicate conditions immediately after the Big Bang…" etc. Now, it seems to me that the term kick-ass is not very rational or logical; it seems pretty emotional to me. But of course, the defenders of Rationality will always be right by definition!

There is absolutely no doubt that this subject is fascinating, but there are of course two questions that come out of this debate:

  1. Are the questions we ask worth $1 billion a year? The answer is probably yes: compared to America's skyrocketing military budget, the cost seems trivial. Still, gone are the days when we could verify physical theories with simple telescope observations; expect the costs of particle accelerators to only grow.
  2. What are the potential technological benefits that could accrue from a device that essentially just runs experiments? Computers have not even reached the point of utilizing all the quantum phenomena we already know about. Is verifying string theory or quantum gravity even necessary? Of course, these kinds of questions are blasphemy for the scientific establishment, but they are exactly the kind currently being asked by the directors of the LHC, who, after finding no evidence of supersymmetry this year, expect to move their focus to other, admittedly less "sexy", areas.

Proponents of the LHC will invariably point to the achievement of discovering the celebrated Higgs boson. What I want to avoid is both the unthinking mindset of "oooh, muons! quarks! oh my!" and the simple dismissal of science. Oftentimes criticisms of scientific work are simply demonized as anti-Science. In short, there needs to be the kind of Latourian anthropological analysis of particle physics that currently goes on in other fields. I probably won't be the one to do it, as it would require highly specialized knowledge of the field, and I am currently pursuing other projects. But I encourage others to!

What would a science studies analysis of the LHC yield? What would the “discoveries” be? Hopefully it would simply offer a realistic portrayal of what’s going on at the LHC on a daily basis.

Here's a ray of hope that the people at CERN are concerned about practical applications of the LHC: applying particle accelerator technology to developing radiation cancer treatments for people in developing countries.

This particularly hits home for me, because my father was a radiation oncologist who worked on developing treatments for prostate cancer. My father was a research scientist with an avid interest in physics (he was a physics major), so I am no stranger to the wonders of science and what it can do for real people. In a severe twist of irony, my father passed away of cancer, one that couldn't be treated with chemotherapy. Perhaps we need to do far more in the way of preventative care and larger policy changes, not just find treatments for the worst-case scenario. That's not to say I'm devaluing the work my father did; he contracted a form of esophageal cancer that is far harder to treat.

In short, there always needs to be theoretical work done, and I recognize that. But perhaps with more humanists working in the field, more practical applications could be developed, burgeoning costs could be contained, and maybe even theoretical debates could be seen in a new light.

But my last and probably most crucial point is this: governments find no shortage of cash to throw at the largest scientific device ever created by man. Maybe they could throw some of that money at the refugee crisis, or eliminating poverty?

What we need now more than ever is a "string theory" of humanity: of how the economy, the environment, and society form an integral whole, with our designations of where one ends and the other begins being only, well, arbitrary. If string theory is meant to show the subtle interconnections between all aspects of physical reality, then maybe we should take the poetry of this elegant theory and apply it to social reality. Can we, for instance, demonstrate the relationship between human psychology and the environment, as Gregory Bateson did in Steps to an Ecology of Mind? Could we not go further and connect the philosophy of science with a political ecology? Anthropology continues to be the field that describes these interactions as emanating from underlying dynamics within the social field, which have been revealed to be patterned, not meaningless and random. Kinship structures, political organization, ideology, cosmology, belief, values, norms: how do these function within, say, the search for a God particle? The answer is pretty straightforward. As my adviser George Mentore has suggested, the cosmology operating here is the search for origins, the metaphor of discovery that lies at the bottom of Western understandings of the self and the cosmos. Is this metaphor inhibiting "real" discovery? Is it helping? Are there different metaphorical patterns at work? An ethnographic study would reveal this in detail.

Gödel’s incompleteness theorem, the Heisenberg uncertainty principle, and the physical limits on human knowledge

It seems that the universe itself has limits on how much humans can know. It is well known that string theory, in order to be tested definitively and empirically, requires energies trillions of times greater than what we are currently capable of producing. Gödel famously proved not that the axioms of mathematics are inconsistent, but that their consistency cannot be established within the system itself (by a priori proofs, without reference to the empirical world). If there are any mathematics experts in the house, tell me why this might be an oversimplification, but based on my understanding this is what Gödel's incompleteness theorem means for mathematical reasoning. It is an astounding theorem: a theorem that proves un-provability. At bottom it suggests that the rules of mathematics are derived from the empirical universe itself, or at least that's what it suggests to me.
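Since I asked for a mathematics expert above, let me at least record the textbook statement as I understand it (this is my paraphrase, so treat it as hedged, not authoritative):

```latex
\text{Let } T \text{ be a consistent, effectively axiomatized theory containing basic arithmetic.}
\\[4pt]
\text{(1) There is a sentence } G_T \text{ such that } T \nvdash G_T \text{ and } T \nvdash \neg G_T.
\\[4pt]
\text{(2) } T \nvdash \mathrm{Con}(T), \text{ i.e. } T \text{ cannot prove its own consistency.}
```

Note the hedge built into the statement itself: the theorems limit what a formal system can certify about itself, from within. Whether that licenses my empirical reading above is exactly the kind of thing an expert should weigh in on.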

In addition, the Heisenberg uncertainty principle states that it is impossible to simultaneously determine the position and momentum of an elementary particle with arbitrary precision. Even perfect knowledge of a single elementary particle is denied to us lowly humans. Another limit on knowledge in the realm of physics is the cosmic horizon, the region of the universe inaccessible to the gaze of our telescopes because, beyond that point, the universe is expanding too fast for light from those distant corners to ever reach us.
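To make the principle concrete, here is a minimal numerical sketch, entirely my own toy calculation (in units where ħ = 1, for a Gaussian wave packet, which happens to saturate the bound): it estimates Δx directly from the probability density and Δp from the derivative of the wavefunction, and checks that the product Δx·Δp lands at the minimum value ħ/2.

```python
import math

# Toy check of the Heisenberg bound delta_x * delta_p >= hbar/2 for a
# Gaussian wave packet, in units where hbar = 1. (Illustrative sketch
# only; a Gaussian saturates the bound, so the product should be ~0.5.)

SIGMA = 1.0                    # width of the packet
N = 4000
L = 10.0 * SIGMA               # grid half-width: the tails are negligible here
dx = 2 * L / N
xs = [-L + i * dx for i in range(N + 1)]

def psi(x):
    # Normalized Gaussian wavefunction; |psi|^2 is a normal density
    return (2 * math.pi * SIGMA ** 2) ** -0.25 * math.exp(-x ** 2 / (4 * SIGMA ** 2))

density = [psi(x) ** 2 for x in xs]
norm = sum(density) * dx       # should be ~1; used to correct rounding

# <x^2> by Riemann sum (the packet is centered at 0, so <x> = 0)
x2 = sum(x * x * d for x, d in zip(xs, density)) * dx / norm
delta_x = math.sqrt(x2)

# <p^2> = hbar^2 * integral of |psi'(x)|^2 dx, via central differences
dpsi = [(psi(x + dx) - psi(x - dx)) / (2 * dx) for x in xs]
p2 = sum(dp * dp for dp in dpsi) * dx / norm
delta_p = math.sqrt(p2)

product = delta_x * delta_p    # ~0.5, i.e. hbar/2 in these units
```

Any other wave packet gives a strictly larger product; the Gaussian is the best possible case. That is why "perfect knowledge of one particle" is denied in principle, not merely in practice.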

So, in terms of scientific knowledge, it seems that the vast majority of the universe is off limits to our telescopes, describing the fundamental nature of the microscopic world exactly is currently impossible, and even describing mathematics consistently using math alone is impossible. That's a lot of impossibles. Add to this the cosmic speed limit of 300,000 km/s (the speed of light) for any particle, and it seems that as humans, we have hit a kind of dead end to what we can achieve. Is this true?

Well, in a temporal sense, no, because we have not achieved all that we can achieve given our current knowledge. We have not, for instance, transitioned to 100% renewable energy, even though it is completely feasible socially. But will we have to take these inherent technological and epistemological limits into account? In some sense, we already do. Humanity has had to realize over the course of many centuries that our almost Olympian rational powers, that Greek conception of the human as close to the gods, are a fantasy, and that we do not even occupy God's chosen place in the universe. The world is more inhospitable than we could have imagined; humanity is not able to do whatever it pleases. The Earth itself has limits on how much pollution can go into its skies, and so do the oceans. In this sense, humans have had to rediscover over and over again not only our finitude, but our limits.

We have parables and stories, within a Biblical frame of reference, about the dangers of wanting knowledge of everything. But I am not trying to reproduce this kind of transcendental limit on knowledge, in which knowledge of the universe is reserved for God, and God is inscrutable; that is a definitely antiquated way of thinking. It is just a supreme irony that in mankind's Faustian dream to know everything, to have physical and mathematical theories of everything, we are confronted with our own human limits. And of course, the question has to be posed: where does this ultimate desire for knowledge come from? Is this Will to Truth, as Nietzsche believed, a function of the Will to Power, the will to dominate life and control it?

I would say that, following Deleuze, we have to go even further and refuse any kind of familiarity or "home base", any reterritorialization. We should not become comfortable with the idea that "we cannot know everything" and therefore give up. We should rather expand our philosophical knowledge to include these categories. Following Zizek and Lacan, we can divide knowledge into four categories, represented as four quadrants on a Cartesian plane: known knowns, known unknowns, unknown unknowns, and unknown knowns. Known knowns are what we definitively know; unknown unknowns are things we have not even conceived of yet. Known unknowns are things we know we don't have an answer to, something like Gödel's incompleteness theorem. Unknown knowns are things we forgot we knew, or things we repressed, in psychoanalytic terms. I would add a fifth category to this typology of knowledge, as a subcategory of the known unknowns: known unknowables, things we know we cannot answer. Just by knowing we cannot answer them, we may approach something approximating the relative truth.
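For fun, the typology above can be written down as a tiny lookup table. The boolean encoding and the function name here are my own illustrative choices, not anything from Zizek or Lacan:

```python
# The four quadrants, keyed by (do we possess the knowledge?,
# are we aware of our relation to it?). The encoding is my own sketch.
QUADRANTS = {
    (True, True): "known known",        # we know it, and we know that we know it
    (True, False): "unknown known",     # we know it, but have forgotten/repressed it
    (False, True): "known unknown",     # we know that we do not know it
    (False, False): "unknown unknown",  # not yet even conceived of
}

def classify(we_know_it: bool, we_are_aware: bool,
             provably_unanswerable: bool = False) -> str:
    label = QUADRANTS[(we_know_it, we_are_aware)]
    # The proposed fifth category: a known unknown we can prove has no answer
    if label == "known unknown" and provably_unanswerable:
        return "known unknowable"
    return label
```

The point of the fifth category is visible in the code: it is not a new quadrant but a refinement of one, triggered only when unanswerability is itself proven.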

Perhaps what the Buddhists said is true after all, and the absolute truth is only knowable to an Omniscient being, and only relative truth is accessible to us. Perhaps it is true that we should be satisfied with the relative truth. But we should also realize, following string theory, that there are levels of relative truth, and different levels of precision at which something can be described.


Another way to think about limitations on knowledge using Gödel's incompleteness theorem is to interpret the theorem as saying that any effectively listable set of axioms for mathematics is incomplete: there will always be true statements that cannot be derived from the known axioms, and to capture them all one would need, in effect, an infinite and unlistable supply of new axioms. This is just another way of saying that what there is to know is infinite, and humans are by nature finite beings, and therefore "complete" and total knowledge of the infinite is impossible.

That doesn't mean we should stop looking! It only means we should venture forward, into the infinite abyss, looking, ever looking, not looking for any resting point. An Ultimate Theory of Everything in this sense may be impossible, as Stephen Hawking has actually suggested.

P.P.S.: Some of these themes might be explored in Hofstadter's Gödel, Escher, Bach, particularly in terms of what Gödel's theorem means. However, not having read the book, I'm skeptical of its neurobiological perspective. GEB in my mind seems to be a kind of antecedent to pop philosophy books, leading ultimately to the king of pop philosophy, Sam Harris. Continental philosophy forever! Down with the analytical instrumental rationalists!

Nietzschean eternal return, Deleuzian immanence, cosmology and metaphysics

Recently, I have noticed certain parallels between the philosophies of Gilles Deleuze and Friedrich Nietzsche and the findings of modern cosmology. First I will talk about the relation between Nietzsche and modern cosmology, then I will bring in Deleuze.

In a talk by theoretical physicist Leonard Susskind about the physical reasons for time's unidirectionality, called Why is Time a One-Way Street?, I was instantly astonished to learn that the idea of eternal recurrence is now being seriously considered by cosmologists. It was only a matter of time. The reason for the resurgence of this idea of recurrence is due precisely to the theoretical impossibility, or the discovered lack of reliability, of theories that treat the universe as a "one-shot" universe. Meaning: cosmologies (theories of the development of the universe) which hypothesize that the universe has a beginning, middle, and end, or a Big Bang beginning and a Big Crunch ending, are being tossed out the window. In favor of what?

Well, due to complex reasons relating to the probability of life occurring in a universe, cosmologists consider a cosmological theory that allows eternal recurrence to be a hallmark of failure. Eternal recurrence in this context would be this: while the second law of thermodynamics requires a system to be constantly losing order (the entropy of a system tends, on the whole, to increase), quantum randomness entails that, given an infinitely large amount of time, the thermal death of a system will actually reverse and things will spontaneously grow ordered again. So, in other words, while we do see things gradually tending toward disorder, given unique facts about our universe, including the fact that it is expanding (a positive cosmological constant), the possibility of eternal recurrence cannot be ruled out.
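The flavor of this argument can be simulated with a classic toy model, the Ehrenfest urn (my own choice of illustration, not anything from Susskind's talk): N balls are split between two boxes, and at each step one ball, picked at random, hops to the other box. Mixing wins on average, yet the fully ordered starting state provably recurs, and its recurrence time grows roughly like 2^N, which is why we never see such recurrences at macroscopic scales.

```python
import random

# Ehrenfest urn model: a toy illustration of recurrence in a system
# whose disorder increases on average. N balls sit in two boxes; each
# step, one ball chosen uniformly at random switches boxes. The fully
# ordered state "all balls in box A" keeps recurring, given enough time.
# (Illustrative sketch only; the cosmological argument involves quantum
# fluctuations, not urns.)

random.seed(42)          # fixed seed so the run is reproducible

N_BALLS = 8              # recurrence time grows like 2**N, so keep N small
STEPS = 100_000

def count_recurrences(n_balls, steps):
    in_box_a = n_balls               # start fully ordered: all balls in A
    recurrences = 0
    for _ in range(steps):
        # the chosen ball is in box A with probability in_box_a / n_balls
        if random.random() < in_box_a / n_balls:
            in_box_a -= 1            # it hops A -> B
        else:
            in_box_a += 1            # it hops B -> A
        if in_box_a == n_balls:      # back to the ordered starting state
            recurrences += 1
    return recurrences

recs = count_recurrences(N_BALLS, STEPS)
```

With 8 balls, the ordered state turns up again and again within 100,000 steps; with Avogadro's number of balls, the same mathematics gives recurrence times dwarfing the age of the universe, which is the sense in which recurrence is real but practically invisible.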

Why do physicists view eternal recurrence as such a big problem? Perhaps I didn't follow along as well as I should have, but the idea that if eternal recurrence of the universe is true, in a kind of relative sense of the term (not things repeating back on themselves automatically, as Nietzsche seems to imply), then the universe would look different than it does now seems funny to me. Susskind offered the explanation that this would imply that the universe would have only one mind, called a Boltzmann brain. Here is a link to a definition of the Boltzmann brain:

My theory is that the theory of the Boltzmann brain is itself flawed, not our understanding of cosmology or the anthropic principle. It seems that we have made the right observations. The reason there isn't only one consciousness in the universe is pretty clear: evolution just doesn't work that way. For a brain like ours to develop takes reproductive life, and therefore many lifeforms. The article mentions that a multiverse resolves the Boltzmann brain paradox. I offer this formulation of a resolution to the Boltzmann brain paradox.

The ideas of eternal recurrence, or the observed low-entropy universe as a fluctuation in a high-entropy meta-universe, and of the multiverse, or the Big Bang occurring from random quantum fluctuations as a bubble forming in a meta-universe, have already been hypothesized by other physicists. [Some cosmologists disagree about this, I should add, for technical physical reasons, and still believe in a multiverse, but I'm arguing more on a broad philosophical level.] But the anthropic principle holds true: the only reason we can observe the universe at all is because we are here by chance alone. The universe as we know it, ordered, seemingly there on purpose, is the result of eternal recurrence, or of the fact that there are infinite possibilities that could have occurred and we are the one that worked. The vast majority of other possible universes, whether real or imaginary, were simply voids, empty space.

From this cosmological perspective, the idea of transcendence itself seems absolutely silly. David Hume's critique of the cosmological argument rears its head again, and one realizes, based on the sheer scale and physical properties of our universe, that the idea of infinite regression is not a remote possibility but almost self-evident.

This is where the Deleuzian idea of immanence comes in. One should not posit a transcendental dimension or component to anything whatsoever- this is where New Agers obviously stray when it comes to their interpretation of quantum mechanics. Rather than see the more obvious metaphysical implications of quantum physics when it comes to fundamental randomness or chance, New Agers would rather reassert consciousness as a transcendental plane, rather than a plane immanent to the world, a simple Being-in-the-world. Consciousness is not a mystery- and neither is thinking. What is a mystery continues to be the eternal Whole, the One-All, the plane of immanence itself.

In short, when one hears that cosmologists are reconsidering the idea of a multiverse, or that there are ten dimensions of space, don’t start foaming at the mouth looking for a window to heaven. It is only an expansion of the idea of the vastness of the universe in space and time. What is out there is mostly nothing, and vast tracts of unreachable stars. But maybe, just maybe, we will experience that quantum randomness for ourselves, that highly improbable moment…who knows what we will find?

To me, I take comfort in the fact that we are no longer at the center of the universe. That's a lot of pressure. I'd rather be a product of the infinite play of chance than the creation of an omnipotent deity. Maybe there are rules in this universe that we have not discovered yet, a kind of ontological freedom that we are only now beginning to grasp. Perhaps humanity can make of itself whatever it wants. And on a final note, I don't think this idea, this kind of cosmic dreaming, is an attempt to rediscover the transcendent, if it is interpreted in the right way. All it means is that we are free as human beings to choose our own destinies; in fact, as Sartre said, terrifyingly free.

But enough about us. What is astonishing to me is the idea that things will, given enough time, repeat. The whole world, from start to finish, the whole history of mankind, in a different corner of the universe. This is a distinct ontological possibility. I'm glad that we are cut off from ever seeing worlds like this, these kinds of parallel universes. If there is one thing mankind has learned about the universe, it's that it is more inhospitable and terrifying (from a human perspective) than we could have ever imagined. The vast voids of nothing, dancing infinitely in their impersonal swirls of gas and light. It is also ironic that we essentially owe our existence to immense furnaces, points of high entropy.

Is there a satisfying conclusion to this picture of the universe? No, if only to marvel at the sheer immensity and weirdness of the infinite.

Why Tibet matters

Great lecture by one of the premier Tibetan historians, Tsering Shakya, on conceptions of Tibet by the Tibetans themselves and by China. Tsering is one of the foremost Tibetan intellectuals; he gives detailed reasons for why Tibet is an internal colony of China and challenges the "Marxist" orthodoxy on Tibetan issues.

A friendly deconstruction of Bruno Latour

In this post, I will attempt to have an extended dialogue with Latour on his terms (for my original thoughts on Pandora's Hope by Bruno Latour, see my previous post on this subject for an introduction to the topic). I will attempt, line by line and paragraph by paragraph, to engage with Latour's text and try to extricate some meaning out of it. I really do believe this is an excellent book, groundbreaking even. It is essentially an apology for science studies, the philosophical and anthropological study of science. Nonetheless, I will try to add to it, as an "interlocutor".

For now, I will limit myself to Chapter 1 of the book. Latour refers throughout the chapter to the reader as a friend, and as a friend, I cannot help but engage with the subject matter as an outside observer who knows how texts are put together. This is what I mean by deconstruction. However, as Latour points out, deconstruction is often satisfied with taking apart the pieces and not forming any kind of conclusion. I will attempt to deconstruct and then reconstruct Latour's discourse, to engage with the actual ideas he puts forth, while salvaging the concept of deconstruction in the process.

The book starts out, as I said before, with a psychologist asking Latour “do you believe in reality?” Latour goes on to say that:

“The psychologist’s suspicion struck me as deeply unfair, since he did not seem to understand that in this guerrilla war being conducted in the no-man’s land between the “two cultures”, we were the ones being attacked by militants, activists, sociologists, philosophers, and technophobes of all hues, precisely because of our interest in the inner workings of scientific facts. Who loves the sciences, I asked myself, more than this tiny scientific tribe that has learned to open up facts, machines, and theories with all their roots, blood vessels, networks, rhizomes, and tendrils?…Then I realized that I was wrong.” (3)

Latour portrays himself as a lover of science, not its "deconstructer" in the sense offered by postmodernism and Derrida. He will return to this question later. However, he acknowledges that this is not how science itself sees him and science studies: science views their work as an attack on its legitimacy. Here Latour openly struggles with this question of the "science wars". First, he attempts to find a "genealogy" of the epistemological perspective of science. He starts with Descartes, and here I agree with him when he says that the concept of an "outside" reality separate from us started principally with Descartes.

“Descartes was asking for absolute certainty from a brain-in-a-vat, a certainty that was not needed when the brain (or mind) was firmly attached to its body and the body thoroughly involved in its normal ecology…Absolute certainty is the sort of neurotic fantasy that only a surgically removed mind would look for after it had lost everything else…For Descartes the only route by which his mind-in-a-vat could reestablish some reasonably sure connection with the outside world was through God. My friend the psychologist was thus right to phrase his query using the same formula I had learned in Sunday school: “Do you believe in reality?”- “Credo in unum deum”, or rather “Credo in unum realitam””

In this passage, Latour points to the fact that the Cartesian cogito is a disembodied mind, that Cartesian dualism has been an error that divorced the mind from the body and thereby formed the basis of the rationalist error which Nietzsche sought to correct, by remembering that the human being is also a flesh and blood body that feels pain. "I think therefore I am": the cry of the navel-gazing philosopher. So far so good. I wish there were a summarizing statement (he quotes Lyotard, after all) that science in its current form contains a metanarrative about reality, the Cartesian metaphysical inheritance and baggage. Maybe he's trying to convey this to a more popular audience, but he doesn't have to; the book actually becomes very technical.

Here's where I start to take issue with Latour's philosophical narrative. Latour goes through an admittedly "slapdash" history of Western philosophy, saying that the conception of the mind slowly became more disconnected from reality, essentially implying that Western philosophy committed the error of solipsism. Not quite, but Latour is doing two jobs at once. First, he dismisses the epistemology of empiricism and the tabula rasa by saying that in this conception of the world "they were still dealing with a mind looking through the gaze at a lost outside world". Latour basically claims that the principle of the cogito still operates in empiricism, subtly, even though empiricism is first and foremost a critique of Cartesian rationalism. I fault him for not going further into this, even though I think he has a point. Latour starts going off the rails when he talks about Kant.

This philosophy was thought, strangely enough, to be the deepest of all, because it had at once managed to abandon the quest for absolute certainty and to retain it under the banner of “universal a prioris”, a clever sleight of hand that hid the lost path even deeper in the thickets. Kant had invented a form of constructivism in which the mind-in-a-vat built everything by itself but not entirely without constraints: what it learned from itself had to be universal and could be elicited only by some experimental contact with a reality out there.

Before we get into Latour’s interpretation of Kant, what is this “lost path” of which he speaks? Naive realism. No joke! Here is his conclusion:

“Through a series of counter-Copernican revolutions, Kant’s nightmarish fantasy slowly lost its pervasive dominance over the philosophy of science. There was again a clear sense in which we could say words have reference to the world and that science grasps the things themselves. Naivete was back at last, a naivete appropriate for those who had never understood how the world could be “outside” in the first place.”

Now, before we get into this, I'll give a brief synopsis of what Latour calls the "fear of mob rule" and its effect on scientific reasoning. This is the best part of the chapter by far. It offers a sociological and genealogical explanation of why the psychologist asked the question. He traces its deep roots back to Plato's Gorgias, and to how Socrates' argument with Callicles, which took the form of a debate on whether Might makes Right, was already set in an aristocratic frame in which Might was the moral might of the ruler, not that of the mob of common people who had only "brute strength".

Here is Latour’s summary of it:

“As I said, two fears lay behind my friend’s strange question. The first one, the fear of a mind-in-a-vat losing its connection to a world outside, has a shorter history than the second, which stems from this truism: if reason does not rule, then mere force will take over” (10)

In essence, Latour is saying that scientists want to guard against the postmodernist idea that what is true is only what the mob deems true at a given moment, and thus more input by non-scientists into the field is seen as an attack on scientific objectivity. This statement, taken by itself, is pure genius. The two fears are essentially correct. My problem is with what goes around them. The fear of the mob is also most associated with an essentially Hobbesian politics, in which the human subject is essentially something to be controlled. There is no mention of Hobbes in the book. This can be forgiven in my mind, but his defense of naive realism and dismissal of Kant cannot.

But first let me get to Latour’s conclusion:

Science studies, as I see it, has made two related discoveries that were very slow in coming because of the power of the settlement [between epistemology, morality, politics, and psychology] that I have now exposed- as well as for a few other reasons I will explain later. This joint discovery is that neither the object nor the social has the inhuman character that Socrates’ and Callicles’ melodramatic show required…When we say that science is social, the word social for us does not bear the stigma of the ‘human debris'”. 

Again, this is brilliant. The chief insight is the connection between the fear of mob rule and the scientific construction of facts. Latour attempts to reconnect reality, by way of a simple diagram, by saying that nature, society, mind, etc. were never separate. My problem is with the epistemology he defends at the end of all this, an epistemology I claim is not his real one.

Latour invokes the poetics of a world in which objects shape people and people shape objects; the scientist is shaped by his experiments as much as he shapes them. What absolutely puzzles me is that, despite Latour's focus in later chapters on scientific method and on the precise ways scientific data is categorized and systematized, Latour chooses to defend a naive realism. I chalk this up to the fact that Latour is associated with object-oriented ontology. As I have stated in other posts, OOO talks about the interaction between objects in the real world. Latour chooses to see the construction of scientific facts as the result of interaction between objects, social actors, etc. Latour defends interdisciplinary studies in this way, in which more connections are good. I have no problem with this. My problem is his obvious disdain for Derridean deconstruction, which I believe accounts for his disdain of Kant.

For one, Latour has his philosophical history incorrect. Latour should find a worthy ally in Kant for science studies, with a few tweaks. By taking Kant's transcendental a priori categories through which the mind views reality, and substituting learned cognitive interpretive mechanisms, Latour could see Kant in his proper context as properly revolutionary, instead of as a relic of a bygone era. Latour does not recognize how his entire project, indeed his entire philosophical existence, depends on Kant's innovation, on Kant's simple recognition that reality is experienced through the mind, and that space and time are categories by which we experience reality, not natural categories inherent to reality itself. Latour, despite this, seems to defend a Newtonian view of the world proper to Descartes, whom he rightfully despises. Latour has abandoned his philosophical predecessors as simply those who confused matters and constructed virtual "prison cells" of thought, who kept us from viewing reality as it always was: just simple connection, simple embeddedness. Latour thus, in a very strange way, implies that we should analyze things and interactions between people "through the lens of our everyday experience of reality", believing our eyes, etc. As if this is not what the scientists he is supposed to critically examine do! Latour does not go as far as to defend "common sense", since he believes common sense in our culture has been hijacked by the fear of mob rule and the Cartesian cogito, but he commits another sin which is not as easily recovered from: ultimately believing in his perceptions. In that sense, his critique of empiricism is also strange, because he seems to be an empiricist himself, just like the philosophers he intends to criticize.

In short, Latour, in the introduction to his book on the critical study of science, of how science is actually practiced, ends up offering an apology to his critics. Latour makes the absurd statement that "we in science studies may have finally found a way to free the sciences from politics". Unless I'm missing some deep point, the idea that science can ultimately be extricated from political and social forces, or even redeemed of these forces, made cognizant of them so that "bias" can be eliminated, is absurd!

In my naivete, I had assumed before reading that Latour was a Foucauldian. Now I realize he must find Foucault abhorrent (he states this explicitly in a later chapter), rather than acknowledging that he owes a great debt to Foucault, without whom he might not exist. Maybe the simple expression “Power/Knowledge” is an oversimplification- the idea that they are inextricably interlinked, especially in the realm of what was previously considered “objective” hard science, is now unquestionable. His purposeful distancing of himself from his French forebears is bizarre.

Latour consciously positions himself as a Researcher, as a defender of Research, and as an anti-postmodernist, in order to rid himself of the stench of the humanist that he is. He positions himself as completely in between the sciences and the humanities, rather than as simply a social scientist, an anthropologist. Latour is at his best doing what anthropologists do: simple and clear thick description. Normally, anthropologists start out their books without complex philosophical interludes, explaining their research methods. Not so for the hybrid philosopher-anthropologist! Latour is so obviously and culturally French that it's impossible that he is aware of it. This is why Latour's critique of postmodernism's obsession with reflexivity, with questioning one's own interpretation, seems suspicious.

In conclusion, Latour's project itself is interesting and worthwhile. I am less than impressed with the philosophical history, despite his pedigree. There is a veritable obsession in his work with theory, and despite this, I see that he has taken sides on a philosophical battlefield and revealed certain prejudices. The section in the first chapter about what science studies offers the sciences and the humanities is very telling, but it can essentially be boiled down to a call for more collaboration. The idea that philosophical revelations will unfold, or that the sciences will once and for all overcome mortal human weaknesses, I find not only funny but something that undermines what Latour is trying to do. I believe Latour could approach this whole project with a deeper degree of humility.

Latour will undoubtedly see my critique (if he ever sees a lowly grad student's musings on this subject) as the ramblings of a die-hard humanist, a deconstructivist relativist. Far from it! I simply believe that Latour needs to retake Anthropological Theory. Through my anthropological training, I cannot help but see any text, including Latour's, as the product of culture, or more precisely, as the limited product of a human mind, necessarily unfinished and directed toward a particular audience. His book, including the chapter "The Invention of the Science Wars", is meant to allay the fears of the scientists he studies: "do not worry, we are your allies!". Latour does discuss how politics intertwines with science, of course, through funding, institutions, etc. But he must realize that science is never neutral, that there are ideological enemies, that the science wars are already raging among the scientists themselves! Of course he must; he knows this subject better than anyone else. But Latour basically suffers from a principal misconception of what deconstruction is, one which Derrida criticized so often that it's almost impossible Latour himself didn't hear it in grad school. As an interpreter of Derrida says:

“Deconstruction is not synonymous with “destruction”, however. It is in fact much closer to the original meaning of the word ‘analysis’ itself, which etymologically means “to undo” — a virtual synonym for “to de-construct.” … If anything is destroyed in a deconstructive reading, it is not the text, but the claim to unequivocal domination of one mode of signifying over another. A deconstructive reading is a reading which analyses the specificity of a text’s critical difference from itself.”

Latour's text's critical difference from itself, the construction of a kind of master discourse about Science without the necessary degree of reflexivity, is what makes his work, in my mind, at least in this first chapter, insufficiently anthropological. In other words, the interpretive mechanisms through which Latour analyzes a given scientific practice are largely unstated, and the inner workings of his thought process, how he comes to a conclusion (reflexivity), are nowhere apparent in this text.

“Latour has a well-referenced bibliography, that’s how he came to his conclusions! His thought process is apparent throughout the book!” Latour's dismissal of the concept that Western philosophy has relied on a metaphysics of presence over absence reveals his lack of awareness of how exactly the construction of an "outside world", his principal question, is even accomplished. The metaphysics of presence, logocentrism: all of these could be Latour's tools and conceptual weapons. Instead, he chooses to ally himself with naive realism and leave the blueprints of his sketch of philosophical history to the imagination. That is what I mean.

Any questions?


Preliminary thoughts on Bruno Latour’s Pandora’s Hope: Essays on the Reality of Science Studies

More and more when I delve into the subject of the interpretation of science this question keeps popping up: What is Reality? Does Reality exist? Moreover, does Objective Reality exist? Are those two terms different?

Now, I often jump the gun: I have barely scraped the introduction of Latour's Pandora's Hope and already I know what he is going to say (probably because I read the back of the book). I'm also simultaneously watching a talk by string theorist Leonard Susskind and reading the opening chapters of a book currently being written by quantum physicist Ron Garret. But I need to get my thoughts out on paper before I lose them, or before they change given new information. Here goes:

Latour was inspired to write his book when a psychologist asked him "Do you believe in reality?" The intention behind the question is obvious: are you some kind of postmodernist who doesn't believe in reality at all, undermining all of science? Latour answered that of course he does (following up by asking, what's your point?), and he was offended because he thought scientists should understand that those doing science studies were trying to make the sciences even more objective. Then Latour realized the political dimensions of saying that scientists are fallible human creatures, etc. If I am stretching or misrepresenting what Latour thinks, I apologize, but I'm previously acquainted with his views from We Have Never Been Modern.

Latour asks on the back cover- why did the idea of an independent reality free of human interaction emerge in the first place?

Here’s my answer to that, as yet uninfluenced by what Latour has said:

The idea of reality as independent of human interference is essentially a Western construct, necessary for the very existence of science in the first place. It was basically a postulate adopted in order to investigate reality itself, to the point where reality and objective reality became synonymous.

However, now scientists have to deal with “observer bias” and all sorts of phenomena in which human interference changes the parameters of what’s being observed. It’s no secret that everything is connected- it’s intuitive! This doesn’t imply anything strange or mystical at all; it’s very simple. Step in a river, and it’s a different river than it would have been had you not stepped in it (maybe not by much, but it is different). This principle holds for social sciences like psychology and anthropology as well as for hard sciences such as physics.

Therefore, quantum physicists- through various phenomena that force them to take into account the physical effects of observation, and the laws that come out of that, like the Heisenberg uncertainty principle- were the first to abandon this notion of objective reality, or reality independent of human “observation” (I clear up some of the New Age misinterpretations of this in another post). What I’m getting at is that some scientists may be turned off by the notion of denying that there is such a thing as objective reality because our terms have been conflated. We are essentially talking at cross purposes. The other reason may be that, as Latour says, it threatens their very reason for existence.

To me this is not rocket science at all. This is because from a young age I have learned all the terms of Buddhist metaphysics, namely, dependent arising, interdependence, causes and conditions, the lack of inherent existence of any phenomenon, etc. This last one, the lack of inherent existence of any phenomenon (shunyata), or the lack of independent existence of any phenomenon, is, I think, what the lack of something called “objective reality” really means. No need to invoke general relativity at all. It’s perfectly clear to anyone given enough thought, and every single student of Buddhism has been led through this “insight” meditation. Through a certain traditional “thought experiment” the student comes to realize that because no object is independent of any other, the designation of that object as an object as such is relative. This is the literal term that is used! In Buddhist metaphysics, there is absolute reality (in which there are no things as such) and relative reality, in which there are things that exist relative to other things and relative to our minds.

Maybe it’s because I have been sort of inculcated in Buddhist ways of thinking, namely the Middle Way school common to Tibetan Buddhism, that I find these debates about whether there is reality outside of human interaction generally very boring.

And this comes back to my original “critique” of Latour. My critique of Latour and company wasn’t that I disagreed with them- I do agree with them! It’s that the way they come up with their conclusions is basically reinventing the wheel. I contend that the only reason we were able to get outside this Western philosophical frame of reference (starting with Nietzsche, Heidegger, etc) is exposure to the non-West. It’s well known that Schopenhauer was one of the first to study Buddhism in Europe seriously based on new translations, and Schopenhauer was one of Nietzsche’s primary influences, etc.

What I am saying is this- if Latour came to his conclusions independently using science studies and his own “genealogy of thought”, then I give him all the credit in the world. A quick look through his index reveals references to Foucault, Leroi-Gourhan, Lyotard, etc.- all people who have influenced me. But all I’m saying is that Latour is still working in what I and fellow philosophy blogger Landzek at Constructive Undoing are attempting to call the “scholar’s paradigm”, which is basically a reliance on Authors and Authorities with a capital A (something Foucault and Derrida, incidentally enough, wrote a lot about).

So what does Buddhist philosophy have to say about independent objective reality directly? Buddhist philosophy, before talking about reality in general, usually approaches the topic of emptiness of inherent existence by way of talking about selflessness, or the lack of an inherent self or “I”. The primary Buddhist philosophical text (as opposed to a strictly religious text or sutra) that relates to this concept is the second-century philosopher Nagarjuna’s Treatise on the Middle Way. In the eighteenth chapter of that treatise, Nagarjuna states:

“If the aggregates were the self, it would be produced and disintegrate. If it were different from the aggregates, it would lack the aggregates’ characteristics”. 

This is a formulation of emptiness in terms of selflessness. The commentary explains that this verse means that because the self is one and the aggregates are many, the self cannot be identical to its parts. Similarly, if it were something completely different from the parts, it would lack the characteristics of the parts. Seeing that humans lack an inherent identity is much easier, however, than showing an object’s lack of inherent existence.

Nagarjuna goes on to say that:

“Whatever arises relying on something else, is from the outset not that. Nor is it other than that. Thus it is neither non-existent nor permanent.”

This is a statement about causality. A cause is neither completely identical to nor completely different from its effect, because the cause transforms into the effect. In the words of the commentary on the text, “dependently arising nominally imputed phenomena are neither inherently one with nor inherently different from their causes and conditions or parts and their mode of existence is between the two extremes in that they are neither totally non-existent nor do they have any reified existence”.

This great tradition of the Middle Way (Madhyamika) school of philosophy founded by Nagarjuna was studied for at least four years by every monk in many great monasteries of Tibet. Maybe you will say that I am appealing to another kind of authority in asserting that Buddhist philosophy has something to say about this problem of objective reality. My biases aside, I can tell you that these fundamental truths are not dependent (funny enough) on any religion; they are truths about reality itself. But Buddhism has and will continue to have more to say on this subject precisely because it has been a topic of intellectual inquiry for well-nigh two millennia. The idea that they were all just sitting around chanting in monasteries for two thousand years is a false one, based on lack of exposure to the tradition. Many other Eastern schools of thought have things to say about this as well, namely Zen, Taoism, etc. It’s the lack of a single reference to Eastern philosophy in something like Latour that really bugs me, and it just furthers our Western mythology of scientific objectivity- the very mythology Latour is trying, whether he likes it or not, to undermine.

Despite all I’ve said, I’m sure that I will thoroughly enjoy Latour’s book. To be accepted, Latour generally has to go through certain conventions and jump through certain hoops, and I perfectly understand that. His project of science studies is something different from a rethinking of traditional Western ontology through Buddhist categories. My hope is that one day the name Nagarjuna will be cited like Hegel in an academic bibliography. Then maybe my job as an anthropologist will have been accomplished.


More stuff on string theory- great lecture


If you want to know what’s actually going on in some awesome new area of physics, don’t watch a ******** TED Talk. Watch an actual lecture. Here, Leonard Susskind, world-renowned theoretical physicist and professor at Stanford University, presents some of the theoretical and experimental origins of string theory (yes, there is experimental data which gives some credence to string theory- I was blown away too!). Let the particle physicists be the experts on particle physics- now was that so hard?

I haven’t watched the whole lecture yet (it’s a whopping 1 hour and 45 minutes), but I can tell you that this isn’t some bogus idea coming out of the mind of some amateur- to think that it ever was is pretty ridiculous. Whether it’s true or not is a different story. But it certainly would explain certain patterns found in the data.

P.S.: I’ve been rethinking the value of Bruno Latour and his whole project on the anthropology of science. I think I was too quick to judge his contribution- he’s probably famous for a reason. I just checked out one of his books, Pandora’s Hope: Essays on the Reality of Science Studies. I know Latour also wrote a book on early experiments in quantum physics. I wonder what he would have to say about string theory. We’ll see what I think at the end. More posts to come, stay tuned!