Open Criticism

"…our attempts to see and to find truth are not final, but open to improvement…" Karl Popper

At the Margin of Life

 

            The inevitable fate of every human is death. Yet its ubiquity does not make it any less of a tragedy. The ritual of funerals, annual visits to memorial sites, and other associated behaviors affirm this sentiment. The emotional disturbances that accompany death’s proximity suggest that it is perhaps among the most tragic features of vital existence. Perhaps the only feature that descends lower is the sort of suffering that makes death a comforting notion. Irrespective of the value that we might assign to life, a reckoning needs to be made when certain extremities are placed on it. That is, one’s life is certain to have an end. If a medical professional has divulged to a patient that their life has reached its margin, and that margin is accompanied by suffering, then, operating from a value-based ethic of care, assisted death should be a practicable option.

            Variables such as reducing health care costs, or expending resources and time to preserve a terminally ill patient rather than to help a patient who is not at the margin of their life, seem arbitrary or impersonal. To an extent, these factors remain arbitrary under the assumption that all life is valuable. But the principle becomes less symmetrical once certain boundaries are in place. That is, there might be others in critical condition, and the attention placed on our terminally ill patient might deprive the patient who is in critical condition but has a real probability of recuperating. However impersonal, these are genuine variables. It is essential to acknowledge that the asymmetry between the two patients does not necessitate an asymmetry of value. Further, to accelerate this unavoidable fate does not necessitate a devaluing of human life, nor does it mean that we should emulate the decision in a scenario that does not involve life at its margin.

            Perhaps we might traverse a bit further down the impersonal route. It is commonly known that the world population exceeds seven billion people. An exact threshold for overpopulation is dubious, if it is possible to know at all, and it could be manifesting presently. It appears odd to North Americans that the Chinese have implemented laws limiting the number of children a family may have in order to prevent overpopulation, since North American Capitalist principles encourage, if not stimulate, exponential population growth. Growth is an inveterate feature of the Capitalist’s natural order of society. Accruing growth, however, poses an inimical situation for the biosphere; its resources and capacities are, of course, incapable of sustaining occupancy beyond a certain point. This is not typically a concern for Capitalism’s interest in limitless growth. It would not be surprising if an extension of this principle transfers into the way we conceive of preserving life. Accepting this seemingly impersonal paradigm makes the ostensibly arbitrary costs and resources a more judicious prospect.

            The transference of Capitalism’s limitless growth model onto the medically induced preservation of life has made it more challenging to actually die. The seemingly arbitrary notion of pitting money against this process, again, requires rethinking. “By law, Medicare cannot reject any treatment based upon cost. It will pay $55,000 for patients with advanced breast cancer to receive the chemotherapy drug Avastin, even though it extends life only an average of a month and a half; it will pay $40,000 for a 93-year-old man with terminal cancer to get a surgically implanted defibrillator if he happens to have heart problems too” (Fisher). The implications compound: “[l]ast year, Medicare paid $55 billion just for doctor and hospital bills during the last two months of patients’ lives” (Meyer).

What is offered here is not an abstruse or romantic conclusion, but neither is a sermonic profundity being replaced with the impersonal, sardonic tone of one who has given up because their optimism has been sufficiently eradicated. Instead, there is a certain submission to our fragility. Our conception of the limitlessness of resources and growth needs to be reconsidered; its implications end with a tragedy that can be avoided. Further, the transference of this system, not only in principle but in pragmatic ways, needs to be rethought. Indeed, a reckoning needs to be made: we are not limitless. Instead, we are born into fragility, but also into a cooperative, complex system, where small decisions in input have large impacts on output, and our existence is an inherent aspect of that system.

An Incongruously Varied Reality

Abstract

The impetus directing the hegemonic enterprise of physics is the conviction that there is an underlying feature of reality that, if found, would function to unify everything. Concealed beneath the camouflage of this quest is the assumption that reality is rational and ordered, and that it is inveterately held up by universally disseminated laws. The rationally ordered universe is associated with its Enlightenment heritage, and more particularly its Newtonian heritage. Similarly, orthodox economic theory operates from the assumption that the levels imposed onto reality function as a pyramid, reductively, so that what is true in physics devolves naturally into the social scheme. While the prowess of the Newtonian heritage has been shaken by Einstein’s relativity and the quantum revolution, both physics and economics have managed to perpetuate the axiom of the universal law and their conviction in the unification of reality. Operating from the trajectory that this notion is a contrived one, and that the very infrastructure of the hegemonic monolithic heritage, in which a complex system is nothing but the sum of its parts, requires restructuring into an adoption of diversity, complexity, and the motley expression of reality, an alternative reconfiguration of old ways appears to provide the best route.

Two Accounts of Technology

What might be frustrating about these brief accounts is…their brevity. I deal with ideas that are sophisticated, largely beyond my cognitive reach, and write about them with some sense of definitiveness.

A: Technology is beneficial

            Technology, modernly conceptualized, is the body of synthetic commodities that has developed, as if inevitably, within large and escalating parts of humanity, at such close proximity that the phenomenon might conceivably be understood as an extension coordinated with humanity; an inveterate relationship that does not appear to be slowing in its vivacity to be disseminated universally. The relationship derives, arguably, from humanity’s distinguishing trait: the capacity to manipulate. If this distinguishing trait, indicated by technology, is to be trusted as our species’ mechanism for survival, then it must be augmented with a beneficial orientation that anticipates this trait maintaining the survival of the human race: an optimistic outlook.

Physiologically speaking, humanity does not appear all that sophisticated, comparatively. In every capacity, physical strength, coordinated movement, speed, vision, hearing, sense of smell, efficiency, humanity measures up poorly against its counterparts in the animal kingdom. Considering such characteristics as the pigeon’s sense of direction, the cheetah’s speed, the agility and elegance of a cat, the visual acuity of the hawk, and the echolocation of the bat, a biological sonar that allows it to navigate and hunt in the dark, the human physique is, by far, inferior. Physiologically, humanity is evolutionarily distinct for its regular experience of back pain, owing to a skeleton poorly accommodated to erect posture. How is it, then, that humanity has managed to dominate the earth? We have made this our dominion primarily by manipulating and improving upon the physiological features of the animals that outmatch us. This manipulation has otherwise been termed technology. By the prowess of our trait we have surpassed the speed of any animal and acquired the capacity for flight, our invention of aerodynamic flight reaching not only transonic but supersonic speeds. And while accomplishing these feats we have also produced the technological equivalent of the bat’s echolocation, sonar, to provide a further example of the sort of manipulation that is taking place.

We have done more than this. We have emulated our metaphysical personas: gods, heroes, and the like. Accounts of divine figures that heal the wounded and impede death describe rituals that are practiced daily today, though they are no longer designated as transcendent invasions of salvation but as immanent modern capabilities derived from amassed knowledge of biology, anatomy, and medicine; we have supercomputers and technological capacities that, in some aspects, supersede the operations of the human brain, the most complex entity in the universe as far as we know. This reality dissolves the transcendental, as it suggests that what we have, essentially, is our capacity to transcend our current capabilities; that we have invented that which our ancestors ritually prayed for. These older traditions become designated as previous ways in which humanity manipulated its reality to adapt and survive. Perpetuating the transitions that have accelerated technological advancement will, irrespective of temporary setbacks and various harmful consequences, prove to be beneficial for humanity’s survival.

B: Technology is inimical

Technology is unending in its inventiveness: a machine with capabilities that have been invented by humanity but are now, simultaneously, reinventing reality and reinventing humanity. Terms such as “progress,” “advancement,” and their cognates are rhetorical alternatives for “domination” and “control” that perpetuate the escalation of an inimical, oppressive reality. The exertion of control that has functioned as the driver of modern technology, a non-neutral entity, is responsible for excessive deforestation, the depletion of marine life and other wildlife, and other environmental contingencies, as well as cultural destruction and assimilation.

Humans are, by far, the most invasive species on the planet. Deforestation has escalated across much of the world’s forested land. The force that is our drive for control expends the energy and resources of the planet. Not only this, but all of our inventiveness engenders an exceptional amount of waste, which is, again, not advantageous for the planet. It is indisputable that humanity’s exertion to dominate and control is inimical to the planet.

It is more than this. Cultures that are designated as “behind” for maintaining an organic relationship with the land are considered “primitive” relative to the dominant, “mature” culture of the technological revolution. These cultures have been destroyed, or otherwise assimilated into the dominant one.

Further, the dominant culture is insatiable in its desire for control, demonstrated by its turning in on itself: it invades the privacy of the very members of its ostensible “progress,” reading their emails, listening to their phone conversations, enabled to conduct full surveillance, monitoring the activity of its own members irrespective of how they feel or whether they are even aware. These few prominent examples suggest that the rate at which technology “advances” is the rate at which we are desecrating the land, subsuming culture(s), and losing freedom.

Neurons

“If we are to understand how the mind-brain works, it is essential that we understand as much as possible about the fundamental elements of nervous systems, namely, neurons.”[1]  

It will be helpful to have an understanding of what a neuron is and how it works, and to allow this to inform our conception of memory, and ultimately to shape our perception of our humanity.

For starters, the neuron is the essential building block of the brain’s function. Neurons consist of a series of parts that channel information via electrical and chemical signals. Signaling takes place at the synapse, at the end of the axon, where a neuron communicates with other neurons to construct neural networks. The essential components of a neuron are the dendrites, cell body, and axon. The dendrites receive signals and the axons send them. This network of communication should not, however, be thought of as random; that is, signals travel one way, and only one way, and are rather predictable.
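To make that one-way flow concrete, here is a toy sketch in Python (purely illustrative; the class, names, and numbers are invented for this example and it is in no way a biophysical model): dendrites receive a signal, the cell body compares it against a threshold, and the axon passes a weighted signal across the "synapse" to downstream neurons.

```python
# A toy sketch of one-way neural signaling (illustrative only, not a
# biophysical model): dendrites collect input, the cell body checks a
# threshold, and the axon fires toward downstream neurons.

class ToyNeuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold   # firing threshold at the cell body
        self.downstream = []         # neurons reached via the axon's synapses

    def connect(self, other, weight=0.5):
        # a "synapse": signals pass one way, from this axon to the other's dendrites
        self.downstream.append((other, weight))

    def receive(self, signal):
        # dendrites -> cell body: fire only if the threshold is met
        if signal >= self.threshold:
            self.fire()

    def fire(self):
        # axon -> synapse: pass a weighted signal to each downstream neuron
        for neuron, weight in self.downstream:
            neuron.receive(weight)

a, b = ToyNeuron(threshold=0.4), ToyNeuron(threshold=0.4)
a.connect(b, weight=0.5)
a.receive(1.0)   # a fires, and its signal crosses the "synapse" to b
```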

When Golgi accidentally discovered a way to stain the neurons in a piece of brain tissue, and was able to actually see these neurons, he noticed that the neurons appeared to be connected, forming a large interconnected network. However, when Cajal expanded on Golgi’s work, he discovered that the neurons do not actually touch; they are separated by a small gap called the synaptic cleft, so a continuous, fused network is not the way in which these neurons ought to be conceptualized. A synapse is the structure at which one neuron communicates with another. Central to the understanding of synapses is their plasticity; that is, through experience, synapses change.


[1] Churchland, Patricia Smith. Neurophilosophy: Toward a Unified Science of the Mind/Brain. (MIT Press: Massachusetts, 1986), p. 35. 

The Triune Brain

  The evolutionary structure of the human brain presents an amalgamation of older and more recent evolutionary structures. Its layers reveal the marks of humanity’s continuity and discontinuity with other species, causing the conceptual line between human and animal to blur. A predominant view in neuroscience has been that the brain encompasses three layers of evolutionary development. Neuroscientist Paul D. MacLean first termed this tripartite structure the “Triune Brain” when he proposed it in 1969 during his Hincks Memorial Lecture. The lowest level, the reptilian complex, is the unconscious part of the brain; it is responsible for essential controls such as body temperature regulation, reflexes, blood pressure, breathing, and heart rate, among other things. The middle level, the limbic system, otherwise termed the paleomammalian complex according to its evolutionary history, is responsible for emotional, parental, feeding, and reproductive behavior. The uppermost level of the brain is termed the neomammalian complex, a designation owed precisely to its recent position in evolutionary history; as one goes higher up in the brain, one encounters its more conscious aspects.

The neomammalian complex is the aspect of the brain that initiates discontinuity with other species, establishing what we consider to be the aspects that set us apart from the animal kingdom.

Why ID is considered unscientific

Science is the employment of methods within the trajectory of discovering natural phenomena. Inevitably, certain exclusive definitions of science have disallowed some things, over against others, from being determined scientific; all of which are marked off by the parameters laid down by the scientific community. For a hypothesis to be considered scientific, it needs to be able to undergo testing; if it is untestable, it is not to be counted as scientific. Perhaps due to the extent of discovery and the rigor of experimentation that accompanies science, the social status of a theory as scientific remains significant. A prominent example of this is Intelligent Design, the argument that all natural phenomena are the result of an intelligent designer. Proponents of this view have insisted that it is a scientific theory and that it ought to be taught in schools alongside Evolution. Intelligent Design has labeled itself scientific on the basis that it has exercised some techniques associated with scientific methodology, particularly observation. However, in spite of those convinced of the unmitigated elegance of the universe, and the conclusion that the only necessary implication is that it has been precisely constructed by an intelligent being, the scientific community at large has not been convinced. The inability of Intelligent Design to be falsified has determined it an unscientific theory.

William Paley

Perhaps it will be worth the space to discuss William Paley, who is considered the father of Intelligent Design (though he built off of the work of John Ray). William Paley held a particular influence on the science of the late eighteenth and early nineteenth centuries. More particularly, Paley was known for his popular argument for the existence of God that started with the natural world. Paley’s argument was that nature presents itself with a clear teleological structure and design, and that this could not have been accidental but must rather be derivative of an intelligent being. Famously, Paley employed an analogy that has often been called the watchmaker analogy. He argued that if you were to come across a watch, even if you did not have any previous knowledge of a watch, you would determine by its complexity and intricacy that it was designed by someone for a certain purpose. In this line of argument, if we perceive that there is complexity and design in the universe, and if this complex design demonstrates a predictable order, then it follows that it must have been designed. In Natural Theology Paley argues that “…when we come to inspect the watch, we perceive…that its several parts are framed and put together for a purpose…”[1] It follows, then, that the motion of the planets, and the structure and order present in nature, demand that their cause was the mind of an intelligent being.

 David Hume’s Critique of Paley’s Analogy

The Scottish philosopher David Hume proposed that the design analogy was essentially an imperfect one. Hume first argued that design does not always entail a designer; that is, an example does not necessitate what is normative. Simply because a watch is found and determined to be the product of intelligent design does not necessitate that another object, simply because it appears to have design similar to the watch, was designed by an intelligent being. Instead, it could be that certain mindless conditions in a natural process allowed for a complex design. In more contemporary times, the philosopher and cognitive scientist Daniel Dennett argued that we are simply not adjusted to thinking in terms of probability. Imagine:

“…you meet a gambler who claims that he can produce a man who…will win ten consecutive coin tosses. You take the bet, knowing that the odds against anyone’s winning ten straight coin tosses is 1,024 to 1. The gambler shows up the next morning accompanied by 1,024 men, who proceed to toss coins…until only two men remain for the tenth and final round, the winner…won ten straight coin tosses…there is no reason at all why he won, there is only a very good reason why somebody won.”[2]
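A quick back-of-the-envelope check makes Dennett’s point concrete: the probability that any particular, named person wins ten straight tosses is (1/2)^10 = 1/1024, yet a single-elimination tournament of 1,024 players is guaranteed to crown some ten-round winner. The small Python sketch below is purely illustrative and shows both facts.

```python
import random

# Dennett's point in miniature: the chance that a *named* person wins ten
# straight tosses is (1/2)**10 = 1/1024, yet a single-elimination tournament
# of 1024 players always produces *some* ten-round winner.

p_named_winner = (1 / 2) ** 10
print(p_named_winner)          # 0.0009765625, i.e. 1 in 1024

players = list(range(1024))
rounds = 0
while len(players) > 1:
    # each round, half the players "win" their coin toss and advance
    random.shuffle(players)
    players = players[: len(players) // 2]
    rounds += 1

print(rounds, players)         # always 10 rounds, always exactly one winner
```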

Further, Hume argued that even if the analogy turned out to be successful, it would not necessitate the assumption that there is only one intelligent designer, or that this intelligent designer is related to a particular religious organization. We would know only one thing about the intelligent being(s): they are intelligent in some way that is akin to human intelligence. This intelligent being could in fact be morally indifferent, or even imperfect in intelligence. A sort of final blow to Paley’s argument was that it incurred an infinite regress: if the analogy runs from human design to human intelligence, and from natural design to some ultimate intelligence, then it would follow that something else would be required to have created this intelligent being, because within the intelligent design model nothing that is designed is such without a designer. In other words, there can be no self-contained design. It should be noted that Paley was assertive in certain ways that were not necessary. For instance, he asserted, without clear argumentation, let alone evidence, that the apparent designer must be a person, and that this person must be God; the particulars are dubious and no argument is provided.

Darwin

Ironically, it was the sort of natural theology proposed by those such as Paley that set Darwin off on the trajectory that he took in his studies, which initially ran through theological studies, perhaps as a socially safer direction. Despite the sort of reputation that many who espouse Darwin’s theories today have had with religion, some exceptionally arrogant in their dealings with religious folk, Darwin spoke highly of William Paley’s natural theology. Despite Paley’s influence on the route which Darwin took in his research, his famous voyage on the Beagle was ostensibly a turning point in his understanding of the natural world. On this voyage Darwin became unconvinced of William Paley’s assumptions about the natural world. Instead, through his observations he saw an entirely different narrative underlying the natural world. Further, his concept of natural selection undercut the notion of an intelligent designer. In his autobiography, Darwin says:

“The old argument of design in nature, as given by Paley, which formerly seemed to me so conclusive, fails, now that the law of natural selection has been discovered. We can no longer argue that, for instance, the beautiful hinge of a bivalve shell must have been made by an intelligent being, like the hinge of a door by man. There seems to be no more design in the variability of organic beings and in the action of natural selection, than in the course which the wind blows. Everything in nature is the result of fixed laws.”[3]

Natural selection is the theory that has dominated the intellectual landscape since. Darwin posited a picture of the world that is built from the bottom up. Many have insisted that Intelligent Design does the same sort of thing, observing the natural world and constructing theories from it. However, as has been noted before, to posit that there is design in the cosmos does not necessitate that there is a designer. The reason Evolution is considered more scientific than Intelligent Design is that Evolution accounts for more of the data than Intelligent Design. The probabilistic workings of natural selection allow for law, order, and design to some degree. The developments in evolution are not in fact random, but rather work out through a selective process. Intelligent Design, by contrast, presents an incoherent account of the inelegance inherent in the natural world. Consider, for instance, vestigial organs, a profound problem for Intelligent Design.
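As a rough, hypothetical illustration of that last point, that selection is cumulative rather than blindly random, consider the following toy simulation (it is not drawn from the essay’s sources; the target word, mutation rate, and population size are arbitrary choices): random variation plus retention of the fittest copies converges on a target string within tens of generations, something blind shuffling would essentially never achieve.

```python
import random

# A toy illustration of cumulative selection: mutate copies of a string,
# keep the copy closest to the target, and repeat. Selection, not chance,
# does the work.

TARGET = "VESTIGIAL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    # count positions that already match the target
    return sum(a == b for a, b in zip(candidate, TARGET))

current = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while current != TARGET:
    # produce mutated copies and keep the fittest one
    offspring = [
        "".join(c if random.random() > 0.1 else random.choice(ALPHABET) for c in current)
        for _ in range(100)
    ]
    current = max(offspring, key=fitness)
    generation += 1

print(generation, current)   # typically converges in a few dozen generations
```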

However, there are indeed problems faced by those who espouse evolution. What seems, then, to be the primary reason for suggesting that Intelligent Design is not a proper scientific theory is its inability to be falsified; that is, the existence of an intelligent designer cannot be tested. Falsification is a central issue in scientific methodology primarily because it is the measure of whether a theory is capable of being tested. It is this central notion of testability that engenders the scientific community’s exclusion of Intelligent Design from the status of science. To be specific, it is not design itself that is considered untestable but the intelligent being, who goes unobserved and for whom no possible means of observation can be projected.


[1] A.S. Weber, ed. 19th Century Science: An Anthology. (Broadview Press: 2000), p. 19.

[2] Timothy Ferris. The Science of Liberty. (HarperCollins Publishers: 2010), p. 263.

[3] Charles Darwin. Autobiographies. (Cambridge University Press: 1986), p. 50.

On Emotions and Reason

A philosophical account, to be continued neuropsychologically and physiologically

In the history of philosophy, reason has commonly been distinguished from the passions. Descartes categorizes these feelings as lower aspects of the human, using the phrase “animal spirits.” Following this tradition, Leibniz and Kant both had similarly negative conceptions of the emotions. Philosophers have instead had the disposition to prefer the ostensibly higher aspect of the human mind: reason. However, one can find prominent philosophers, such as Hume, who come to the defense of the emotions. Hume famously wrote that reason “is, and ought only to be the slave of the passions.” Yet notice that Hume still follows this tradition in bifurcating emotions and reason. The tradition finds continuity into the twentieth century with William James’s distinction between emotions, as physiological disturbances, and cognition. It is similarly continued today among some philosophers and psychologists.

However, there is another tradition, dating back to Aristotle. Aristotle provided a systematic coordination between anger and judgments. In this arrangement, according to Aristotle, it would be irrational for one not to be angry in particular scenarios. Contra William James and the tradition from which he proceeded, this separate route in the history of philosophy has won out in recent times through the cognitivist theorists. A modification of the cognitivist approach might be preferred, however: the admission that emotions are not constituted by judgments alone but include feelings and the accompanying physiological responses. It remains the aim of such a modification to readjust our conception of emotions as animalistic, unintelligent responses. It is the thesis of this approach that emotions are essentially cognitive.

Cultural icons

The agitation present within the person who finds their self uncomfortable among others, and not among others alone, but even in private, has constructed a picture of their self that is loaded with negation and obligation: “I am not…” and “I need to be…” Those who infuse this projection are other personalities that, even under denial, fit their selves within a particular rubric of exclusive thinking. One does not need to feel insecure to be insecure; there is a certain denial that attaches itself to insecurities that are believed to be normative patterns of thinking and behaving. Because of this, they are not often considered insecurities, and they have been encoded through unconscious obedience to social paradigms. What is accepted in one setting over another, and why, runs through our unconscious network, which is trained to catch cues and assess environments.

Why is this a question of interest to us? The way we view ourselves is often relative to how we project that others perceive us, and this in turn is our own projection of them. What engenders the projection? Perhaps pattern recognition: recognition of consistent behavioral patterns that develop into trusted notions of others, which then allow for comments at certain deviations, such as “that person was not their self tonight.” This pattern is not immutable, however. It is not gradual changes that cause such comments; one who has known a particular person since childhood, now fully grown, does not out of the blue become dumbfounded by that long, gradual change, unless, of course, they have not seen the person for a long period, in which case the gradual change would not appear so gradual to them. My assessment of this growth is that it ought to be one of expansion. I do not think that this entails that one is no longer their self, but that one is expanding the self.

Neurological studies suggest that much of who we are is not fixed through genetics. The billions of neurons and trillions of synapses are too grand for innate fixation; the brain has a certain degree of plasticity. I want to consider the previous thought in light of this plasticity. I want to suggest that plasticity ought to allow room for reconstructing projections. Further, I want to suggest that the normative patterns are constructions themselves: the general inflection of voice that we share, or posture, and so on. We would not deny the cultural continuity that underlies much of our behavior; deviations are probably sub-cultural. It is not a strict sort of thing, but allows for synthesis. These cultural bits can be thought of as acting; that is, much is socially constructed, and we have certain roles, or rather can choose certain roles through plasticity and expansion, and at times we become stuck in not knowing our lines, or not attuned to the larger drama.

 

 

 

Dismissive Identifiers

The shape of our intellectual rhetoric is well supplied by a Scholastic heritage. “That Liberal,” “those Conservatives,” “that’s so Platonic,” and so on. These common examples exemplify what I mean by Scholasticism before I tell you directly what I mean by Scholastic (using Scholastic tools to disenchant your idea of its project is perhaps evidence of its inveterate hold). By Scholastic, then, I mean the sort of rhetoric that is replete with identifiers meant to stand in as representations of a canonical tradition, ideology, and the like. Underlying its interest is a competitive possession of orthodoxy by means of exclusivity. This is not to suggest that some things ought not to be dismissed, but rather to unearth the extent and manner in which they are dismissed. In certain discursive spaces, to use particular identifiers will impede all manner of conversation, as this sort of thing does inherently. To represent a unit (of words, ideas, people, institutions, etc.) as “conservative,” for example, is to implicitly suggest control over everything being represented. Identifiers used this way are not all bad, of course; they can provide a platform for conversing, but only in the event that we think about the identifiers in the same way. More often, however, they breed more negative than positive. Identifiers, normally used by one in a position of knowledge control (put bluntly), are often used in a shallow, manipulative way. Not always. Either way, what is important is interaction. The identifier is essentially a name on a box. Once the box is opened, the rhetoric may begin to change, and the conversation may be rendered more accessible and helpful.

Computational Model of the Brain

There are two sides to the metaphor coin. On the one side, we commonly speak of metaphors as rhetorical devices specific to certain literary genres, primarily poetry; on the other side, the side of the coin that we seem not to look at, we are, without knowing it, in constant use of metaphor. As George Lakoff and Mark Johnson suggest in Metaphors We Live By, “[o]ur ordinary conceptual system, in terms of which we both think and act, is fundamentally metaphorical in nature…[o]ur conceptual system thus plays a central role in defining our everyday realities.”[1] At some point these conceptual systems take root, and they ought not to be considered innate. Lakoff and Johnson are calculated in their terms: notice that metaphor is not about a one-to-one correlation of a thing to another thing, but is a way in which we go about conceptualizing a thing. If this is the case, there are going to be points at which the metaphor will not entirely explain the thing for which it is functioning metaphorically. This is elementary stuff to be aware of as we begin thinking about the brain as computational. In what way has the computational metaphor explained the workings of the brain? In what ways does the metaphor fail? To emphasize its failings is not to suggest that it does not work, but to assist in understanding the relationship that these things have with one another in our conceptualization of them.

For one thing, one might hear as an objection to this conceptualization an emphasis on the “clear” ways in which a brain is not a computer (or not a computer that we have now); this can be quickly tested by pouring water onto a computer and then onto a brain. The short-circuiting reaction of the computer will be an obvious indication of the distinction between the two things. However, this is not what is meant when conceptualizing the brain in computational terms. Further, some might object to the notion because they will insist that they are being reduced to all sorts of terms that are not part of their normal conception of their humanness, and will charge you with proverbially “sucking the humanity from them.” It should be clear that the computational model does not mean that one should picture a modern-day computer as a brain, or the reverse. This is a common misconception. Instead, the computational model, functioning as a metaphor, has a specialized computing function in mind, something along the lines of the Turing Machine[2] described by Alan Turing: a machine that manipulates symbols to compute input and form output. This model, in various forms and modifications (not all derived from the Turing Machine), is prominent in Neuroscience and its methods. What sort of trajectory has the computational model placed Neuroscience in, methodologically and conceptually?
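For readers who have never seen one, here is a minimal, hypothetical sketch of the sort of symbol-manipulating device Turing described (the rule table and the bit-flipping task are invented for illustration; this is not a model of the brain, only of the kind of computation the metaphor has in mind): a head walks a tape, and a fixed table of (state, symbol) rules tells it what to write, where to move, and which state to enter next.

```python
# A minimal sketch of a Turing-style machine (illustrative only): a read/write
# head walks a tape, and a fixed rule table maps (state, symbol) to
# (symbol to write, direction to move, next state). This toy machine flips
# every 0/1 on the tape and halts at the first blank.

def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# rule table: in state "start", flip the symbol and move right; halt on blank
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("0110", rules))   # -> "1001_"
```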


[1] George Lakoff and Mark Johnson. Metaphors We Live By. (University of Chicago Press: Chicago, 1980), p. 3.

[2] This is indeed the model that is in mind when the computational metaphor is discussed in the Philosophy of Mind. However, someone like Jerry Fodor will point out the obvious flaws that even this idealization has as a model of the brain. Essentially, Fodor argues that the Turing Machine is a closed system, while organisms are in constant interactive exchange with their environment.