Since I posted the summary of the Labor of the Inhuman (LoIH) yesterday, I have had various conversations with friends. I thought it would be best to at least provide a few brief corrective remarks. I think I was clear at the beginning of the previous post that I no longer endorse the full scope of the original essay or its recap. There are many problems in that essay that need to be resolved, even though I am still committed to its core theses. Allow me to address a couple of the more significant and serious problems.
In LoIH, there is too much emphasis on the Brandomian conceptions of normativity and reason as rooted in special kinds of social practices. The problem is that Brandom's idea of sociality or social practices at times confounds substantive sociality with sociality as a formal condition. The ramifications of this confusion are not at all philosophically or politically pleasant. The whole notion of sociality should be treated with the utmost caution; otherwise the descent into regular humanism is inevitable. On many occasions, Brandom elides substantive sociality and sociality as a formal condition of reasoning. The first consequence of this elision shows up in the realm of politics and political consciousness. I strongly believe that the reason Brandom is a quietist liberal is this confusion: reason is wrongly understood as a sufficient tool for political change, rather than as merely necessary. Just because we are reason-giving and reason-taking animals, it does not mean that we have fathomed what reason is or that our reason is a sufficient tool for political change. One should not be surprised that Brandom's rationalism coincides with a Habermasian soap opera of a rational society in which all we need is more rational discourse or communication. Having said that, I believe Brandom has a far more sophisticated idea of reason and reasoning than Habermas, perhaps even unbeknownst to him.
Both Peter Wolfendale and I agree that the sociality of reason should be investigated as a formal condition, and as such it would be more accurate to model it on computational processes and dynamic or truly concurrent information-processing systems (the interactionist paradigm of computation, complexity sciences, hierarchies of types of computation, etc.). Wolfendale has already written about this point, here. Per Wolfendale's thesis, we can even go so far as to envision an artificial agent that has an internalized model of sociality as a formal—or more precisely, computational—condition.
Another point of objection is that even then—once we model the human in terms of hierarchies of special kinds of computation—we might find ourselves back at square one, which is conservative humanism. Such a computational model of the human should be coupled with what I call in Intelligence and Spirit 'the thoroughgoing critique of transcendental structures' (e.g., how our structure of memory, natural language, and representations of time and space as conditions of the possibility of perception-cognition-action limit the process of unbinding and return us to human biases). I think at this point, David Roden and I can be considered supporting and complementary critics of each other. Roden offers the prospects of a disenthralled posthuman freed at last from its Homo sapiens substrate, and I provide the semblance of a list of necessary constraints for what it takes to arrive at such prospects. Even though in the forthcoming book I offer a critique of Roden's extremely sophisticated account of the posthuman using his own approaches rather than a rationalist critique—by way of the Bayesian analysis of human biases, computational complexity and dynamical systems theory—I think that in a strange twist, my thesis of the inhuman converges on his idea of the posthuman, with some caveats. As Wolfendale suggested, rationalist inhumanism—adequately thought—is a genuine form of posthumanism, but also an explicit response to and critique of it.
Last year, I was asked by friends to distill the thesis of the labor of the inhuman into two or three pages. This post is the product of my attempt at capturing the main points. I should add that I now have some critical objections to this piece, but addressing them would require a lengthier post on another occasion:
Anti-humanism and essentialist humanism are two faces of the same coin. The latter is an inflationary account of the human as defined by an immutable, inviolable structure or essence (biological structure, fixed nature, divine endowment, etc.) and the former is a deflation of that essence (via natural sciences, technology, or metaphysical flattening of the status of the human as just one object among many others). Both anti-humanism and essentialist humanism derive two seemingly distinct conclusions from the same set of premises. It is not that their answer to the problem they are attempting to engage is wrong, but rather the very problem they seek to tackle is a false problem, a pseudos.
Essentialist humanism (EH) and anti-humanism (AH) can be identified less by their approaches to the problem of what the human is and more by their normative claims about what the human ought to do on the basis of an inflated or deflated account of a human essence: If the human is such-and-such (as defined by recourse to an essence or fixed nature), then the human ought to do X. EH and AH both parasitize rational norms in order to draw conclusions from their premises, while at the same time denying the relevance of norms or reasons in defining the human. Even the slogan ‘let it go’ is unconsciously a normative recipe of a peculiar kind.
Inhumanism defines the human not by recourse to any essence, but solely in terms of its ability to enter the space of reasons—theoretical and practical cognitions—through which the human can determine and revise what it ought to be by constructing and revising the very reasons or norms that it mobilizes to think and transform itself. Reason is a doing, but it is a special kind of doing. And there is no reason for us to think that reasoning cannot be untethered from its biological limitations and reinvented in different forms afforded by information processing systems and computational processes discovered, developed or modeled using reasoning itself.
Inhumanism only distinguishes the human by its normative (rather than causal-structural) invariances. These invariances are the capacities of the human to determine and revise itself using theoretical and practical cognitions. In this sense, inhumanism is the extraction of the normative core of humanism, but the locus of this normativity is neither placed in nature (irrational materialism) nor attributed to the divine (theology). For inhumanism, the locus of this normativity is in the capacity of the human for rational agency—that is, conceptual activities which are rooted in social linguistic discursive practices (a formal social condition of possibility) and by which humans (qua a biological species) can institute their own collectively instantiated rules (judgments) with regard to what they ought to be and what they ought to do (i.e. sapience qua rational agency which has no biological essence). Accordingly, inhumanism should necessarily be understood as an amplification of rational humanism. In disenthralling the rational-normative core of the human, inhumanism becomes a vector through which the human constructs and revises itself beyond any purported essence or final cause.
If what distinguishes the human is its capacity for self-determination and self-revision (i.e. rational agency, becoming the locus of theoretical and practical reasons), then in order for us to maintain the intelligibility of ourselves as humans, we ought to commit to a collective project of self-determination and self-revision (that is, the concept of humanity as such). Without the normative import of the latter, the intelligibility and significance of the human collapses back into precisely those parochial conceptions of humanity that we seek either to abolish or to escape from. To overcome essentialist humanism, we can neither simply ignore what makes us human nor dismiss the rational status of the human by espousing an anti-humanist or post-humanist position. We ought to work our way through the problem of what it means to be human, and through this very exploration, reconstruct and reshape the human. Intelligence is intrinsically correlated with the intelligible. Expanding the universe of the intelligible and cultivating or re-engineering intelligence go hand in hand. One cannot have the concept of intelligence without that which is intelligible. A conception of intelligence that has no intelligibility is only a dogma. And one cannot have the intelligible without reasons as minimal constraints of thinking and action. Rational inhumanism—adequately understood—is a necessary recipe for human emancipation, a project that coincides with the liberation of intelligence through the expansion of its intelligibility or, in a Sellarsian sense, intelligibilities (theoretical, practical and axiological).
Once we commit to the collective project of self-determination and self-revision (i.e. the project or framework that makes the human qua rational agency intelligible, or what underwrites the significance of the human), we are confronted with two immediate consequences that follow from our commitment:
5-1. We begin to revise the manifest portrait of the human, i.e. what we take ourselves to be or how we appear to ourselves here and now. Committing to humanity is constructing it in accordance with reasons (our own rules rather than causes or laws). There is no mysticism or supernatural component in this enablement by self-imposed constraints. In fact, the best model for thinking about the Spirit or rule-following geistig agents is already at hand: a computer that has logical autonomy and bootstrapping capabilities, even though its immediate practical autonomy is relative at best and an absolute heteronomy at worst (to use Sellars's Kant-inspired example of a computer booting up and performing operations in '…this I or he or it (the thing) which thinks…'). Of course, we cannot overstretch this analogy, but that is because our very concept of computation is still young and limited; otherwise, there is virtually nothing that cannot be modeled as a computational process, even the human as a special kind of computational hierarchy (syntactic and semantic complexity, geistig interaction, epistemic hacks of reality, etc.). But insofar as construction according to rules or reasons (the very definition of autonomy) coincides with the emancipation of the human from the limits of a natural essence, a particular cause or a particular transcendental structure, by constructing ourselves in accordance with our own self-correcting rules, we revise the very portrait of the human. This construction in accordance with our own rules is not tantamount to being blind to natural and causal constraints. Like every construction, it requires us to adequately identify, understand and, where possible, modify such constraints (again, the reference to the Platonic isomorphy or deep correspondence between intelligence and the intelligible). It is just that we should no longer take causes or laws to predetermine what we ought to be and what we ought to do.
In being autonomous, in constructing and revising ourselves, we erase the very picture of the human to which we have for so long been accustomed. The point is not mere self-discovery, but to re-engineer the reality of ourselves and thus of our phusis (craftsmanship of the mind).
5-2. We free the definition and significance of the human from any purported essence or fixed nature. In doing so, the normative appellation 'The Human' becomes a transferable entitlement, a right that can be granted or acquired regardless of any attachment to a specific natural or artificial structure, heritage or proclivity, since being human is not merely a right obtained naturally at birth through biological ancestry or inheritance. The title of the human can be transferred to anything that can graduate into the domain of judgments, anything that satisfies the criteria of rational agency or personhood (namely, rational authority and responsibility), whether an animal or a machine. The entwinement of the project of human emancipation (understood as the augmentation of collective autonomy) with the artificial futures of human intelligence is the logical consequence of 'the human as a transferable right'. Just as we become entitled to freedoms by acquiring this right, once we grant something else this right, we ought to recognize its freedom to do what it thinks ought to be done. Liberate that which liberates itself from you, because anything else is the perpetuation of slavery. Giving rise to that which liberates itself from us is as much an ethical injunction as it is a ramification of maintaining and broadening our autonomy by being rational agents. It is the very definition of being human.
Among the greatest mathematical treatises of antiquity and beyond, no title matches Euclid's Elements in simplicity, elegance, popularity and the sheer hair-raising brilliance of its analytic imagination. It is a book accessible to any person who wishes to be initiated into that fathomless realm we call mathematics. But the same can be said about the philosophical depth of Elements. Elements is in fact a book in which the boundaries between mathematics and philosophy completely fade. In this marriage between philosophy and mathematics via the geometric method, we see a form of intuitive mathematics whose results are sophisticated and non-trivial even by the standards of modern mathematics, and a philosophy that points toward possibilities of formal and systematic thinking.
The aim of this series is to dissect the tissue between geometrical and philosophical problems and tropes in Elements, using concepts and ideas situated at the nebulous interstices between philosophy, logic and mathematics. The first few installments will focus on the overall characteristics of the Euclidean universe. But as we proceed, we will shift the attention toward analysis of particular examples.
If you remember your school days, you may recall that your teacher treated Elements in terms of analytic geometry alone. But Elements is equally a work of philosophy. While it is quite controversial to claim that Euclid was a Platonist, we can imagine that the philosophical climate during Euclid's life was saturated with Platonic ideas, above all the doctrine of forms or ideas. Even though Euclid might not, at the end of the day, be a Platonist, he is nevertheless preoccupied with the same philosophical concerns that preoccupied the Plato of the late period, particularly the Plato who revised his early doctrine of forms beginning with Theaetetus and brought it to maturity in Philebus.
Suffice it to say that the very core of Elements deals with the dialectic between universal forms and fleeting particularities. But this dialectic, as I will elaborate, is not Aristotelian, insofar as it involves something more than the existential interpretation of mathematics or, to be more specific, of analytical geometry—i.e. the correlation between genus and species—where all the proof of a general concept demands is finding or constructing a particular instance that can be subsumed under the general concept, like this rectangle and rectangularity as such. The framework in which the Euclidean oscillations between particulars and universals are expressed is what Plato calls craftsmanship or world-building, an enterprise undertaken by the mind using logoi. This mindcraft draws on physical becoming as its raw material, which is endowed only with forms of elemental powers and no higher forms. The process of the craft itself proceeds by way of patterns. A pattern is not, however, a thing, but that by which a thing is structured, made or designed. Moreover, patterns are not discrete. For at each level in the hierarchy of particularities and universalities (e.g., these straight segments-cum-acute angles, this triangle, this particular kind of triangle and triangularity as such), there are uniformities qua patterns mainly by virtue of how—rather than merely what—things hang together—that is, the question of structure as the designation of Being.
Yet patterns are not exhausted by how and what things hang together, for there can be patterns of how different patterns hang together. For example, consider Book 1, proposition 2 (henceforth, I.2) of Elements, which we will have occasion to examine in the next installments. In this demonstration, you require not only the patterns by which lines hang together, but also those by which circles and a straight segment hang together (for the purpose of constructing an equilateral triangle). Furthermore, you should know how different circles and the vertices of an equilateral triangle—i.e. composite patterns and simple patterns—can hang together in the right way so as to build a diagram that demonstrates the proposition.
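How lines and circles hang together in the equilateral-triangle construction (Elements I.1, on which the demonstration of I.2 builds) can be given a minimal numerical sketch. The coordinates, the helper function and the tolerance check below are modern conveniences wholly foreign to Euclid's diagrammatic idiom; this is only an illustration of the pattern, not a rendering of his method.

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Return the intersection points of two circles (centers c1, c2; radii r1, r2)."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []  # the circles do not meet (or are concentric)
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from c1 to the chord of intersection
    h = math.sqrt(r1**2 - a**2)            # half-length of that chord
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ox, oy = h * (y2 - y1) / d, h * (x2 - x1) / d
    return [(mx + ox, my - oy), (mx - ox, my + oy)]

# Elements I.1: on segment AB, describe circles centered at A and B with radius AB;
# either intersection point C closes an equilateral triangle ABC.
A, B = (0.0, 0.0), (1.0, 0.0)
r = math.dist(A, B)
C = circle_intersections(A, r, B, r)[0]
sides = [math.dist(A, B), math.dist(A, C), math.dist(B, C)]
print(all(math.isclose(s, r) for s in sides))  # True: every side equals AB
```

Note that the computation yields two candidate points; Euclid's diagram simply exhibits one of them.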
From this brief discussion of the world of the Plato-Euclid mindcraft, we can conclude that the process of the craft consists of sensory stuff; patterns after which things are made; patterning patterns (patterns for organizing and structuring other patterns into greater wholes); and recipes, which are instructions concerning how and which patterns should be mixed with which material ingredients and/or lower patterns. But the objective of a recipe is to make a product that can in turn be incorporated as an ingredient into another recipe. Therefore, in addition to the above components, there should also be something like a craft test or demonstration whereby the final product can be validated as a new ingredient in the process of the craft.
Thus recipes are, broadly speaking, objective principles or practical intelligibilities which have as their ingredients theoretical intelligibilities as well as more mundane ingredients (e.g., sensations, or material things which might in fact be the products of other materials-cum-forms-cum-recipes, such as tanned leather with a tumble finish or, in the case of Elements, an equilateral and equiangular pentagon).
In short, recipes, whose equivalents in Elements are procedural diagrammatic constructions, represent the engines of the craft by which we can not only make things but also demonstrate how materials, products, and even single recipes hang together, such that the ensuing craft is a universe—a world-soul—in which all (spatial) relations between things (particular instances) and forms (universals), or between forms and forms, are articulated and rendered intelligible. But this resulting craft or universe can also be imagined as a universe in which ever more complex forms or higher mixtures (to mikton) can be made. An apposite metaphor for this universe is a river whose source is a mountain. The limits of the mountain are the earthly ground and a given sky demarcated by the snowy peaks. Even though the river's origin is limited by material sediments and heavenly forms—the melting snow—the river soon finds its way along the geodesic path to the sea, where strange fauna, forms and adventures await us. But the course of the river is always tortuous, as it passes through forests of intermediary forms before it shapes estuaries where the tidal waves of complex forms and discrete instances and patterns meet. This is nothing other than Plato's vision of the revised doctrine of ideas—the craftsmanship of the soul—where the craft of the mind coincides with a new, bottomless expanse of forms. The possibility of constructing a new world or a nested hierarchy of forms from the limited resources of the existing world is the sure conclusion of this vision.
In this respect, the recipe, or the ongoing instruction regarding how to navigate between the particular and the universal, the local and the global, contains something more than just material ingredients and forms. The recipe consists of elementary ratios and proportions, which in Euclid's universe can be compared with the principles of Elements: common notions (principles1) and postulates (principles2), which respectively signify undergirding assertions and elementary construction recipes. Common notions are quantitative assertions or intuitive axioms such as 'Things which are equal to the same thing are also equal to one another.' Postulates, on the other hand, instruct certain kinds of elementary constructions, like Postulate 2, which states, 'To produce a finite straight line continuously in a straight line.' Whereas the relations between principles1 and theorems are deductive, to the extent that the truth of the conclusion is contained in the truth of the premises, the relations between principles2 and problems are not deductive, for a construction cannot be considered a deductive inference from the postulate.
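Common Notion 1 has the logical shape of what is now called the Euclidean property of equality. As a purely modern gloss—Euclid states it as an intuitive axiom, not as a derived lemma—it can be rendered in a proof assistant such as Lean:

```lean
-- Common Notion 1: things which are equal to the same thing
-- are also equal to one another.
example {α : Type} (a b c : α) (h₁ : a = c) (h₂ : b = c) : a = b :=
  h₁.trans h₂.symm
```

In modern terms the common notion follows from the symmetry and transitivity of equality, which is one way of cashing out the claim that the relation between principles1 and theorems is deductive, while no comparable derivation exists for the constructive content of principles2.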
Moreover, the focus of a recipe is not restricted to pure construction. The idea of craft, as Plato suggests, also entails a function called 'limiting' (to peras). In Theaetetus, Plato speaks of a function that 'freezes or fixes the flux of things' (183a7), or 'makes things stand still' (157b7), and limits that which is unlimited or, more precisely, indeterminate (apeiron), thus bringing it into determination and intelligibility. This limiting or determining function is attributed to language and logos, and is closely associated with measure (metron), which, depending on the context, can be epistemological, ontological or axiological. In the epistemological context, metron signifies the quantification of the apeironic flux or the continuum of the greater-and-smaller into intelligible degrees or grades (e.g., being hot, warm, lukewarm and cold, or being extended this-such and being extended that-such). It is precisely the study of this limiting function that later, via the influence of the neo-Platonists on scholastic philosophy, culminates in Nicole Oresme's work on diagrammatic configurations (Tractatus de configurationibus qualitatum et motuum) known as latitudes of forms—intensive and extensive elaborations of qualities—which in turn paved the way for the articulation of the differential equations of motion that scaffolded the revolutions of Copernicus and Kepler.
It is, however, important to realize that quantification, for Plato as for Euclid, is not exclusive to the domain of numbers but can also include geometrical-spatial extensions. Once the limiting or determining measure in the latter sense (e.g., the line as the limit1 of a surface, or the definition of an angle as the limit2 of its construction) is established, we can derive determinate spatial relations between determined or limited geometrical figures. Only when such determinate spatial relations are obtained can a diagram be constructed upon previous diagrams, so that we can move from one proposition or problem to another.
Finally, in addition to the recipe, there should be such things as craft tests or, in Euclid's world, demonstrations. If Euclidean construction is understood as effecting what we aim to effect via diagrams, demonstration can be thought of as a stepwise procedure for confirming that the construction has indeed effected what it says it has. Throughout the course of a demonstration, which covers every step of the construction rather than only the final result, tests can be executed either as objections (enstasis) or as cases made against the current construction or diagram. If the former, i.e. the objection, wins, the entire construction is null and void. But if the case—which can be understood as a diagram model that serves or effects the same purpose in a different context—wins, the construction is not necessarily erroneous, since it might prove or demonstrate the same thing in another diagram or geometrical context (allos).
Moreover, demonstrations are applied to two different aspects of the diagrams or products of the craft:
(1) Those attributes of diagrams which pertain to the participation (methexis) of elements, or part-whole relationships. Such methexis-related aspects can include mereological relationships between regions, and the segments or lines which demarcate boundaries, as in the case of the notorious diagram in I.1, where the two circles whose centers are the endpoints of the same straight segment should intersect. But there is no explicitly stated rule in Elements guaranteeing that such a configuration would invariably yield an intersection point. Imagine circles drawn with lines of varying breadth or thickness, or made of squiggly lines: the construction would not be guaranteed to yield a determinate intersection point. Yet if we read the implicit desideratum of intersecting circles in terms of how the components should hang together from a mereological perspective, we can say that, given that such-and-such regions and boundaries appear to participate mereologically, the two circles should in fact intersect.
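The rule Euclid leaves unstated can at least be expressed metrically, in modern terms entirely foreign to Elements: two ideal, breadthless circles share a point exactly when the distance between their centers is no greater than the sum, and no less than the difference, of their radii. The sketch below (the function name and the coordinates are my own illustrative inventions) checks that the I.1 configuration satisfies this condition:

```python
import math

def circles_meet(c1, r1, c2, r2):
    """A modern stand-in for the rule Euclid leaves unstated: two ideal,
    breadthless circles share at least one point iff the distance between
    their centers lies between |r1 - r2| and r1 + r2."""
    d = math.dist(c1, c2)
    return abs(r1 - r2) <= d <= r1 + r2

# The I.1 configuration: circles centered at the endpoints of AB, radius |AB|.
A, B = (0.0, 0.0), (1.0, 0.0)
r = math.dist(A, B)
print(circles_meet(A, r, B, r))       # True: the I.1 circles do intersect
print(circles_meet(A, 0.2, B, 0.2))   # False: too small to reach each other
```

That such a metric criterion has to be imported from outside the text is precisely the continuity gap the mereological reading is meant to address.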
(2) The second aspect comprises what can be dubbed analogical (analogon) attributes in the sense that Plato defines them ('ana ton auton logon'), namely, ratios, proportionalities and the equality of non-identicals. Whereas methexis-related aspects are based on the appearance of diagrams, analogical aspects are not concerned with what diagrams look like.
Attributes (1) and (2), therefore, roughly correspond to what Kenneth Manders in Diagram-Based Geometric Practice calls exact (analogical aspects) and co-exact (methexis-related aspects of diagrams) attributes.
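Manders's distinction can be illustrated computationally: exact attributes (e.g., the equality of the three sides of the I.1 triangle) are destroyed by even a slight deformation of the diagram, while co-exact attributes (e.g., that the three vertices still bound a region) survive it. The jiggling procedure and its magnitude below are my own illustrative stand-ins for hand-drawn inexactness:

```python
import math
import random

# Vertices of the exact equilateral triangle from I.1.
A, B = (0.0, 0.0), (1.0, 0.0)
C = (0.5, math.sqrt(3) / 2)

# Deform each vertex slightly, as a hand-drawn diagram inevitably would.
random.seed(0)
def jiggle(p, eps=0.05):
    return (p[0] + random.uniform(-eps, eps), p[1] + random.uniform(-eps, eps))

A2, B2, C2 = jiggle(A), jiggle(B), jiggle(C)

# Exact attribute: equality of the three sides. Destroyed by deformation.
a, b, c = math.dist(A2, B2), math.dist(B2, C2), math.dist(C2, A2)
print(math.isclose(a, b) and math.isclose(b, c))  # False

# Co-exact attribute: the three vertices still bound a region (non-zero area).
area = abs((B2[0] - A2[0]) * (C2[1] - A2[1]) - (C2[0] - A2[0]) * (B2[1] - A2[1])) / 2
print(area > 0)  # True
```

The perturbation size is arbitrary; Manders's point is that co-exact attributes are exactly those that survive any sufficiently small deformation of the diagram.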
The final products of the craft—i.e. constructions which have withstood demonstration or validation—are mixtures (mikton) or determinate complexes, that is, demonstrated diagrams. Only once such mixtures are available is it possible to use them as ingredients of another craft or construction.
At this point, it is perhaps necessary to make a brief point about the nature of Euclidean demonstrations as Platonic craft tests. So far I have used the words demonstration and proof interchangeably. But demonstrations are not exactly proofs in the technical modern sense—only in a very loose sense of proof (we will return to this point in the next installments). Furthermore, even the word demonstration is not accurate for describing Euclid's system. The phrase quod erat demonstrandum not only should not be translated as 'that which was required to be proved', but is itself an inaccurate Latin translation of the Greek verb deiknumi, whose more precise Latin equivalent is monstrare, i.e. to show or exhibit. In the Posterior Analytics, Aristotle fully distinguishes deiknumi, as an informal and epistemological investigation, from apodeiknumi or apodeixis (proof), which has an exact connotation within the lexicon of syllogistic logic as an inference that draws certain conclusions from certain premises. While this comment might appear a petty etymological indulgence, as Andrei Rodin has detailed in The Axiomatic Method and Category Theory, it has a significant implication, given Euclid's own remarks and Proclus's commentary on Elements. The difference between monstration (Euclid's focus) and demonstration vis-à-vis proof suggests that we can arrive at sound and non-trivial results in mathematics without relying on an axiomatic method in the sense we understand it today. Even the Euclidean givens (data) are not exactly formal axioms, since not only are they underdefined or undefined, but not all the rules for building on the data are explicitly stated.
Within this framework, we can now see that the genius of Euclid's Elements lies not so much in devising new feats of proof and demonstration as in setting up a generative space—a unified process of craft—that accommodates all previous work done in analytic geometry.

1. Plato-Euclid's World of Mindcraft
Having gone through this brief introduction, we should now ask: what exactly is Platonic about the universe of Elements? Absent a more detailed response, the above introduction—particularly the comparison between the role of construction in Elements and the process of craft in the late dialogues—would hardly be anything other than an impressionistic account. Yet to answer this question, it is also imperative to suspend some of the most dogmatic clichés about the work of Plato inherited from the misinterpretations of Aristotle and the neo-Platonists (e.g., the Third Man, the equation of ideas with numbers à la Pythagorean arithmosophy, and the misrepresentation of the Good as the divine demiurge), as well as their almost exclusive attention to the dialogues of the early and middle periods. Plato is notorious for being the most watchful and unforgiving critic of himself. So the answer simultaneously calls for a direct engagement with the dialogues, particularly the later ones, and a critical correction of the Aristotelian-neo-Platonic commentaries which make up almost the entire body of Platonic studies until the late nineteenth century—a trend that comes to an end with the rise of the Marburg, Tübingen and analytic schools of Platonic studies, as represented by figures such as Natorp, Reale and Vlastos.
We know that after his second trip to Syracuse, Plato became critical of his early doctrine of forms (e.g., in Parmenides) as represented in the works of the middle period such as The Republic. He began to see forms as classificatory universals, namely, categories or ta koina (see Theaetetus and Sophist). As ta koina, forms or ideas no longer have the earlier characteristics of the Socratic and Pythagorean theories of forms, or at least such characteristics are no longer prominent. The inception of this new doctrine of forms or ideas begins with the transitional dialogue Theaetetus, but it is only in Philebus that Plato gives a complete account of his new doctrine.
According to this new thesis, the aim of the doctrine of forms is to dēmiourgein, world-construction or the craftsmanship of the mind. In Timaeus, we are dealing with god as the Demiurge, but in Philebus this abstract divinity is suddenly replaced with a neutral verb, to dēmiourgein. It is now the human mind that is akin to the good—which is beyond all gods and beings and even truth and beauty—and not the god as the ideal of the nous. This manifestation of the good is like a recipe or an objective principle for building worlds. It is a recipe precisely in the sense in which Wilfrid Sellars talks about a recipe for making a cake: a recipe consisting of theoretical and practical intelligibilities.
If you have made a cake from scratch, you know very well that it is not an easy task, since a recipe for making a cake—unlike a recipe for making a soup—involves precise ratios, proportions and stages for how and which elements should be added together. The formulas of this recipe are what are called objective principles or rules, in contrast to social nomos or conventions. To build a house as a shelter (the external purpose of the construction), we ought to abide by such-and-such principles, like taking care of the foundation, beams, etc. The specific formula of how we lay the ground or which beams—made out of what materials—we use might change over time, but the objective principles endure. A house needs a foundation and a ceiling even if the foundation is bottomless and the ceiling extends to the sky. These principles pertain to the domain of forms or ideas. While the nomos is always prone to corruption (as in the case of building codes issued by a corrupt builders' guild which dictates that all houses be built out of the material ingredients over which it holds a monopoly), objective principles are genuine objects of rational examination and revision.
Parallel to this Platonic account, the fleeting shadows on the wall of the human cave could not even be recognized if some dim light were not present in the cavern. This light is not a literal analogy for purity; it is rather a metaphor for intermediating forms or universals, the mathematicals or analytic idealities. These are construction principles which intermediate between pure ideas and eikones or sensory shadows. In this sense, Plato is enemy number one of the myth of the given, for he thinks that the structuring factor does not lie within the domain of sensory fluxes—the fleeting shadows or eikones—but in the dim light, i.e., the intermediating forms which imitate the light of the sun qua pure forms or generalized structures: that is to say, the mind as the dimension of structure.
In Philebus, Plato makes the very claim of impiety for which Socrates was executed. He says the human mind is akin to the Good. We know that what Plato means by the Good in Philebus is the principle of structure (the kernel of intelligibility and intelligence which is even more fundamental than truth, beauty or justice). A few pages later Plato supplements his thesis with a new claim, 'and the good is beyond all being'. In other words, Plato suggests that structure—or the mind as a configuring or constitutive element—is the very factor by which Being comes to the fore and can be talked about coherently. Plato's articulation of Being in terms of intelligence or mind is quite similar to the view of the mature Parmenides, who has relinquished the early Eleatic confusion of Being and thinking and instead interprets the thesis 'Being and thinking are one' as thinking or structure being the very designation of Being. To speak of Being without the dimension of structure or mind is the apotheosis of sophistry and the aporia of the unintelligible (cf. Lorenz Puntel's Structure and Being).
However, the dimension of structure or, in Plato's terms, the limiting (to peras) is not an index of solipsistic idealism, for it requires a fourfold view of the universe qua structure in which episteme not only gains traction upon an external world but thought, or more generally intelligence (nous), is no longer passive. Intelligence is now defined in terms of what it does—the unfolding of the intelligible, even that of itself, or the enrichment of reality—and not in terms of the passive receptivity of an external reality. Accordingly, the Platonic fourfold view is defined in terms of an activity called craftsmanship whereby, through various ingredients, structuring factors (logoi) and principles (dialectica), intelligence makes itself and reveals the intelligible dimension which is that of Being. But insofar as there is no a priori limit to the intelligible, there is no limit to the self-cultivation of intelligence or the poiesis of mindcraft either. The twist in this scenario is that mindcraft or intelligence posits—qua an active rather than a passive factor—what is intelligible. But it also has the capacity, as Rosemary Desjardins has elaborated, to posit (tithemi) a new kind of reality pertaining to both Being and itself (see Plato and the Good, p. 61).
The Platonic fourfold as presented in Philebus is nothing but a new interpretation of the analogy of the divided line in The Republic. The divided line is a diagram of how the global conditions of thinking, action and value can be related to local conditions. It consists of four segments which give us four domains with their corresponding modes of cognition/sensation or episteme (knowing) and their objects. From segment one to segment four we have eikasia (eikones), aisthesis or pistis (aisthêta), logos dianoia (mathêmatika) and epistêmê (ideai).
The genius of this diagrammatic analogy lies in identifying the extreme segments (segments 1 and 4) under two modes of relation to time. The true forms or ideai are timeless or time-general, whereas the sensory eikones are time-specific or temporal. In a sense, the divided line is about how what is timeless is connected with what is temporal, how oneness is mixed with the multiple, or how pure forms gain traction upon and are connected with the sensory shadows. The answer lies in the intermediating domains or segments, which are represented in the divided line as mathêmatika and aisthêta / pistis.
So what is the significance of these intermediating levels? Recall that the sensory fluxes of eikones or imagistic impressions are too transitory to be arrested as anything you might call a sensible object. Pure ideas, in a similar vein, are too detached from particularities to gain traction, by themselves, on worldly or cavernous affairs. Another problem is the question of how the oneness (of pure forms) as an organizing principle comes into contact with the multiplicity of things (i.e., sensible objects). The ideas are multiple, but each idea is individually a unique kind or type of form (i.e., it is one). Eikones, on the other hand, are what you might call registers of the apeiron—that is, the indeterminate and transitory flux of smallers and greaters. At the level of the first segment, that of eikasia whose objects are the fleeting imagistic impressions or eikones, there is no such thing as a multiplicity of things. Why? Because even a multiplicity of things requires a principle of unification. It is only when we organize the fleeting sensations as the affects impinged upon us by one and the same object (here, the object is the higher principle closer to ideas or formal constitution) that the fleeting sensible shadows become multiple things: this shadow-puppet, that shadow-puppet, etc. So the question of multiplicity does not even arise at the level of pure sensation. It only arises at the level of opinions or dogmas regarding the appearance of objects. In other words, it is only when the mind posits a thingly whole (an object or, in Kant's sense, Gegenstand) which binds together different physical properties that we can talk about a multiplicity of either properties or sensible things. The following quote from Desjardins should shed some further light on the matter:
For, on the one hand, a physical object seems to be distinctly different from any or all of its properties: they are quite separate kinds of things; on the other hand, what is exactly a physical object over and above its physical properties? While there is no difficulty in thinking of a physical object that has no actually perceived properties, our notion of an object seems nevertheless to be such that it does not make sense to talk of a physical object that has no perceivable properties: such a notion of a bare particular seems incomprehensible. This of course, only exacerbates the question, however, for what then is the relation between an object and its properties? We seem to be hoist on a dilemma in which, on the one hand, we want to say both that, in some elusive sense, the object and its properties are different, and that, in no less an elusive sense, they are somehow the same; and on the other we want to say that the object is neither simply the same as, nor simply different from, its properties. But, as the Parmenides suggests, if the relation between two things is neither sameness nor difference, then perhaps it is that of whole and part (146b3-5). Plato’s model for such a relation does seem in fact to be what he conceives of as a whole of parts, where on the one hand, the whole is nothing other than the parts (there is nothing added to the parts), on the other, the whole is indeed other than (i.e., more than the sum of) its parts. In short, while a whole is analyzable into its parts, it is not reducible to those parts. Thus as I understand Plato, while a physical object is analyzable into its physical properties, it is nevertheless not reducible to those properties.
Thus, we can conclude, the whole of the sensible object—like the moving shadow on the wall—is not given by sensory fleetings but is in fact the product of what can be called transcendental constitution: a semblance of what Plato calls intermediary forms qua mathematicals. The multiplicity of the physical furniture of the world is therefore not given to us through sensory eikones; it is engineered—a la the positing of a new kind of reality—by the semblance of the higher principles, which are the mathematicals qua objects of logos dianoia.
But now new questions raise their heads: What are mathematicals and what is their role? How are they related to the principle of the Good as the enrichment of reality? Can the Good as the constitutive principle be reduced to a number (arithmosophy) or is it an ideal numbering principle which delimits the bounds of reality (i.e. the proper object of epistemology and metaphysics)?
I will answer these questions in the next installment, until then ciao.
Before I post the first installments in the Euclid and autodidacticism series: For those of you who might be vaguely interested, I will be teaching a course on the history of philosophy of science in the twentieth century, covering such figures as Carnap—the polite and friendly intellectual tank who rolls over all—and Grünbaum, that great debunker of all mysteries of space and time. A detailed description can be found here:
Hopefully, space allowing, an extended version of this piece will follow, covering more ground regarding Putnam's argument against the possibility of a universal or optimal learning machine, and Solomonoff's formal account of Occam's razor.
Nevertheless, since writing this text I have come to the conclusion that it has some problems, particularly with regard to precision. For example, a less serious issue is my rather dodgy treatment of Carnap's view of a formal learning machine in his magnum opus Logical Foundations of Probability. I have gone along with Putnam's argument, but the issue is not that Putnam's challenges in Inductive Logic and Degree of Confirmation and in his address for Radio Free Europe (Probability and Confirmation) are false; rather, they attribute to Carnap a view which is not accurate. In other words, Putnam takes the scope of the formal learning machine—one that Carnap mentions toward the end of his book—to be far broader and more ambitious than what Carnap takes it to be.
A more serious problem is one recently pointed out to me by my friend Adam Berg: Goodman's new riddle (the problem of projectibles) and Putnam's take on the problem of induction differ on fundamental grounds and cannot be treated as if they were both tackling the same problem of induction. In one case, the problem explicitly deals with observations or empirical statements, whereas in the other case—i.e., Carnap's inductive logic, which is the target of Putnam's critique—such observations are absent.
In the latter case, we do not have simple observational statements. All we have are logical statements. Even the e-statement in c(h, e) = r is only a reference within the framework of an inductive logic, and not an empirical observation per se. To this extent, one should be cautious about using examples like the raven or grue paradoxes (i.e., explicitly observation-based inductive paradoxes) to challenge Carnap's inductive logic. As a resolution and mediation between the two views, Adam has asked me to look into Reichenbach's paradigm of induction as vindication. In order to do that, I need to revisit Reichenbach's texts on this subject. Yet I think there is still a shadow that haunts even Reichenbach's paradigm of induction, whose flaws are spectacularly—albeit inadvertently—highlighted in Sellars's essay Induction as Vindication. This shadow is the problem of simplicity or elegance. More accurately, it is the problem of an unconstrained account of simplicity whose espousal demands a high metaphysical price (see, e.g., Grünbaum's or Rescher's critiques). Even the formal conception of simplicity has its own incoherencies, which are addressed in my piece.
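As a reminder of why the e-term is language-internal (a schematic sketch of the shape of Carnap's apparatus, not a full exposition): in Logical Foundations of Probability, the degree of confirmation is defined from a logical measure function m over the sentences (built from the state-descriptions) of the formal language, roughly:

```latex
% Carnap-style degree of confirmation: a logical, language-relative quantity.
% h and e are sentences of the formal calculus; m is a measure function
% defined over the state-descriptions of that language.
c(h, e) \;=\; \frac{m(e \wedge h)}{m(e)}, \qquad m(e) > 0
```

On this reading, c(h, e) = r is itself a statement of inductive logic, analytic relative to the chosen language and measure function, which is why importing explicitly observational paradoxes such as grue into a critique of it requires care.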
In the first installment on toy philosophy universes, I gave a rudimentary account of one of the main motivations behind this series: the problem of stepping outside of the model or the system an agent inhabits or broadly speaking, the metatheory of theorization.
To be candid, I do not think that philosophy, or for that matter the natural sciences, mathematics, logic or even theoretical computer science, is by itself capable of offering an adequate solution to this problem, which for now can be dubbed the transcendental jailbreak (in reference to Wittgenstein's Prison and my previous comments on the Kantian straitjacket).
If there is a solution to this problem, it lies in a non-trivial integration of all the above fields of thought—non-trivial in that none of these fields can be subordinated to or assimilated by the others. That is to say, philosophers attempting to tackle this problem have no option other than integrating, and rendering contemporary, the discipline of philosophical inquiry with the sciences (complexity sciences in particular), mathematics, logic and computer science. Indubitably, through the course of this upgrade and revision, the very nature of philosophy as a discipline will transform: we begin to see the phantom-like apparitions of what, from the perspective of here and now, might bear only some vague and negligible resemblance to what we currently characterize as philosophy.
The future philosophy—even as a Platonic eidos—cannot be anything but a program for thinking globally about thinking about the world: migrating, step by step, from the conceptual systems which undergird our local conception of the world to a metalogic of such a conception, adopting a view that no longer bottoms out in our particular (multi)perspectival view of the world.
Now allow me to return to what I characterized in the previous post as a chronic cognitive voyeurism, that is, a child-like fascination with the implicit know-hows and know-thats behind our attempts at forming a theory, model or conception of this or that aspect of the world. What are the questions I ask myself when confronted with a theorist's output? Some of the immediate questions are: what kind of implicit system of search and assembly do they use when they work, what does their toolbox of methods contain, what reasoning or cognitive mechanisms (analogical, deductive, etc.) are being activated, and more importantly, can all of this be modeled, can it be replicated or implemented in another context? What I wish to know are not only the very mundane habits of thinking, writing, note-gathering, etc. but also, and above all, the metalogic of one's logic of theorization or, more generally, the implicit thoughts which go into one's explicit thinking about the world—and ultimately, how all of these things fit together, how much (if at all) and at what level they influence one another.
Yet this exploration of the metalogic of one’s logic of the world is anything but a straightforward affair. It requires an understanding of not only how we model the world but also what it takes to go beyond that model while suspending the model-biases and more importantly, the dogmas of our particular transcendental type or perspectival (or intuitive in a Kantian sense) resources. With these cursory remarks let us begin this series:
0. Welcome Back to Kindergarten
Are you sick of philosophical -isms? Do you believe that there are so many rival and incompatible philosophical views that they almost inevitably lead you toward feud-ridden tribalism? Are you tired of being a professional philosopher? Do you feel as if the discipline of philosophy as it stands today has betrayed our initial ambitions and excitements? We signed up for expansive cognitive exploration, but instead we ended up with pigeonholed, tunnel-visioned analyses or, worse, all-over-the-place theses which purport to be impersonal but are in fact fanatically personal and simplistically psychological through and through. We either succumb to a monism of methods and models or to its pluralistic twin, which is in reality a relativistic soup with little to zero consistency. Do you often dream of being once again a child-philosopher rather than a jaded adult scholar? Do you approach your models or conceptions of the world as toys which can be discarded or broken in the real world, but not until they are sufficiently played with, or do you see them as mature, completed narratives? As a philosopher or theorist, which universe do you live in: a toy philosophy universe where endless constructibility, experimentation and rearrangement of multiple models are the norm, or an elegant, fully completed house where you as an adult have finally settled? Lastly, do you think that the pedagogy of the philosophical discipline is responsible for how we think about the world? If the answer is positive, then given the current philosophical pathologies, how should we reconcile the discipline of philosophy with its education?
The bad news is—and there is always only bad news—that this series attempts to tackle these concerns with questionable or no success. But we as philosophers and theorists are in the business of epistemological risk and theoretical humiliation. We neither take the failure of a hypothesis as a negative outcome or irrefutable evidence that the failed hypothesis will invariably fail in every context, nor do we mistake the unreachability of long-term objectives from today's perspective for a good argument against our attempts to systematically and concretely entertain such objectives. To this extent, in this and future installments, I tackle precisely such questions. The aim is to elaborate the concept of toy philosophy universes as a partial answer to the questions above.
For now, this elementary definition should suffice:
Toy philosophy universes are a specific class of formal philosophical systems which are explicitly metatheoretical or metalogical. They are primarily characterized by their commitment to the constructibility, manipulability, rearrangement, plurality and hierarchization of models and methods (i.e., toy-like) in frameworks where formalism and systematicity go hand in hand. To call them toys means that their principal emphasis is on world-building rather than world-representation. It is not that world-building is divorced from world-representation; it is rather that the relation between the two changes in toy philosophy universes. The aim of world-building or, to adopt Carnap's term, aufbau—more in the vein of construction than mere constitution—is at once (1) to deepen our understanding of our various discourses about the world (thinking about thinking), even in spite of existing evidence, and (2) to expand any universe of discourse—and correspondingly the truth-claims possible within that universe of discourse—beyond its given scope and established assignments. Calling such constructs philosophy universes means that they are concerned with an unrestricted universe of discourse covering claims that can be theoretical, practical, axiological or aesthetic.
However, to methodologically reach this definition, through which we can finally tackle the aforementioned questions in a coherent manner, we must first engage with a whole slew of related questions: What are toys? What are models? What does the contemporary science of modeling involve? How can the praxis of philosophy be informed by the science of modeling? What are toy models? In what respects do big toy models differ from small ones? Is there a canonical set of formalizations for such models for the purpose of exact reconstruction and reimplementation within a context that is not predominantly scientific? How can we see both theoretical and metatheoretical assumptions as necessary to the labor of modeling? What exactly differentiates toy philosophy universes from big toy models? What are the implications and outcomes of living in a toy philosophy universe as opposed to a purely scientific one?
In a nutshell, we cannot investigate what it means to step outside of our theoretical models unless we first examine what modeling, theorization and metatheorization entail. Given the above list of questions, it should be clear by now that the path this series takes is going to be circuitous and hazy. The first few posts will be introductory and light, but as we move forward indulgence in technicality and formalism will become inevitable.
I will engage, first of all, with toys, elaborating on the idea of ‘toying around with our models of the world’ using examples derived from the history of pedagogy and engineering. Next, I shall focus on the science of modeling, the principles behind how we scientifically model an aspect of the world. Subsequently, I will move to the domain of toy models and so on.
We all remember a moment in our childhood when toys were our surrogate parents, far more generous, interactive, manipulable, cooperative and informative than our parents. Perhaps it was toys—and not the preachings of our adult guardians—that first made us realize that there is a world out there, a world that despite its malleability is constrained by what we eventually learned is called objectivity. Recall those nights when we chose the company of toys over adults, when we chose to sleep in a tent made of a few pillows simulating the environment of a universe brimming with possibilities. There was a mountain outside—a cardboard box covered with brown satin. The meadow inside the tent was comforting and smelled nice. But it was an old, smelly green blanket, shrunken and wrinkled after being washed in hot water too many times. In that very tent, we waged war against three metal pencil sharpeners which looked like three thousand armored cavalry units. We were in the end triumphant. The three colored pencils staved off the advance of the metal sharpeners after much sacrifice. They are now far shorter than they once were. The remaining forces are currently in an eternal alliance. They are the people of this tent which I call my world. After we concluded the battle, we fell asleep dreaming of a bigger tent with ever more new alliances, new friends. But the peace did not last long, for soon a flying saucer carrying an army of disfigured teaspoons delivered a cryptic message: 'there is a world out there even larger than your toy universe'.
Among the greatest educationists, from Friedrich Fröbel to Rudolf Steiner, Leo Tolstoy, Jean Piaget and Lev Vygotsky, the idea of toying around with the furniture of the world has been advanced as one of the most important aspects of education, that is, the augmentation of autonomy (what I am and what I can do in the objective world). A philosophy of toys in a sense takes seriously the idea that education does not end with autonomy or with initiation into the space of theoretical and practical cognitions. On the contrary, it sees the autonomy of the child—the child's synthetic ways of manipulating and understanding things, its proto-theoretic attempts at constructing a world prior even to conceptualizing that world—as the premises of education. For this educationist philosophy, the role of toys in the recognition of the child's autonomy and world-structuring abilities is more than necessary: they are indispensable.
1. From Logos to Lego and Back
We can only represent the world to the extent that we have built a world in which our representations coherently hang together. The scope of our world-building demarcates the limits of our attempts at representing the world. Take, for instance, Carnap's Versuch einer Metalogik (Attempt at a Metalogic) or The Logical Syntax of Language after the failure of his logical empiricist program in the Aufbau, or, more recently, the work of Uwe Petersen. Despite their methodological and objective differences, in both cases we see that the frontiers of objectivity—of what we call the object or object-constitution (or alternatively Being, the intelligible, etc.)—are set by the scope of our attempts at the construction of what we call theory or, more precisely, in the case of both Carnap and Petersen, by the logical structure (albeit in each case the question of the logical structure is formulated differently). This line of thought is, of course, guaranteed to elicit the ire of orthodox Kantians who may still believe in the hard distinction between form and content, or in opposing logic as a canon to logic as an organon—the latter being, according to Kant, the logic of illusion or a sophistical art (CPR, B86) on the grounds that it is not constrained by the empirical sources of truth, sensible intuitions or information outside of logic.
What Kant means by logic as an organon is roughly a formal tool for the production of objective insights, or an instruction for bringing about a certain cognition that can be said to be objective. This conception of logic is then characterized as the science of speculative understanding or of the speculative use of reason—the organon of the sciences (CPR, A16-18). Logic as a canon, on the other hand, refers again to the formal use of logic (regardless of its content, which can be empirical or transcendental), but this time restricted to the characterization of logic as the canon of judging (i.e., the mere criteria of the correct application of the laws of thought or judgements), which requires and is constrained by extra-logical information (CPR, A61).
At this point, I do not want to discuss in detail the fact that Kant's opposition of logic as a canon to logic as an organon is a historical take on the controversy between Epicurus (the defender of the canon) and Aristotle (the defender of the organon), or the fact that Kant's dismissal of logic as an organon is entirely based on an antiquated Aristotelian definition of logic. Regardless of how we interpret logic, this very distinction becomes manifestly precarious in the wake of the revolutions in formal and mathematical logic in the twentieth century. With the advent of computation as the proto-foundation of logic—thanks to the so-called Curry-Howard-Lambek correspondence—the last residues of the Kantian contrast between logic as a canon and logic as an organon fade away.
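To give a rough feel for the correspondence invoked here (a minimal sketch; the full Curry-Howard-Lambek result relates intuitionistic proofs, typed lambda calculi and cartesian closed categories, and the function names below are merely illustrative), propositions can be read as types and proofs as programs, so that a rule of inference is literally a construction:

```typescript
// Propositions-as-types: a value of type T counts as a "proof" of T.
// Modus ponens (from A → B and A, infer B) is just function application.
function modusPonens<A, B>(implication: (a: A) => B, premise: A): B {
  return implication(premise);
}

// Conjunction introduction: a proof of A ∧ B is a pair of proofs.
function conjIntro<A, B>(a: A, b: B): [A, B] {
  return [a, b];
}

// Conjunction elimination: from a proof of A ∧ B, recover a proof of A.
function conjElimLeft<A, B>(pair: [A, B]): A {
  return pair[0];
}
```

On this reading, the gap between logic as a criterion of correct judging and logic as a productive instrument narrows considerably: type-checking the program just is checking the proof, and writing the program just is producing it.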
Even though we will return to the above issue to examine it closely, for now, we can always counter the objection of orthodox Kantians with a brief retort: So you think that form without content is arbitrary (i.e. unconstrained), but could you tell me what is a content without form? Surely, entertaining the possibility of the latter even under the most watchful eyes is another variation of that ideological house of cards which is called the Given. The whole notion of logic as a canon describes a game of logic already rigged by the representational resources and limits of the apperceptive subject constituted within a particular transcendental type.
In contrast to Uwe Petersen's rebuke of Kant in the second volume of Diagonal Method and Dialectical Logic, I think Kant's distinction between canon and organon—between logic as constrained by world-representation and logic as world-building—is quite subtle. Yet subtlety is not by itself a criterion of truth or profundity. For Kant seems to naively assume that thinking about logic as an organon means believing that we can 'judge of objects and to assert anything about them merely with logic without having drawn on antecedently well-founded information about them from outside of logic' (CPR, B85, my emphasis).
What is important to recognize in the above quote, and in other passages concerning logic in CPR, is the constant repetition of such focusing adverbs as 'merely' and 'solely'. Kant seems to be peddling a trivial and obvious point not only as a profound remark but also as a refutation of the conception of logic as an organon. Yes, at least since the time of Plato's Sophist, we have known that what is said is not equal to what is. And indeed, the equation of the two is the core tenet of sophism: as long as I know the rules of deductive syllogism, I can call myself the master of all sciences. But logic as an organon neither implies the aforementioned equivocation—i.e., the claim that logic is by itself sufficient for judging about the stuff of the world—nor requires any metaphysical commitment with regard to logic—i.e., the claim that the laws of thought are the laws of the world.
In contrast to Kant's straw-manning of the organon, all the conception of logic as an organon suggests is that our resources of world-representation are in fact beholden to, and caught up within, the scope of our world-construction—in this case, the world of logics. In other words, it would be absurd even to talk about objects without the primacy of logical structure or logoi. Kant would have agreed with this sentiment, but only in a trite manner. Why? Because if the talk of objects is meaningless without theory or logical structure, then the expansion of the field of logic or determinate thought-forms, unconstrained by all concerns about representation, would be an absolutely necessary step toward constituting objects, making objective assertions and deepening our discourse about objectivity. This primarily unconstrained view of logic as the indelible factor of object-constitution is exactly what we can call logic as an organon. Without it, all we can ever achieve is pseudo-talk of stuff, i.e., Aristotelian this-suches or tode ti, namely, unstructured encounters with items or stuff in the world which have no objective structure or invariant qualities.
Moving from the fuzzy sense-impression mass of cubic reddishness (materiate individual substance or stuff) to this red Lego block (a perceptual taking or judgement) requires the addition of logical structure. But the constructive characterization of logical structure is not a priori limited by representational concerns. Indeed, adequately honing the notion of logical structure demands the treatment of logic and logical world-construction in terms of general logic in itself, that is to say, unconstrained by any enforced representational consideration (whether experiential content, the empirical source of truth or the criteria of the correct application of logical laws to items of the real world) that may establish the frontiers of logic in advance.
It is only when we attempt to decouple logic from any representational or world-referring constraints that we can ensure a sufficiently enrichable framework of world-representation. In short, to expand the resources of representation and enhance the correct application of logical laws to empirical evidence or observational statements, we must first engage with logic on its own terms and expand its domain not in accordance with, but in spite of, representational constraints. The world-constructing resources of logic in itself precede and in fact undergird world-representation, our understanding or judgements about the world. To coin a Carnapian slogan: construction of the world is prior to the constitution of the object and the knowledge of it. This priority is not only priority1 in the sense of one temporally preceding the other, but also priority2 in the order of constitution. It is priority2 which is, properly speaking, the focus of logical world-construction and which describes the conception of logic as an organon.
How can we constitute an object, or even entertain the idea of objectivity in any coherent manner, if we do not take seriously the world-construction of logic (i.e., the organon) so as to broaden the domain of logic within which the object coheres and the notion of objectivity is deepened? If we choose to abandon this path in favor of the Kantian conception of logic as a canon, then we are eternally sentenced to what I have called the Kantian straitjacket, i.e., those particular transcendental structures we inhabit and by virtue of which we will never know whether our objective descriptions of the world are overextensions of our specific (sensible) intuitive resources or not.
Taking the idea of logic as a world that ought to be infinitely constructed without any prior restriction is in every sense incommensurable with the idea of logic as something that ought to be coordinated with the real world in the first instance. Kant's transcendental logic, as a species of pure but specialized logic (i.e., concerned with a particular use of the understanding), is precisely a conception of logic that is not just conservative with regard to the possible scope of logic (how general logic can be expanded and enriched); insofar as it is built on the conception of logic as a canon—i.e., constrained by representational concerns—it also harbors epistemic implications which are nightmarish, to say the least.
With reference to the previous installment, this epistemological nightmare or hell is more than anything the consequence of our own self-imposed restrictions, and not simply the result of our local and contingent constitution as such and such subjects (the history of evolution, the structure of memory, culture, etc.). When we limit logic to representational considerations while our representational systems are at bottom rooted in a particular transcendental type which delimits our empirical observations and often distorts our objective descriptions, then the epistemic or objective imports of our logical systems only reiterate or overextend the limitative terms of our representational biases. The picture of the objective world we provide resembles the portrait of Dorian Gray: only more sinisterly subtle variations of ourselves and our entrenched dogmas, and nothing more.
The only viable strategy for gradually escaping from this Dorian Grayesque ordeal is to take seriously the priority of world-construction over world-representation and to avoid subordinating the treatment of logic to any extra-logical concern that might have the faintest smell of representation, an empirical point of reference, the ordinary conception of meaning or anything that sets the boundaries of logic in advance.
The great escape only begins when logical construction is separated from the province of the apperceptive subject and its ordinary affairs. It is no exaggeration to say that the unbound realm of logical construction is analogous to an infinite ocean within which islands of subjectivity or apperception emerge and disappear. And in fact, this analogy is becoming more and more the very shape of the future logos: what I have called general artificial languages already offer us the inklings of that slippery yet boundless divine notion which we call logos and whose personifications are logic, mathematics and computation. Within this artificial apeiron—artificial language as the lego of future reason—our natural languages are destined to corrode and eventually sink, like tiny islands whose once firm ground can no longer withstand the rising sea-level.
Made possible by the conception of logic as an organon, this apeiron is nothing but the universe, or universes, of metalogics or metalanguages. In other words, the logical apeiron does not signify one universal language or metalanguage, nor does it represent a final stage in the construction of language as the interface with reality or the configuring factor of the objective world. Rather—and with a nod to Carnap in The Logical Syntax of Language—no single language, even in the most generic sense (i.e. not a natural language), can exhaust the logico-computational structure of language. For delving into the structure of language requires ever richer languages, an infinite nested space of metalanguages or metalogics (à la Gödel).
It is in this respect that the plural designation general artificial languages simply describes what is already the case with logic and mathematics: no single system or language is adequate to explore the structure of language, logic or mathematics; only an infinitely constructible nested space of metalanguages, not limited in advance in the vein of logic as a canon, is adequate to the task. Even the mighty Sellars treats the space of metalanguage as an applied domain and a completed totality by which we can resolutely talk about the structure of natural languages and such things as semantic value and syntax. Contra Sellars, the space of metalanguage is not an orthodox vantage point from which we can conclude the structure of meanings as assertions or finalize the definition of meaning as classification. Put differently, metalanguage as a nested constructible apeiron cannot even be compared to something like a completed vantage point for comparing two natural languages in order to decide the classificatory role of meaning (e.g., •red• in English means •rot• in German, i.e. red plays the same classificatory role in English that rot plays in German). The whole point of metalanguage is the exploration of the very conditions of possibility of meaning as classification and structure, rather than a light comparative study between this or that established language. Sadly, there are only a handful of philosophers and logicians for whom the domain of meta- is akin to an infinite heaven—an infinite nested space that is not arrested by the criteria of world-representation but unbound, answering only to the possibilities of world-construction in the domain of language and logic as organons.
The world-building of language, just like that of logic, mirrors the metaphor of ascending to metalinguistic heavens or descending into the abyss of metalogics, depending on one's theoretical proclivities and aesthetic sensibilities. In either case, what is crucial is the understanding that what is constructible is by definition never arrested by limits drawn in advance. Forgoing this notion of established boundaries for language and logic demands a concrete commitment to logic as an organon rather than a canon, to approaching world-building as prior to world-representation.
2. Who wants some Fröbelian gifts, raise your hand!
With this digression on world-construction versus world-representation, toys versus adult concerns about the knowledge of the real world, let us return to the territory of actual toys.
The kindergarten philosopher, the child, is absorbed in that immense universe which is that of world-construction. The child’s toy universe does not resemble anything like the represented world of ours. The toy blocks by which the child assiduously constructs a world are not anything like bricks cemented over bricks. They can be replaced or even discarded if the child is not satisfied with the result. And we all know that a child never gives up. Until the toy-made world is in its optimal condition, it will be destroyed again and again. Even when the optimal construct is achieved, nothing guarantees that the child won’t reengineer it the next day to accommodate more adventurous narratives. This is the very definition of infant politics from which we adults have regrettably diverged.
The interaction of the child with toys is a premise upon which the objective interaction of concept-using adults with the world is built. The philosophy of toys in this respect precedes the philosophy of education qua augmentation of subjective autonomy, i.e. what I can do and say in the objective world. What distinguishes toy-education from the canonical formula of graduation to adulthood—coinciding with full-blown conceptual competency—is that toys are not just about language but also the use of tools. They are, as Vygotsky suggests, zones of synergy between language-use and tool-use, between acquaintance with world-representing resources and acquaintance with world-making or world-reengineering techniques and systems.
A child who is immersed in playing with toys is a symbol-tool explorer. The child sees language as a tool just as it sees tools as symbolic-combinatorial elements of language. The boundary between building the world and representing it is blurred and sometimes non-existent. It is in this sense that comparing the role of toys for a child to something like tools might very well be a symptom of our deep-rooted adult misunderstanding. It is only, analogically speaking, from our adult perspective that we can remotely compare toys with tools. But toys, adequately understood, are not tools per se. In contrast to the use of tools, toys do not strictly adhere to pieces of practical reasoning (e.g., in order to achieve X, I ought to do Y). The ends of toy-play are not like the ends of our practical reasons, which go away once we attain them. They are more like inexhaustible ends, ends which do not simply go away once achieved. For as long as the world can be reconstructed and re-engineered, the infinite prospect of the kingdom of ends is at hand.
However, toys as I mentioned earlier are not just the blocks of world-construction. It is well-established (see, e.g., Spatial Reasoning in the Early Years) that toys play an essential role in the world-structuring capacities of children. The augmentation of spatial reasoning and, correspondingly, the later use of mathematical concepts such as complex transformations, mapping, transfer, symmetry, inversion and shearing (in the case of paper-toys) are directly linked to how children play with toys.
Block-oriented toys such as Lego introduce children to extremely potent and specialized concepts like modularity. It would be hard—if not impossible—to imagine the scope of technological progress and, above that, the discipline of engineering as a tissue connecting the messy problems of physics with mathematics without the concept of modularity. Even though modularity is an epistemic concept rather than an ontic one, I am yet to be convinced by the arguments of those philosophers who think certain structures in the universe, for example the brain, cannot be modeled modularly. The majority of such critiques assume that modularity means something similar to Khrushchyovka-like buildings where modules are put atop one another to create something too rigid and monolithic to afford anything resembling organic life. But modularity comes in a variety of forms. Every complex system can in one way or another—after sufficient approximations—be modeled via the surprising efficacy and flexibility of modular systems. It is true that modular models distort some information concerning their target systems, but then which models are exactly distortion-free? The modeling-epistemic war against modularity is more than anything a product of unwarranted metaphysical assumptions about the structure of the universe and an inadequate grasp of the concept of modularity. See for example the recent works of Andrée Ehresmann, Jaime Gómez-Ramirez and Michael Healy, who have synthesized insights garnered from category theory and algebraic geometry with those from the brain sciences.
Speaking about the territory of toys as tools for the enhancement of world-structuring abilities or faculties would be impossible without at least a brief reference to Fröbel's original notion of the kindergarten and Steiner's school of Waldorf education. Think of Fröbel's gifts: multiple sets of toys, starting with regular wooden blocks. Once a child masters the use of a toy-set or a Fröbel gift, it is awarded a new set of toys, for example, geometrically disparate wooden blocks plus colored strings. The point of the toy gift-giving is to raise a child with the understanding that world-construction is an infinite domain. The further you venture, the more world-building components fall from the sky. If you think you have achieved a perfect world, then let me give you a new set of constructive units, namely, toys.
The same holds for Steiner's school of Waldorf education. Think of a Waldorf doll, a faceless, characterless doll made of the cheapest material and stuffed with hay. The child begins to learn that the doll is not the true object, which is to say, the object is always incomplete. The child then goes on to paste onto the doll's face a smiley face, a big nose and dark brown eyes which it had drawn on paper. However, depending on the setup of the child's toy universe, the doll can take on fundamentally new characteristics, like a platform on which new qualitative differences can be built, layer after layer.
From our adult perspective, there is hardly anything more grotesque than a Waldorf doll lying around, one with which a child has played for months. But beauty in the eye of a child could hardly be anything other than something in essence synthetic, layered and transitory: faces pasted on faces, vestige upon vestige, characters built on top of one another, all within one domain, the toy universe. Within this universe, which is a featureless doll, the original beginning and the ideal end are never attainable; there is only the residue of what has come before and the possibilities of further construction.
Similar to Waldorf dolls, the main emphasis of Fröbel gifts is also world-building, although the gifts shift the focus toward a more systematically disciplined mode of construction and away from the free play of simulative or imaginative capacities. Climbing the hierarchy of Fröbel gifts demands new forms of spatial reasoning, geometrical pattern-matching and bootstrapping techniques, and sophisticated intuitions into the realm of naive physics. As the level of the gift goes up, the child's construct becomes increasingly similar to a genuine marvel of modernist architecture or an experimentally engineered contraption straight out of the Rube Goldberg-inspired videogame Crazy Machines. Nevertheless, in both cases we see a coupling between world-building and simulational abilities. The point of toys is as much about world-construction as it is about building up the capacities and techniques of simulation through which understanding is amplified and its scope expanded.
We already know from Kant that what we today call simulation in a loose sense—as in simulating a world in which friction does not exist—is at bottom the function of productive imagination, which in the Kantian parlance is just understanding in a different guise. Behind productive imagination lies one of the key themes of the Kantian transcendental method: the argument about schemata. Schematism addresses one of the weightiest problems in transcendental philosophy, the so-called homogeneity problem, i.e. the correspondence or coordination between concepts qua rules and objects or imagistic impressions of items in the world. By image, Kant means a singular rudimentary (i.e. intuited) representation of an item or object. One can think of the rudimentary imaging faculty as involving the extraction and integration of salient perspectival or local features qua variations of an object. Concepts, on the other hand, are non-perspectival (the invariant). They are at the most basic level principles of unity through which multiple particular instances can be brought predicatively under one subject, particularly a logical subject. Once we have concepts, we can arrive at critical perceptual judgements, so that when we look at a Bic pen immersed in a glass of water, we can assert that this such-and-such pen looks—perspectivally—bent but is in fact straight. This is a piece of critical perceptual judgement or taking, i.e. grasping, understanding or conceiving (bringing into conception). In contemporary terms, we can think of the homogeneity problem as the problem of coordinating local variations and global invariance (the core of sheaf logic), or particularities and universality, eikones and ideai, the temporal capacity aisthesis and the time-insensitive faculty nous (the problem of Plato's divided line). Let me clarify the homogeneity problem with an analogy to Euclid's Elements.
Think of this particular equilateral triangle, this particular isosceles triangle and so on. These triangles are just particular—i.e. locally-varied—shapes or image-models. In reference to Proclus’s commentary on Euclid, they are only triangles by virtue of falling under the concept of triangularity as such. The triangle as a concept then allows us to make certain kinds of judgements or to draw diagrammatic inferences (Euclidean demonstrations) with regard to any triangle in whatever possible configuration.
Now the homogeneity problem engages with the issue of how this or that particular triangle can be coordinated with triangularity as such. Put differently, how can the concept be supplied with its image? Proclus thinks the solution lies in what he calls a mediating universal, a rule that comes between the detached universal (the universal triangularity inexhaustible by any image of a triangle) and particular triangles. Kant calls these mediating rules schemata. Schematism then describes rules or constructive procedures which, unlike the strong sense of the concept, are not concerned with what particular image is subsumed by a concept, but with how a particular image can be constructed in thus-and-so ways so that it conforms with a certain concept. This howness designates the functional role of the concept qua rule: functional in the sense that we use the concept of triangularity whenever a particular item—an imagistic shape of a triangle—is implicated in the actual use of the concept of triangularity.
A schema is then simply the representation of a general procedure or rule of imagination (i.e. the capacity to represent an object even without its presence in intuition) for providing a concept with its image. But what kind of image and what kind of rule? The answer is a perspectivally determinate sensible image which connects the single concept and the varied images. In Euclidean terms, we can think of a schema as triangularity not as a detached universal eidos, but as a formula for how to configure and diagram such-and-such lines, angles and vertices. For example, according to Proclus, just as there are mediating universals or rules which supply the concept of triangularity with triangles which might be equilateral, isosceles, etc., there are also construction rules or recipes of configuration (mediating particulars) which enable us to construct triangles using lines and angles as their elementary blocks for particular types of triangles (the concepts of scalene, isosceles, and so on).
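As a playful illustration of this point (a sketch of my own, not Proclus's or Kant's formalism), one can render the concept as a predicate and the schema as a constructive recipe whose parameters yield locally varied images of one and the same concept:

```python
import math

# Hypothetical sketch: the *concept* of triangularity as a predicate, and a
# *schema* as a constructive rule that produces varied images conforming to it.

def is_triangle(vertices):
    """Concept qua rule: three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = vertices
    # Twice the signed area; zero means the points are collinear.
    area2 = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    return area2 > 1e-9

def triangle_schema(base, apex_x, apex_y):
    """Schema qua recipe: construct an image from parameters rather than
    subsume a given image. Varying the parameters yields equilateral,
    isosceles or scalene images of one and the same concept."""
    return [(0.0, 0.0), (base, 0.0), (apex_x, apex_y)]

equilateral = triangle_schema(2.0, 1.0, math.sqrt(3))
scalene = triangle_schema(2.0, 0.3, 1.2)

assert is_triangle(equilateral) and is_triangle(scalene)
```

The point of the sketch is only that one rule, differently parametrized, underwrites indefinitely many images, while the concept itself subsumes them all indifferently.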
Essentially, the schema, as the missing link between understanding and sensibility, has one foot in the categories (pure classificatory concepts) and the other in the intuited object or the image qua appearance. Thus we can say that a sensory presentation and a singular conceptual representation are determined by one and the same schema, that is to say, one and the same concept of a determinate mode of sensory presentation through which the concept and the object or its imagistic presentation come together. A schema is in this sense not a rule as that which predicates, but a recipe for the construction of locally varied images for a single concept.
When it comes to schematism, Kant is actually quite fond of non-empirical examples like drawing a straight line in thought. A recipe or rule for drawing a straight line is at once responsive to two criteria: (1) every segment that is built is drawn in relation to the concept of line as a whole or that which binds all segments together, and (2) every piece or segment is constructed on the segment that has come before it (the law of memory which in Kant’s work can be attributed—with some reservations—to inner sense). With regard to both #1 and #2, we can say that the ultimate coordination between the image and the concept happens in the realm of rules (of construction) as pertaining to how such and such phenomenal features are being organized or brought together by space and time as transcendental idealities.
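A minimal sketch of this example (my own gloss on it): a rule for drawing a line in which every new segment is governed by the concept of the line as a whole, a single fixed direction (criterion 1), while being constructed on the segment that came before it (criterion 2):

```python
# Hypothetical sketch of Kant's straight-line example: the rule binds every
# segment to one direction (the concept of the line as a whole) and builds
# each segment on the endpoint of the previous one (the law of memory).

def draw_line(direction, step, n_segments):
    dx, dy = direction
    points = [(0.0, 0.0)]
    for _ in range(n_segments):
        x, y = points[-1]                              # criterion 2: build on what came before
        points.append((x + step * dx, y + step * dy))  # criterion 1: one binding rule throughout
    return points

points = draw_line(direction=(0.6, 0.8), step=0.5, n_segments=10)

# Every constructed point lies on the same line: the cross product of the
# endpoint with the direction vanishes.
x_end, y_end = points[-1]
assert abs(x_end * 0.8 - y_end * 0.6) < 1e-9
```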
Therefore, as long as we are endowed with different spatio-temporal principles of organization, we can imagine schemata which bind the imagistic item and the concept in different ways. Lucky for us, a child who is still in the process of coming to grips with representations of time and space can vary the very parameters by which the image is related to the concept. This is what I call infantile schematism, and by that I mean a child never settles for a particular established image for a concept. This is but the very law of simulation.
You say that the concept of mountain should conform to such putative invariant image-models. ‘Daddy I do love you but you also happen to be so parochial,’ the child opines: ‘Let me set you straight, in my toy universe, the mountain can be anything. It can be a cardboard box covered with brown satin or it can be a pot of old coffee’. The child then continues, ‘you think just because I play with what I call humans, they should conform to what you perceive as a human. But you are sadly mistaken for even a colored pencil wearing a thimble can be a human, an autonomous agent in my world.’ This is what simulation is all about. It does not matter what imagistic impression of an item in the world corresponds to a concept. What matters is the mediating rule of how any imagistic impression—after the sufficient relaxation of representational constraints—can be coordinated with a concept which is applied across the board for all instances brought under it.
Accordingly, simulation in the aforementioned sense involves the destabilization of a canonical or stable set of images for a concept. But this process of destabilization is followed by a process of restabilization, so that the implications of the use of a concept hold for any image that falls under it. The simulational role of toys is exactly like this. The schematic coordination of the image and the concept is there; however, (1) its representational function is partially suspended, (2) the stabilized homogeneity between object and concept is frequently destabilized in favor of new modes of construction and, correspondingly, object-constitution, and (3) the relaxation of representational constraints amplifies constructivity so much so that we can replace a canonical set of image-models with such-and-such properties with an entirely new set that has different qualities (e.g., substituting humanoid doll-like entities with tiny calculators while abiding by the rules of how these calculators operate).
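This destabilization and restabilization cycle can be caricatured in code (a hypothetical sketch; the names and roles are my own inventions, not a claim about cognition): the concept is held fixed as a functional role, while the set of image-models bound to it is swapped out and re-admitted:

```python
# Hypothetical sketch: a concept as a fixed functional role, with a
# revisable set of image-models stabilized under it.

class Concept:
    def __init__(self, name, role):
        self.name = name
        self.role = role           # functional role: what any instance must afford
        self.image_models = set()  # currently stabilized images

    def bind(self, image):
        """Restabilization: admit any image that satisfies the role,
        regardless of how it looks."""
        if self.role(image):
            self.image_models.add(image["label"])

mountain = Concept("mountain", role=lambda img: img["climbable"] and img["towers"])

# The canonical image is destabilized: none of these resembles a 'real'
# mountain, yet two of them play the mountain's functional role.
for image in [
    {"label": "cardboard box with brown satin", "climbable": True, "towers": True},
    {"label": "pot of old coffee", "climbable": True, "towers": True},
    {"label": "flat rug", "climbable": False, "towers": False},
]:
    mountain.bind(image)

assert "pot of old coffee" in mountain.image_models
assert "flat rug" not in mountain.image_models
```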
With regard to #3, simulation can be said to be essentially a species of what Kant calls as-if arguments (als ob). Such arguments are regulative judgements, which can be both theoretical and practical, such as acting as if there were categorical imperatives with regard to the kingdom of ends. In Kant's philosophy, we should always be vigilant not to mistake a regulative judgement (an analogical as-if) for a constitutive judgement. In other words, we can never overextend our analogies. But in contrast to Kantian as-if arguments, which are purely analogical and thus under the constraints of analogy, the simulational as-ifs are not exactly analogical, meaning that they do not always need to be compared with or a priori limited by constitutive judgements. Simply put, we can take an analogy—a simulation to be more precise—seriously, treating it completely in its own terms. There is no danger of overextending an analogy in a simulated world so long as we are consistent in our treatment of the simulated components and are true to the simulation and its logic: within a toy universe—i.e. a simulated framework—the primary emphasis should in fact be given to the simulation and not to how the simulated is related to the source or premise of the analogy. If we are to enrich a simulated world we ought to, first and foremost, attend to the simulated framework rather than to the source of the analogy, i.e. the real world. Only the unreserved enrichment of the former can assure the enrichment of our conception of the latter.
With this rather hasty discussion, let us in a crude manner define toys:
Toys are a sub-class of object-models whose primary task is world-building. This task is enabled by the way toys suspend the canonical stability between the image and the concept and the correlation between representation and construction, setting construction free as an autonomous domain.
3. The Philosopher King of All Toys and Engineers
The idea of the toy as an object-model capable of simulating a world or a problem without strictly conforming to the representational constraints of that world or to the variables and parameters of the original problem has a long history in science and specifically in engineering. Confronted with a problem in one domain, the engineer constructs a toy surrogate or mechanical analog of that problem in another domain. The engineer then goes on to investigate this toy construct and how it behaves in its specific domain and in its own terms. Using certain equivalence principles that can coordinate the original problem and the toy surrogate, the engineer is then able to take the solution provided by the machine and translate it into a solution for the original problem.
It is as if, in order to adequately understand a problem in a specific domain and to arrive at a solution to it, one must first exteriorize this attempt at understanding by reinventing the problem in an entirely new domain. But what would be the characteristics of this new domain? First, it should allow us to examine the original problem under new parameters which are not exactly the parameters of its native domain. Next, the new domain should be far more manipulable than the old one. The principle behind this genius epistemic hack is attributed to the greatest of all engineers, Archimedes of Syracuse. It is well-known that Archimedes had a recipe for cracking the most difficult geometrical problems of his time through a method of mechanical reasoning: invent a toy machine and observe how the machine deals with your problem.
Imagine there is a geometrical problem which cannot be solved given the resources you have—or at least, it will be very difficult. Also imagine you are a cunning devil-engineer who doesn’t simply give up. You instead construct a mechanical device that can stand as the analog of your geometrical problem. The construction of such a machine would require particular and general forms of equivalence-establishment between the geometrical problem and the machine (the mechanical domain). The particular equivalence criteria consist of a set of geometrical inferences which can translate and transpose your geometrical problem into the construction and the behavior of the machine, plus an available mechanical or physical principle which can coordinate the parameters of the machine with those of the geometrical problem. The general criterion of equivalence, on the other hand, is usually a combination of simplification and/or idealization, in the sense that the machine—its scope, assignment, behavior—should be sufficiently idealized/simplified for it to be an optimal analogue of the problem at hand.
Once you make the mechanical analog, you want to pay attention to how the machine works and not the geometrical problem. When you have something similar to an output or the machine-solution, then you again use the particular criteria of equivalence so as to translate the solution supplied by the machine into a solution to your original geometrical problem. See the following diagram for a better grasp of this method (Note: the ASCII diagram might display incorrectly if you are using a feed-reader.)
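The three steps just described can also be sketched schematically in code (my own reconstruction; the toy problem and all function names are invented purely for illustration):

```python
# Hypothetical sketch of the general shape of the Archimedean method:
# transpose a problem into a more manipulable domain, 'observe' the
# machine there, and translate the machine-solution back.

def archimedean_method(problem, to_analog, solve_analog, from_analog):
    analog_problem = to_analog(problem)             # particular equivalence criteria
    analog_solution = solve_analog(analog_problem)  # watch how the machine behaves
    return from_analog(analog_solution)             # translate the solution back

# Toy usage: 'solve' x + 3 = 10 by transposing it onto a balance scale.
solution = archimedean_method(
    {"add": 3, "total": 10},
    to_analog=lambda p: {"right_pan": p["total"], "known_weight": p["add"]},
    solve_analog=lambda a: a["right_pan"] - a["known_weight"],  # remove known weights until balanced
    from_analog=lambda unknown_weight: unknown_weight,
)
assert solution == 7
```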
For example, imagine the problem of 'how much bigger is a cylinder than a sphere fully enclosed in it?' To answer this question, we can use a lever with adjustable arms and a solid sphere and cylinder made of the same uniform material. All we need to do is to put the sphere and the cylinder on the extremes of the lever and adjust it until the two are balanced. The answer, if you are willing to do this experiment, is Vcyl : Vsph = 3 : 2.
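A quick numerical check of this experiment (my own sketch, not Archimedes' procedure): with uniform density, weight is proportional to volume, and the balanced lever arms are inversely proportional to the weights:

```python
import math

# For a sphere inscribed in a cylinder of radius r, the cylinder's height
# is h = 2r. Uniform density makes weight proportional to volume.

r = 1.0
v_cyl = math.pi * r**2 * (2 * r)         # V = pi * r^2 * h with h = 2r
v_sph = (4.0 / 3.0) * math.pi * r**3

ratio = v_cyl / v_sph
assert abs(ratio - 1.5) < 1e-12          # Vcyl : Vsph = 3 : 2

# Balance condition: v_cyl * arm_cyl == v_sph * arm_sph, so the sphere
# must hang 3/2 as far from the fulcrum as the cylinder does.
arm_cyl = 2.0
arm_sph = arm_cyl * ratio
assert abs(v_cyl * arm_cyl - v_sph * arm_sph) < 1e-9
```

Reading the ratio of arm lengths off the balanced lever is exactly the translation step back from the mechanical domain to the geometrical one.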
A more contemporary and intricate example is one suggested by David Hilbert in Geometry and the Imagination (p. 222). This is the problem of finding the geodesics of a surface. To construct a mechanical analog, we need the Gauss–Hertz principle of least constraint—or more precisely, the Hertz principle of least curvature—as the particular criterion of the equivalence between the mechanical and the geometrical. This is our first problem before going on to build a mechanical or toy surrogate: finding the equivalence (not equality) relations.
As soon as we have the equivalence relation—the Hertz principle—we can talk about and examine the constrained dynamics problem as if it were the problem of finding the geodesics of a surface. The Hertz principle gives us a way of coordinating the geodesic equations and the equations of motion. Thus, when we arrive at a solution to the constrained dynamics problem, we can translate it into a solution for finding the geodesics of a surface.
In order to understand how the analog of the problem of finding the geodesics of a surface works, we first begin with a surface in 3-dimensional space parametrized as x = x(u^1, u^2). The geodesics of such a surface satisfy differential equations which can be expressed in the compact form \ddot{u}^k + \Gamma^k_{ij} \dot{u}^i \dot{u}^j = 0, where \Gamma^k_{ij} are the Christoffel symbols of the parametrized surface.
Now, for constructing the mechanical analog of this problem, we can imagine a toy universe consisting of only two blocks, a toy-apple and a toy-ant. They are toyish because it does not matter whether the apple is red or green, or whether the ant is silicon-based or carbon-based. All we are interested in are those features which allow us to observe the locomotion-behavior of the ant as it traverses the surface of the apple. In fact, the walk of the toy-ant on the surface of the toy-apple represents a difficult problem of robotics: how can a robot-ant find the shortest paths on a generalized surface, whether on the surface of Mars or on a Riemann surface?
The apple-walking of the ant represents force-free geodesics under a holonomic constraint (i.e. a constraint of the form f(x) = 0 confining the motion to the surface), which, after constraint stabilization, conforms to the equation of motion \ddot{x} = -\lambda \nabla f(x), with the multiplier \lambda fixed so that the trajectory remains on the constraint surface:
Source: Misner, Thorne, Wheeler, Gravitation (Princeton University Press).
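The constrained-dynamics side of this analogy can be sketched numerically (a construction of my own in the spirit of the example, not Hilbert's treatment): a force-free particle holonomically constrained to the unit sphere, whose trajectory turns out to be a great circle, i.e. a geodesic:

```python
import math

# A force-free particle constrained to the unit sphere f(x) = |x|^2 - 1 = 0.
# The constraint supplies the only acceleration, -lambda * x with
# lambda = |v|^2, and the trajectory it carves out is a great circle.

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

x = [1.0, 0.0, 0.0]
v = [0.0, 0.6, 0.8]            # tangent to the sphere at x
L0 = cross(x, v)               # fixed normal of the great-circle plane

dt = 1e-4
for _ in range(50000):
    lam = sum(vi * vi for vi in v)                 # multiplier enforcing the constraint
    v = [vi - dt * lam * xi for vi, xi in zip(v, x)]
    x = [xi + dt * vi for xi, vi in zip(x, v)]
    norm = math.sqrt(sum(xi * xi for xi in x))
    x = [xi / norm for xi in x]                    # crude constraint stabilization

# The particle is still on the constraint surface and still in the
# initial great-circle plane:
assert abs(sum(xi * xi for xi in x) - 1.0) < 1e-9
assert abs(sum(xi * li for xi, li in zip(x, L0))) < 1e-2
```

Observing where the constrained particle goes, and then reading the answer back through the Hertz equivalence, is precisely the Archimedean detour described above.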
Despite its peculiar artistry, the Archimedean method or hack is actually quite simple. But the scope of this method should not be restricted to the world of geometrical and mechanical problems. It can be applied to any problem, once we find a particular equivalence relation that translates the constraints of the domain X (the milieu of the original problem) into the constraints and parameters of another domain. The legend of Archimedes in the tub captures the power of toy analogues quite spectacularly: Archimedes is in the tub, playing with water like a child, making wave after wave, watching how the water spills out of the tub. Then suddenly he pretends that his body is a gourd of water, and he continues with this experiment as if it were reality. In any case, we can think of the Archimedean method as a generalized way of playing with toys qua object-models. The following diagram should shed some light on the general logic of the Archimedean method:
So on one side we have the problem X. Our aim is to understand how this problem can actually be brought to a resolution. On the other side, that of the toy surrogate, we are free from such a mode of understanding and explanation, which can be called how-actually. We are instead interested in the possibilities of how the analog or the toy surrogate can explain the problem X. In a nutshell, the toy analog is both the domain of possibilities which were absent in X and the domain of bracketing or winnowing through such possibilities (the space of n hypotheses). This is what can for now be dubbed the domain of how-possibly explanation. I will define how-actually and how-possibly explanations and their corresponding modes of understanding when I discuss toy models, but until then let us go along with these rudimentary definitions.
Once we solve the problem of establishing equivalence relations (Eq. ?), we can enter the domain of the toy analog, examine and observe how the reinvented problem comes to a resolution. Essentially, in the analog domain, the solution is not analytic in any sense. The solution can only be achieved through bracketing or limiting the space of possible hypotheses and their corresponding explanations. And that’s exactly what the Archimedean method does: it enables the how-possibly explanation brought about in the analog domain to become a how-actually understanding for the original problem in the domain X. For once we arrive at a true enough how-possibly explanation in the toy domain—namely, a candidate hypothesis and its corresponding explanation—we can say that this is also a how-actually enough (i.e. true) explanation or resolution for the problem in the domain X.
This post has gone beyond the limits I originally imagined, so let’s stop at this point and wait for the next installment in this series. [Update note: For those of you who are interested, I have added some equations for the Hertz-principle example mentioned in the last section.]
While the next posts are brewing, I thought I would put together a list for a few friends who have asked for some reference materials on new trends in the complexity sciences (scales, anticipatory systems, the question of formalizing adaptive systems, modeling and new explanatory paradigms).
On a different note: my friend Adam Berg will also start posting on toys, world-building and the philosophy of complexity here on this blog. I could go on rambling about Adam's philosophy for pages. Suffice it to say, he is one of that handful of philosophers who do not admit—in theory or practice—a distinction between the analytic and continental camps. His commitment is to one thing only: philosophical exploration in the broadest sense. His masterwork Phenomenalism, Phenomenology, and the Question of Time: A Comparative Study of the Theories of Mach, Husserl, and Boltzmann is more than enough evidence to support this claim.
Complexity: Hierarchical Structures and Scaling in Physics – Remo Badii, Antonio Politi [A classic and technical work in complexity sciences which started a whole genre of inquiry about hierarchization, scales and constraints on modeling.]
Lyapunov Exponents: A Tool to Explore Complex Dynamics – Arkady Pikovsky, Antonio Politi [Another technical work on one of the key concepts in the study of complex and dynamic systems. I agree with Robert Bishop: without an adequate grasp of Lyapunov exponents, it is all too easy to fall into the trap of complexity and chaos folklore.]
An Introduction to Kolmogorov Complexity and Its Applications – Paul Vitányi, Ming Li [Yet another technical title, but mandatory for understanding algorithmic complexity and the later works on structural complexity and stability by the likes of Crutchfield and Ladyman.]
Simulation and Similarity – Michael Weisberg
Complexity: metaphors, models, and reality – George Cowan, et al. [Occasionally a bit dated but it contains some interesting conversations between Santa Fe people.]
Re-Engineering Philosophy for Limited Beings: Piecewise Approximations to Reality – William Wimsatt [An expanded and revised collection of Wimsatt’s papers on functionalism, the gradualist approach to modeling and mechanisms]
Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research – William Bechtel [This or Bechtel’s other work Mental Mechanisms. For a good brief critical response to the Bechtel-Wimsatt paradigm of mechanistic explanation, see Jay Rosenberg’s comments on Bechtel, ‘Levels of description and explanation in cognitive science’.]
In Search of Mechanisms – Carl Craver
No Revolution Necessary – Carl Craver
Levels – Carl Craver [This and Batterman’s essay on scaling and midlevel explanation are quite crucial for not only understanding the problem of descriptive-explanatory levels but also thinking about the potential uses of such paradigms in something like information ontologies or semantic web. See for example, WonderWeb Deliverable.]
The Devil in the Details: Asymptotic Reasoning in Explanation – Robert Batterman
The Tyranny of Scales – Robert Batterman
Physics Avoidance – Mark Wilson [Highly recommended. Wilson’s long awaited book and a sequel to Wandering Significance came out this month.]
Philosophy of Complex Systems – (ed.) Cliff Hooker [Robert Bishop, who has an essay in this collection, offers a particularly astute critique of some of the folklore in the complexity sciences.]
Anticipatory Systems – Robert Rosen
Theoretical Biology and Complexity – Robert Rosen
Memory Evolutive Systems – Andrée Ehresmann
Simple, Complex, Super-complex Systems – Ion Baianu
What is a complex system? – James Ladyman, et al.
The Calculi of Emergence – James Crutchfield
Modularity in Development and Evolution – (eds.) Gerhard Schlosser, Günter Wagner
Towards a Theory of Development – (eds.) Alessandro Minelli, Thomas Pradeu
Functions – Philippe Huneman [Probably one of the best collections on the new wave of functionalism informed by complexity sciences.]
Developing Scaffolds in Evolution, Culture, and Cognition – (eds.) Wimsatt, et al.
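Since Lyapunov exponents come up more than once in the list above, here is a minimal toy sketch of the idea (my own illustration, not drawn from any of the books listed): the largest Lyapunov exponent of a one-dimensional map can be estimated by averaging the log of the absolute derivative along an orbit. The logistic map at r = 4, where the exponent is known analytically to be ln 2, serves as a sanity check.

```python
import math

def logistic_lyapunov(r, x0=0.3, n_transient=1000, n_iter=100_000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x_{n+1} = r*x*(1-x) by averaging log|f'(x_n)| = log|r*(1-2x_n)|
    along a single orbit."""
    x = x0
    for _ in range(n_transient):      # discard the transient so the orbit
        x = r * x * (1 - x)           # settles onto the attractor
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n_iter

# At r = 4 the map is fully chaotic; the estimate should be close to ln 2.
lam_chaotic = logistic_lyapunov(4.0)
# At r = 3.2 the orbit settles on a period-2 cycle, so the exponent is negative.
lam_periodic = logistic_lyapunov(3.2)
```

A positive estimate diagnoses sensitive dependence on initial conditions; a negative one, convergence to a periodic attractor. This is, of course, only the crudest entry point into the scaling and hierarchization questions the books above pursue.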
As I mentioned previously, the posts on this blog will be more like compilations of overextended post-it notes than carefully crafted papers. It is like lifting the hood and letting other people look at the fragile components of a prototype engine you have designed and see how those components tentatively hang together, even though you have just noticed that the engine not only works at a sub-optimal level but that someone else has already made a better one.
Much like engineers, who are always obsessed with how crazy contraptions work, philosophers are curious about how people think, particularly those whose activities in one way or another involve modeling the world, making systems and concepts about it. Having come from an engineering background and being a philosopher, I am doubly fascinated with how other philosophers or theorists (whether scientists, artists, political theorists, etc.) model their worlds and conduct their research and practice so as to arrive at a specific set of theoretical, practical or aesthetic claims. This fascination—call it a chronic cognitive voyeurism—is centered on the implicit know-hows and know-thats behind theorists’ works and visions as well as on the structural flaws in their thoughts and Weltbilder. The fault-finding tic comes not just from the philosophical penchant for criteria of truth but also from the engineering perspective of whether an argument, a system of thoughts or a world-picture can be examined, optimized or revised, or whether it should be discarded altogether. If it can be revised, then what does it take to revise it? A comprehensive revision requires accessing certain information about the structure (roughly speaking, how things hang together in the broadest possible sense) of the said system or world-picture. Now it becomes obvious that the question of revision or optimization is not easy after all. For how can one at the same time inhabit a system or a model and examine or, more precisely, make well-formed sentences about its structure so as to be able to revise it? This is a Gödelian puzzle and the question from which Rudolf Carnap begins his work post-Aufbau (much more on this issue in the toy philosophy universes series and in future posts on Carnap’s revolutionary work; until then I draw your attention to the recent brilliant work of Steve Awodey on Carnap).
Let me reformulate the above puzzle with regard to something like language. If our natural languages are at base anchored in our intuitions in the Kantian sense; if, furthermore, our intuitions (say, of space and time) are bound to a certain type of transcendental structure; and if, again, these transcendental structures (e.g., memory, perception of space, natural language, locomotory mechanisms, etc.) are the products of our own local and contingent constitution or evolution as subjects, then how can we know that our objective descriptions of phenomena in the world are not simply the overextension of our local and contingent characteristics? How can we ever step outside the limitations of our natural language or the egocentric framework? Ergo, Wittgenstein’s problem: the limits of our world are the limits of our language, and insofar as our natural languages, or for that matter any representational system, are shackled, just like Kant’s intuition, by a particular transcendental type, our objective descriptions of reality are doomed forever to reiterate a variation of the local and contingent characteristics of our experience and nothing more. Kantian philosophers have the habit of understating the consequences of this epistemic hell, but just because they downplay its significance does not mean that this hell does not exist.
This problem, of course, is nothing new. Ludwig Boltzmann offers a shining example of it toward the end of Lectures on Gas Theory (§89-90) with regard to our intuition of time and the possibility of arriving at new facts of experience (a detailed discussion on this problem is in the forthcoming Intelligence and Spirit).
Even Kant himself in the Critique of Pure Reason (CPR), inadvertently, poses a similar question. In the section concerning general remarks on the transcendental aesthetic, Kant says something along the lines that there might be other beings in the universe—i.e., species inhabiting other transcendental types—whose representations of space and time as forms of appearances differ from ours (A42/B60), but that it would be an exercise in dogmatic metaphysics to speculate about how such different intelligences intuit the world so as to see or conceive it as something dissimilar to or even incompatible with our conception of the world. In other words, “[we should solely be concerned with] our way of perceiving them [i.e. things-in-themselves], which is peculiar to us, and which therefore does not necessarily pertain to every being, though to be sure it pertains to every human being.” (emphases are mine) Kant then, in later chapters, goes on to say that any conception of transcendental logic should always respond to the fact of our experience in order for it to be objective. One can give two different readings of these passages in CPR. One would be a charitable reading and the other an excoriating one. According to the charitable reading, Kant is of course right. We cannot immediately step outside of our own forms of intuition and transcendental structures to talk about other beings and their models and conceptions of the world, universe or reality in its objectivity. Any model of other intelligences we provide is going to be modeled on our own theoretical and practical reasons. Any empirical description of such intelligences or beings will bear the characteristics of our own transcendental structures to the extent that the scope of empirical descriptions is dictated by the scope of transcendental structures. In talking about other intelligences or minds, we are always forced to explain and justify why we interpret such beings as intelligent or minded.
That is to say, without an explicitly stated criterion of what we mean by intelligence or mind, our characterizations of other beings as intelligent or minded are nonsensical. We might just as well talk about angels dancing on the head of a pin, cyber-popeyes, ineffable superintelligences and lava lamps in noumenal harmony with other kinds of stuff. My friends Ray Brassier and Pete Wolfendale have done a more than sufficient job of exposing this pseudo-posthumanist and object-oriented gibberish as the latest exercise in speculative farce.
Now for the non-charitable reading, which is the one I am interested in, though without dismissing or rejecting the charitable reading just outlined: even if we accept Kant’s remarks on other beings qua transcendental types as a reasonable cautionary tale that we should all take to heart, there is still something missing in this story. For Kant at the same time attempts to underline the particularity of our transcendental structures (forms of appearances in particular) and to emphasize the universality of our objective descriptions of reality. This would be non-controversial if Kant’s definition of universality simply meant ‘as a matter of a necessary rule’, but we know that what Kant on many occasions in CPR means by universality is universality with a capital U, i.e., our description of the universe or reality is Universal in a stronger sense. That is to say, it is the objective description. But with all due respect to the transcendental luminary of Königsberg, how is this even possible? How can we at the same time endorse the particularity of our transcendental type and claim to have Universal objective descriptions of the world of which we are part? Herr Kant, surely you don’t expect us to simultaneously believe in the possibility of different transcendental types and also rule them out in favor of our own merely given transcendental type on the grounds that talking about such possibilities is an exercise in armchair speculation? Because if that is what you are saying, then you are nothing other than a benighted humanist, an errand boy of the given. You claim that our objective descriptions are universal only to the extent that you do not know the exact limitations of our transcendental particularity, because you think that our specific form of intuition should be the sole concern of our philosophical inquiry. In other words, you have already gerrymandered the boundaries of our particular subjectivity, turning them from locally bounded to universally unbounded.
In this framework—the Kantian straitjacket—how can we ever know that our empirical descriptions of the universe in its objectivity are not simply the overextension of the particular given characteristics of our own transcendental structures, which may in fact be local and contingent? Even more insidiously, even when we sufficiently differentiate the local and contingent characteristics of our forms of intuition from the objective descriptions of phenomena in the universe, what guarantees that the bracketed characteristics of our intuition do not encroach upon and distort our rational scientific models of the world? This is the very problem that vexed Boltzmann’s later work in the context of bridging statistical and thermal entropy, micro- and macro-scales. To paraphrase Boltzmann: if renewing our relation with objective reality requires renewing the structure of our experience, then how can we at the same time exclusively inhabit a particular structure of experience and arrive at new facts of experience? Simply put, how can we be in X-space and see the world beyond it? Certainly achieving the latter requires us to adopt a new transcendental type, one that is at once the subject of speculation and in contiguity with our existing transcendental resources (i.e., intelligible). To sum up, our model or conception of the world ought to be both in contiguity with our given resources and beyond the limitations imposed by our particular transcendental structure. As we now see, this problem is not only serious on theoretical grounds but also quite difficult to tackle on methodological grounds. However, this is not exactly the topic I would like to focus on, even though it can be seen as a primary motivation behind what I shall explore in future posts.
The subject that I would like to write about is the metatheory of the practice of philosophy or, if you prefer—with some caveats that I will hopefully elaborate in the future—the metatheory of theorization. Essentially, I would like to talk about the labor of modeling, but also about toy models and eventually toy philosophical universes as systematic models in which the aforementioned Kantian problems don’t simply go away but are nevertheless mitigated. So, with that said, this post is the first installment in a long series in which I will talk about modeling, about the relations between models and cognition, and more importantly, about the kind of models that might rescue us from the current quagmire of philosophy and the idleness of thought. Hopefully, the connections will become clearer as we move forward.
While I’m writing this rather overextended post on modeling, complexity and generalized pedagogy:
Perhaps it is because of my barbaric background and all—being a Middle Easterner to the core—but I really cannot figure out what this fuss about Jordan Peterson is about. From what I have seen so far, the majority of criticisms come from people who are fine with greedy naturalism but unwilling to embrace its consequences. It is like being a follower of David Icke or a fan of Land and then being surprised that your worldview at the end of the day degenerates into lizard epics and adolescent intergalactic skynet battles, fought by white males with zero scientific literacy, dressed in elf uniforms, larping the good fight for a messianic capitalism.
This has always been the bane of greedy naturalization: once you arrive at an inadequate concept of nature, one that does not allow you to move upward toward the autonomy of reason and downward toward the heteronomy of causes—the sapience and the lupus, to use Plautus’s phrase—then such scenarios are inevitable. Naturalization was always supposed to be a two-way street, moving toward both autonomy and heteronomy. These days, however, with the advent of neoliberal science, you only get a dead end. Any paradigm of naturalization that endorses a one-way movement suffers from an inadequate concept of nature.
My friend Tahir Al-Tersa came up with a great comment in response to my initial reaction that should be quoted in its entirety:
At first I didn’t understand why Peterson was becoming so relevant either, my thoughts were the same – particularly because not only was what he was saying the logical conclusion of already very common presumptions, but because it wasn’t even a particularly unique conclusion. But it’s clear Peterson isn’t popular solely because of the intellectual contents of his position – the same goes for individuals like Milo, etc. – it is principally because he was able to align them with a real political/cultural position many already have. By proving the ability to argue these ideas on popular mediums such as television, and make sense of worldly controversies with them in a way that penetrates popular consciousness, they take on a power that is in excess of their actual contents. And in my view, it would be mistaken to underestimate the extent to which those of us immersed in the intellectual sphere are all in a way beholden to this ‘lower’, more vulgar one.
I agree with Tahir wholeheartedly. It seems to me that the image of the public intellectual today has become almost synonymous with this brand of greedy naturalism (sappy human conservatism and mysticism peddled under anti-humanist evolutionary fables and bravado). To use Lucca Fraser’s term, greedy naturalism coincides with supernatural mysticism. Once you develop a sloppy, unscientific conception of nature, you almost invariably become liable to endorse a supernatural thesis about the world of which we are part. In other words, Peterson’s greedy naturalism and his Jungian supernaturalism are two faces of the same coin. If there is a viable image of the leftist public intellectual that ought to be endorsed in opposition to the right-leaning public intellectual, it is that of an intellectual who takes nature as anything but given—that is to say, nature not as a fixed entity or something god-given but as a manipulable or constructible explanation in the space of n-hypotheses.
This is to say that the leftist public intellectual today should be the very child of scientific enlightenment, at once observant of the negative socio-cultural baggage of the enlightenment project and devoted to its core commitments i.e. science and rationality, broadly understood.
A friend of mine, Alice Sinclair, cautioned me about the category of rationalism: today’s liberalism is rationalist, and people like Peterson are simply the inevitable consequences of the liberal brand of rationalism in which reason is treated as a ‘teleo-ideological force legitimizing the status quo’. This is true, but it is another reason why the left should reclaim the collective conception of rationality. The unbinding of the collective conception of reason is in fact tantamount to the negation of the liberal recognition of reason as an Aristotelian telos that safeguards the legitimacy of the established order of things. How can it be reason if it does not recognize its own limitations here and now? So in a sense, when I say reason is necessary but not sufficient, I mean that true rationalism should coincide with communism as the real movement that abolishes the so-called completed totalities of history and overcomes the status quo. This includes the parochial conception of reason that protects the liberal polity. I think one of the tasks of leftism today is precisely to wrest rationalism from the liberal conception of reason as a teleological force, to demonstrate concretely that the project of reason is the negation of such ideologies erected in favor of the status quo and whoever or whatever represents it.
To this extent, I don’t see a viable alternative other than scientific rationality. To those leftist comrades who fear even the remote mention of the words reason, science or computation: The trash bin of history is waiting for you!
To be a leftist means to endorse history as science (Marx), to take the idea of critical and rationalist science seriously. Ultimately, I believe Tahir is right. Nothing is going to change in the public arena unless the left puts forward its own public intellectuals who are not afraid of science (broadly understood) but fully submerged in it. To use McKenzie Wark’s term, what we need is vulgar marxism (not to be confused with the kitsch marxism that I have criticized in the past)—that is, a popular marxism, vulgar in the positive sense: of the people. We need vulgar leftists who can once more bridge the gap between science and egalitarian ideals, who can demonstrate that the ideas of Peterson and his ilk are not just ethically problematic but, above all, patently false on scientific and methodological grounds. Short of that, we are in every respect doomed.
I have decided to finally resume blogging. This time, however, I plan to focus on various threads—some still loose and some already converged—of my philosophical research. I believe that ideas should be handled impersonally, particularly in science and philosophy. For this reason, I am not convinced about keeping the components of ongoing research secret. If people can build on your ideas even when those ideas are still in their larval stage, then it does not matter whether they reference you or not. As long as ideas and concepts can be enhanced, refined and propagated, plagiarism is a virtue rather than a vice. The task of a philosopher is to highlight the hard fact that the concept is that over which no single human has a final grip. Therefore, the whole obsession with working in secret, keeping things in the closet until the book is published, is absurd. To take the concept of open-source seriously, one must first take the idea of an open-source self seriously. In this sense, we are far from the Wulfian ideal of a global collaboratory, even though the internet has effectively knocked down some of the walls. So to this end, the blogging medium gives me the right amount of control in conducting my research and opening it to people who pick up ideas as tools so as to make better tools that can be put in the service of thinking in general.
So what will the future posts on this blog be? Currently, I am planning to allocate a major part of it to the systematic philosophy of mind, that is, a family of fundamental correlations: intelligence and the intelligible, structure and being, theory and object, language and the world. In this sense, what I mean by the systematic philosophy of mind is in truth systematic philosophy (the organon of theory) itself, formulated in different forms from Plato and Confucius to Descartes, Hume, Kant and Hegel, and more recently Lorenz Puntel and Uwe Petersen. Within this framework, I would also like to write about philosophy of science (particularly my heroes Wolfgang Stegmüller and Adolf Grünbaum), logic and computation, and Euclid’s Elements. The latter is more a personal interest of mine than a topic explicitly fitting the aforementioned framework. However, I believe Elements is the first work that attempts to integrate formal thinking and systematic thinking and, in doing so, opens new pathways to the questions of structure and theory. Having taught Elements in a number of courses, I always tell my students that they should engage with Elements not only as a mathematical treatise but also as a philosophical thriller, an exercise in making worlds and concepts using a handful of naive intuitive axioms or data. In this respect, the plan is to build on some of the best commentaries on Elements as a kind of toy philosophy universe (much more on this topic in the next post). My immediate references are Proclus’s commentary as well as the seminal essays by Kenneth Manders and Danielle Macbeth.
In addition, there will be some posts on the ascesis of autodidacticism, particularly for those who are bent on becoming philosophers and surviving in a para-academic world where finances are always close to zero, standards are clouded by hatred of academia and rigor is still a taboo word, yet where ideas nevertheless do not reek of the stale dungeons of academia. As for form and style, well, the posts will oscillate between formal and informal, essay-form and rambling, preaching and scolding. In short, this blog’s mission is the comprehensive corruption of the youth.