AshvinP wrote: ↑Fri Nov 08, 2024 7:28 pm
Clearly Levin would say that having a better bodily instrument will also modulate our psychic curvatures, i.e. make us more content, peaceful, satisfied, etc.
Of course he would say that. Yes. But does that make any difference at all to the fact that these effects on the psychic symptoms of discontent, restlessness, and dissatisfaction are obtained from without, whilst the real freedom of modulation of psychic curvatures can only be attained when these curvatures are inwardly experienced? Not an ounce of spiritual freedom is added to human agency by means of mere bodily augmentation. It’s like with psychedelics: the very muscles that would need to be sensitized and developed are numbed. And I find it out of place to draw far-fetched Steiner parallels, when we know too well where his views point, in terms of healing the body-soul-spirit.
Yes, of course it makes a difference, and that is also what I have been highlighting.
You asked about my associating his underlying feeling with the Christ impulse, because I suppose the mere mention of it made it seem like I was equating his approach to the spiritual scientific one (even though I have unequivocally detailed the flaws in his approach on numerous recent occasions). I have presented some concrete parallels between the Christ impulse and ML's feeling-based lines of reasoning. I also presented the Steiner quotes showing that ML's feeling for the meandering paths of the soul's post-Fall evolution and the resulting discontent with its bodily instrument is very intuitive. Of course, he (like everyone else unacquainted with Initiatic science) completely misses the wider context in which these intuitive facts present themselves, and that indeed leads to infernal goals, as I myself have detailed in various posts. That is always the result when a feeling for the Christ impulse lacks more integrated knowledge of trans-incarnational evolutionary rhythms.
You have not substantively commented on any of those points yet, except writing them off as a tendency to 'smooth continuity' and calling them 'far-fetched'. So I'm not sure if you are interested in objectively exploring those lines of reasoning for our benefit in the here and now (not to reach definitive conclusions about whether ML = Christ impulse), or not. Until you show that interest, I will avoid commenting further on this section of the discussion.
I still plan to reply to the other ML section.
Re: Cell Intelligence in Physiological & Morphological Spaces
Posted: Sat Nov 09, 2024 1:34 pm
by Federica
AshvinP wrote: ↑Sat Nov 09, 2024 1:09 am
You asked about my associating his underlying feeling with the Christ impulse, because I suppose the mere mention of it made it seem like I was equating his approach to the spiritual scientific one (even though I have unequivocally detailed the flaws in his approach on numerous recent occasions). I have presented some concrete parallels between the Christ impulse and ML's feeling-based lines of reasoning. I also presented the Steiner quotes showing that ML's feeling for the meandering paths of the soul's post-Fall evolution and the resulting discontent with its bodily instrument is very intuitive.
(...)
You have not substantively commented on any of those points yet, except writing them off as a tendency to 'smooth continuity' and calling them 'far-fetched'. So I'm not sure if you are interested in objectively exploring those lines of reasoning for our benefit in the here and now (not to reach definitive conclusions about whether ML = Christ impulse), or not. Until you show that interest, I will avoid commenting further on this section of the discussion.
Ashvin,
No, I didn't ask about underlying feelings in particular. And no, it didn't seem like you were equating his approach to spiritual science. As I have abundantly said in the part of my post that you have left out:
Federica wrote:I agree that those impulses are collective and shared, in a concrete way, and I try to keep that constantly in mind. Surely the way I have written the posts above may make me sound judgemental and distancing, I am aware of that. But in the exact same way that you rightly highlight the fact that we all partake in these evil impulses, and that we should not feel immune and above them, I highlight that my critique is only contingently directed to ML as an individual. It is first and foremost the impulse, the ideas themselves, and the feelings themselves, that need to be called out, clarified, and made as conscious as possible, in all their conceptual surroundings as well.
I have certainly no hard feelings for the singular personas of ML, BK, and others. I rather deeply empathize with them, probably more than you can imagine. But I keep these things to myself. I don't see the usefulness of continually softening all the edges, continually trying to match what does not match. I think this attitude makes the understanding of the emotional and conceptual tensions, hence their recognition within ourselves, just way harder. The edges and the contrasts help characterize and recognize these key junctures. For example, the fact that ML repeatedly uses the words "bully" and "dump" has definite relevance. When I point that out, it helps us understand the feeling-substrate from which his overall pursuits are steered. Now, when I use these same words on purpose, and for example say that ML wants to "bully" the cells to do this or that, it sounds judgmental and severe - I know.
I choose to do it, not as a sign of condemnation of an individual scientist, but because it is necessary to point out a peculiar state of mind which is very informative, and highly relatable, once noticed. Conversely, when the key lines of force are smeared out - the pinnacle being reached when it is considered equally relevant to call those goals infernal and also an expression of the Christ impulse - the chances of gaining a vivid and relatable understanding are strongly diminished. Again, this doesn't mean that, just because I don't sing someone's praises in these posts, I also don't experience and nurture additional thoughts, and additional feelings, in relation to them. To insinuate that the impressions gathered from reading my posts are the exhaustive reflection of my entire attitude is insensitive, to say the least.
So I find your approach counterproductive. By continually attenuating vocabulary, concepts, and intents, and by continually suggesting-forcing Steiner into saying ambiguous things he didn't say - at best adding attenuated caveats in brackets - this approach blurs the possibilities of clear understanding. It erodes the chances to become keenly aware of those ideas and feelings that we need to distinguish, not blur.
Just because I haven’t written about the collective valence of the impulses that find expression in ML’s research doesn’t mean that I ignore our, and my own, individual share in those attitudes, and it doesn't mean that I despise, or don't empathise with, these scientists as individuals. Far from it.
In fact, you make it heavy and difficult to even come to the vicinity of discussing these things. When you keep everyone busy telling things apart, because you continually muddy the waters, add Steiner quotes out of context and out of content, only to serve your intention to round all edges, smudge all lines, and match here and now all opposites that don’t match, you make it cumbersome and laborious to even come to the same page as a point of departure. Expectedly, your own understanding ends up blurred too, as we see in the question of ML’s reductionism or non-reductionism. I will add that I am particularly saddened by the way you have recently come to utilize Steiner quotes, in this and in other threads. Similar to how Levin wants to “bully” (his choice of vocabulary for cells being guided towards goals from the top down) the lower layers into executing on new goals, you bully strings of words that you find in Steiner into serving the intention you have committed to, namely to reject all clear characterization, and find smooth, "redemptive" continuity in all opposites, now even to the extreme point of 'finding' the Christ impulse within a research approach that is driving towards the realization of infernal goals, by making parallels that are literally misleading.
In these conditions, no wonder I have not “shown interest” in a larger discussion of the impulses that we, as a human collective, express in research of the kind ML conducts. You are continually throwing heavy obstacles onto that path. I am getting tired of this labour.
Re: Cell Intelligence in Physiological & Morphological Spaces
AshvinP wrote: ↑Fri Nov 08, 2024 7:28 pm
I will need to think about this more, because it's not clear to me how Levin could simultaneously feel the higher-order spaces are associated with cognitive perspectives exhibiting unique lawfulness (not reducible to elementary rules), and also feel that there is no overarching intention guiding evolution (and thus our current psycho-physical state) but rather that we were dumped where we are by meandering, semi-random processes that we owe no allegiance to. Right now, it seems to me that Levin may intellectually think of himself as a "non-reductionist" and call himself a "non-reductionist" when asked explicitly, but the feelings guiding his thoughts and future research are quite reductionist for the reason Cleric mentioned - he doesn't suspect cognition can be active with its gestures beyond the intellectual plane.
Nonono: if his research were guided by reductionist feelings or thoughts, he would simply never have discovered the properties he has discovered. It is precisely the choice of leaving behind the reductionist approach that allows him to have such groundbreaking experimental results. The non-reductionist perspective guides him towards running certain specific experiments that a reductionist would never have conceived. I think it goes like this:
1. ML is only sure about what experiments can show - and about his own arbitrary perspective, but I won’t digress now.
2. The experiments tell him that there is cognition at every tested level. As he says, this is a fact that remains veiled for the reductionist, just because reductionism doesn't lead to running the right experiments. Only when the experiments are informed by a non-reductionist perspective can these cognitive properties emerge, like for example the capability of a skin cell to adapt to a new artificially imposed goal (from a higher level) and learn how to become another kind of cell with new behaviors and new problem-solving capacities. Cognition is a continuum of abilities to adapt, optimize and coordinate behaviors towards the pursuit of goals. These goals are not simply built into the physical-chemical properties characterizing the level in question but are pivoted into that level from above. For example, a cell, or a set of cells, doesn't just roll along under the bottom-up effect of its physical-chemical constitution, but has additional veiled cognitive capacities that only emerge when a new goal is assigned, under new environmental conditions. If you don't test it for that, the cognitive properties remain invisible.
3. As far as I know, the levels ML is currently able to experiment with are limited to human level and below (animals, organs, cells, …, proteins, …, AI, algorithms, ...)
4. ML infers that the same principle - no privileged scale of causation - may be true at any level, including the higher levels above man as well: society, language, culture, ..., universe, .... Still, he has no experimental results there, so he doesn't know at the moment, nor - I would add - is he super interested, since what he wants is primarily to "bully" the lower layers (man and below) into working towards his goals, rather than letting them "be bullied by meandering evolution". And the way to obtain that is of course to alter - augment - the embodiment of these cognitive agents, be they at the human, animal, cell, or other level.
I agree ML has the unique quality of resisting the tendency to let any philosophical framework prevent him from conducting his intuitive activity in new ways, to devise new research questions and experiments. To be clear, I'm not at all decided on ML's inner stance and it seems, precisely because he doesn't vocalize any specific philosophical framework that he thinks is supported by his work (except the broad fact that everything may be experiential/cognitive in nature), it's difficult to tell how he is thinking about the research results at any given time.
With that said, I think Cleric's blog post to ML may have pointed to possible reductionist ways of thinking that would still make sense of why he takes an open-ended research strategy. Let's be clear that there would be no reason for Cleric to write that particular post if all you said above is also 100% the case. He was pointing to the possibility that ML is subtly undermining his own principle of "no privileged scale of causation". I think that's very clear from the post:
Cleric wrote:If we imagine that higher-order processes are only the surprising behavior of simple ground rules iterated over and over, we have to do away with the idea of causative agency at these higher orders. Any such first-person sense of causative agency would have to be understood as an illusionary macro view of the ground rules which alone are responsible for the total behavior. There’s nothing in what a higher-order agent is ‘doing’ that steers the flow in a direction that is not already fully driven by the simple ground rules themselves. There’s nothing in the way the state is organized (whether there are higher-order forms or not) that feeds back on the way the ground rules are applied. We can, of course, devise a more complicated ‘meta’ rule system, that at each step analyzes the state for higher-order structures and applies different rules accordingly, but are we really approaching in this way the *reality* of inner experience, or are we simply creating an intellectual monstrosity that is so general that it can eventually capture any possible form of computation (Wolfram’s Ruliad comes to mind)? In any case, no matter how complicated and convoluted our computational model is, it is still *flattened to a single plane of causation*. This plane is really the plane of our intellect. We are tempted to flatten the multilevel causative scales to a single scale because then our thinking being can fully ‘incarnate’ in that single plane and pretend that it understands how the illusions of other causative planes emerge. In other words, if the intellect is to ever be fully satisfied with its picture of reality, it needs to see all other planes as fully projected within its own plane (as mental images), and correspondingly project all causative forces within its own such that they can be mimicked by intellectual movements. Thus the initial insight of ‘no privileged scale of causation’ is undermined – it turns out that the intellect reduces all planes to movements of mental images in its own plane. Thus the idea of truly causally creative agencies at other scales becomes superfluous.
The functional reductionism always results from the mind container perspective, where the dynamics observed are understood in terms of our familiar intellectual gestures. This is a highly unconscious stance and the person may very well feel like they are attributing unique causative agency at different scales, but functionally the scales all reduce to the intellectual plane of manipulating, combining, relating, etc. scientific-mathematical concepts, which can be simulated through iterations of CA/CGOL.
It's difficult enough for us to sense our own intellectual gestures, since they are generally merged tightly into the background while we conduct our philosophical-scientific thinking. It becomes almost literally impossible to imagine different sorts of inner gestures, for ex. the Archangels or Time Spirits, that would bend the flow at higher scales. That is true even if we are already acquainted with esoteric science and have explored these distinct orders of Be-ing conceptually - we may still find we are often imagining their flow-bending activity as something similar to our human scale of setting goals, perceiving things, manipulating objects, and so on. Let alone if we haven't really explored such ideas in detail. Then it turns out every 'unique scale of causation', whether imparting novel goals from above or below, is simply an elemental patchwork of familiar inner gestures.
This view would also tend to make sense of why ML is so ready to attribute qualities like 'delayed gratification' (sacrificial behavior) to algorithms and some kind of creative agency to upcoming AI systems. That would be unsurprising if "creative agency" is nothing more than the 'surprising behavior of fuzzy ground rules' when iterated enough times. The entities/processes governed by the ground rules are imbued with elementary cognition (similar to panpsychist philosophy) and the human scale, for ex., is simply a patchwork of such cognitive processes that has passively awakened at a different scale. Again, I'm not saying this is necessarily the case for ML, but I don't think it's a closed book, either. I also think experience shows that this is the default way philosophical-scientific thinking trends over time, the longer the first-person experiential perspective of spiritual activity remains in the blind spot.
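As an aside, to make the CA/CGOL reference above concrete, here is a minimal sketch (my own, using the standard Conway rules - nothing here is from ML's or Cleric's code) of how 'simple ground rules iterated over and over' produce a glider, the classic example of a seemingly coherent higher-order 'agent' whose every step is nonetheless fully determined on the single flat plane of the rules:

```python
import numpy as np

def step(grid):
    # Count live neighbors by summing the eight shifted copies of the grid
    # (np.roll wraps around, so the board is a torus).
    n = sum(np.roll(np.roll(grid, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    # Conway's ground rules: birth on exactly 3 neighbors, survival on 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

grid = np.zeros((8, 8), dtype=np.uint8)
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:  # the classic glider
    grid[r, c] = 1

for _ in range(4):  # after 4 iterations the glider reappears shifted by (1, 1)
    grid = step(grid)
print(grid)
```

Watching the glider 'move', it is tempting to speak of an agent with a heading and a goal - yet nothing in that macro view feeds back on how the rules are applied, which is exactly the flattening described above.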
Re: Cell Intelligence in Physiological & Morphological Spaces
Posted: Sat Nov 09, 2024 3:41 pm
by Federica
To somewhat depersonalize this topic and take away some 'pressure' from Levin, I am reading an Aeon short essay, published this week: "Elusive but everywhere - Everything in the Universe, from wandering turtles to falling rocks, is surrounded by ‘fields’ that guide and direct movement" by biologist Daniel McShea, presenting a theory very similar to ML's. This is called "field theory". Here the morphological spaces are called fields.
Consider a question that still perplexes biologists. Why are your arms pretty much the same length? Genes inside the cells of a developing left arm have, by themselves, no information about the length of the developing right arm. This means that, unless tightly controlled, the cells in one arm might divide a bit faster than those in the other. This kind of variation occurs all the time in the development of organisms. If such variation is possible, then how do our arms grow to the same length? The answer is not yet known. One strong possibility is that some field exists – biochemical or even electrical – which is in touch with both arms, encompassing the cells in each. Such a field could persistently guide the growth process toward arms of the same length.
The simplified explanation above barely begins to account for the full complexity of fields in goal-directed systems. In embryos, there are multiple fields at the scale of the entire developing organism directing various tissue-level mechanisms inside. In turn, those tissues can also act as fields, directing the cells within them. And these cells in turn can also act as fields, directing various molecule-level mechanisms inside them, and so on. In the most complex systems, multiple levels of entities are nested within multiple fields. This telescoping of levels extends upward as well. Whole organisms are nested within local ecological fields, which in turn are nested within larger ecologies, and so on.
and:
Finally, we turn to the most speculative application of field theory: human wants and intentions. If we’re right, then things like human culture and psychology – alongside all goal-directed phenomena – also involve direction by fields. That would mean there needs to be a hierarchical structure to human wanting. And this structure does seem to exist. Looking down, our cells and tissues have the same nested structure as other multicellular organisms. Looking up, we are individuals nested within and directed by small social ecologies (marriages, families, friend groups, etc), which in turn are nested within and directed by larger ones (economic, political and cultural entities), and so on.
Interesting, as the essay also explicitly refers to fields as regulating and goal-setting agents at the higher hierarchical levels, above man. Then the train of thought gets caught in tricky loops, such as:
This view of the mind, which dates back to the 18th-century philosopher David Hume, posits that our wants direct everything we deliberately think, say and do. Hume called our wants the ‘calm passions’ because, for him, thinking, speaking and acting are purely passive processes, having no goals of their own. We take a similar view: when we deliberately think about, say or do something, it is because some field, some want or intention, has motivated or directed us. Fields, and fields alone, motivate and direct.
and, on the inevitable question of free will:
But this view, which sees determinism and free will as being at odds, is mistaken. According to a philosophical school called compatibilism, even if the world is perfectly deterministic, freedom is perfectly possible. Field theory is a kind of ‘compatibilist’ explanation of goal directedness.
According to our theory, freedom is direction by the fields within us. There is a temptation to regard direction imposed on us from anywhere as the opposite of freedom, but field theory reminds us that many imposed fields are our own wants and are, therefore, quite literally, parts of us. And when wants originate inside us, they are our wants, and the decisions they motivate are our decisions, regardless of whether they are determined by the external world and the fields that make it up. In this view, freedom is not the total absence of deterministic causation – it would make no sense to be free of your own wants and intentions. In a very real way, your wants and intentions are you, and no one wants to be free of themselves.
The bolded claim - that "it would make no sense to be free of your own wants and intentions" - is where the reasoning hits the wall, because freedom in its true sense certainly entails that it makes complete sense to be free of our own wants and intentions, as we know. And that's the whole difficulty.
Nonetheless, ideas such as the ones elaborated in this essay go to show how these impulses are becoming emergent in current science, pulling it away from a purely bottom-up understanding of reality (though still in the intellectual form of external modeling).
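As a side note, the arm-length example lends itself to a toy numerical model (my own sketch, not McShea's; the 0.05 noise and 0.1 gain are arbitrary): two 'arms' grow with independent noisy rates, and a shared 'field' term that is in touch with both nudges each toward their common mean. With the field switched off, the length difference drifts like a random walk; with it on, it stays bounded:

```python
import random

def grow(field_gain, steps=10000, seed=0):
    rng = random.Random(seed)
    left = right = 0.0
    for _ in range(steps):
        left += 1.0 + rng.gauss(0, 0.05)   # noisy local cell division
        right += 1.0 + rng.gauss(0, 0.05)
        mean = (left + right) / 2
        # The 'field' senses both arms and corrects deviations from the mean.
        left += field_gain * (mean - left)
        right += field_gain * (mean - right)
    return abs(left - right)

print("no field  :", round(grow(0.0), 2))  # difference drifts apart
print("with field:", round(grow(0.1), 2))  # difference stays near zero
```

Of course, such a model only restates the puzzle in code - it says nothing about what the field is or where its goal comes from, which is precisely the open question.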
Re: Cell Intelligence in Physiological & Morphological Spaces
This view of the mind, which dates back to the 18th-century philosopher David Hume, posits that our wants direct everything we deliberately think, say and do. Hume called our wants the ‘calm passions’ because, for him, thinking, speaking and acting are purely passive processes, having no goals of their own. We take a similar view: when we deliberately think about, say or do something, it is because some field, some want or intention, has motivated or directed us. Fields, and fields alone, motivate and direct.
Thanks, this is indeed a great parallel to what was being discussed through the prism of ML. Leaving the latter aside, this is exactly the functional reductionism that happens when the intellectual modeling is relied upon as an "explanation" of goal-directed behavior, i.e. the behavior at all scales is flattened/reduced to familiar intellectual gestures. Our real-time activity, the very activity that is weaving together the explanatory model, is made into a passive aperture of its own model, i.e. the external nested fields that are guiding impulses, wants, etc. that result in the playback of our thinking, speaking, and acting.
Schopenhauer was astute to observe that "every man mistakes the limits of his own vision for the limits of the World", which could also be conversely applied: "every man mistakes the possibilities of his own cognition for the possibilities of the World". In other words, because my thinking-speaking-acting is highly conditioned by shadowy wants and motivations, this must be the essential nature of thinking-speaking-acting at the human scale. The latter imparts no goals of its own, not even the goal of trying to understand the role of imagined 'nested fields' in constraining its activity. All it would take is a flash of real-time insight into what it is doing to realize there is already an intimate place where it is imparting goals of its own.
Nicholas Smith had a relevant Substack post on this as well:
The Mechanism of Passions: Logismoi
In Orthodox thought, the concept of logismoi (complex thoughts that lead to passions) helps us understand how reactive behavior takes root. Each logismos arises as a thought coupled with an inclination or disposition toward an image that enters our mind—for instance, an image of a co-worker who called in sick today coupled with frustration over having to make up for her absence at work, or an image of a bigger house with a bigger yard that, if you had, would finally make you happy. When such a thought appears, unless we are practicing attentiveness, it begins to "play" out in the theater of our imagination. Just like when we’re sitting in a theater, we suspend our disbelief and automatically equate the inclination or attitude it arouses in us with our own desire, feelings, or judgments.
This "suspension of disbelief" transforms what could have been a mere fleeting image into something we perceive as necessary to our nature. However, these impulses—these reactive, disordered desires—are not inherent aspects of our identity. They start as logismoi, and by engaging with them, we allow them to implant themselves in our hearts, eventually becoming habits or instincts. The more we entertain these complex thoughts, the more they solidify into patterns that dictate our responses, often without our conscious consent.
To make this concept more tangible, consider a practical scenario: imagine seeing a friend's success on social media. A simple thought might be to acknowledge their achievement. However, a logismos might couple that image with a feeling of envy, suggesting that you are somehow lacking by comparison. If entertained, this logismos becomes a habitual response, reinforcing feelings of inadequacy and envy over time.
If we gain a better intuitive orientation to the logismoi at work in our philosophical-scientific thinking, namely some habit of reducing the nested causal activity onto the intellectual plane of familiar inner gestures (resulting in the feeling that these gestures are simply the playback of external processes, because that is indeed how our familiar experience unfolds), then we can better avoid the reduction gesture that refashions the World in our own current self-image. In other words, we so casually throw away our potential for inner freedom to some theory of 'compatibilism' because we implicitly assume that our real-time philosophizing is already free, not constrained by any hidden assumptions or prejudices.
Re: Cell Intelligence in Physiological & Morphological Spaces
Posted: Sat Nov 09, 2024 7:07 pm
by AshvinP
Federica wrote: ↑Sat Nov 09, 2024 3:41 pm
Nonetheless, ideas such as the ones elaborated in this essay go to show how these impulses are becoming emergent in current science, pulling it away from a purely bottom-up understanding of reality (though still in the intellectual form of external modeling).
Certainly we can see that thinking consciousness is growing into the etheric strata across many different domains of inquiry, which gives a sense that there is some depth of activity behind/within the phenomenal manifestations of life. I think you are correct that these sorts of ideas are proliferating in current science, as they previously did within philosophy, religion, and art.
On the other hand, without an intimate phenomenology of cognition, it becomes increasingly difficult for people to get a proper orientation to this depth of intentional activity. The condensation process by which these new insights flow into consciousness remains tightly laminated into the intuitive background, and therefore all the multiscale dynamics tend to be recast in the mold of familiar intellectual gestures over time.
I have always felt JP is one of the very few modern creative thinkers who guards against this tendency toward functional reductionism. I think that relates to his intimate exploration of cognitive activity, even to the point of exploring the psycho-spiritual underpinnings of the Biblical narratives. He also has a very well-rounded familiarity with the scientific literature across many disciplines and continually tries to integrate those disciplines with the Christ impulse as he sees it continuing to unfold its potential, not only in ancient times, but also in our current time. The latter has become the explicit underpinning of his intuitive orientation to the World flow.
It would be nice if JP could get into a conversation with Levin - I think the latter especially could benefit greatly from it, if he were willing to explore JP's intuitions. Right now, JP is going in the somewhat opposite direction of Levin - instead of finding human-like agency at all lower-order scales, he has explored the way in which we imaginatively think and innovate enough to know what is uniquely human (the superconscious flow), and therefore the general inner axis along which genuine spiritual freedom can be found. That is expressed here, for example:
That clip is part of a larger discussion with the other JP which is highly illuminating and reveals a somewhat finely-tuned intuitive sensitivity for both of them to higher-order spiritual activity (which also provides the intuitive curvatures for elemental goal-directed activity), at least the fact that it cannot be found as contained within the sphere of our familiar intellectual gestures. Rather the latter can only be used as analogical portals that bring us into the ideal vicinity of the former. That is why, for example, they seem very comfortable exploring the hierarchy of Be-ing in terms of sociocultural, mythological, and religious examples, and feel this brings us closer to its true essence than the elemental dynamics we discover via natural science. I think they generally intuit that the latter can only be analogical portals to the higher-order activity in the same sense, and actually that the elemental dynamics are originally rooted in human spiritual activity. That is quite explicitly discussed here:
Re: Cell Intelligence in Physiological & Morphological Spaces
Federica wrote: ↑Fri Nov 08, 2024 9:07 pm
Nonono: if his research were guided by reductionist feelings or thoughts, he would simply never have discovered the properties he has discovered. It is precisely the choice of leaving behind the reductionist approach that allows him to have such groundbreaking experimental results. The non-reductionist perspective guides him towards running certain specific experiments that a reductionist would never have conceived. I think it goes like this:
1. ML is only sure about what experiments can show - and about his own arbitrary perspective, but I won’t digress now.
2. The experiments tell him that there is cognition at every tested level. As he says, this is a fact that remains veiled for the reductionist, just because reductionism doesn't lead to running the right experiments. Only when the experiments are informed by a non-reductionist perspective can these cognitive properties emerge, like for example the capability of a skin cell to adapt to a new artificially imposed goal (from a higher level) and learn how to become another kind of cell with new behaviors and new problem-solving capacities. Cognition is a continuum of abilities to adapt, optimize and coordinate behaviors towards the pursuit of goals. These goals are not simply built into the physical-chemical properties characterizing the level in question but are pivoted into that level from above. For example, a cell, or a set of cells, doesn't just roll along under the bottom-up effect of its physical-chemical constitution, but has additional veiled cognitive capacities that only emerge when a new goal is assigned, under new environmental conditions. If you don't test it for that, the cognitive properties remain invisible.
3. As far as I know, the levels ML is currently able to experiment with are limited to human level and below (animals, organs, cells, …, proteins, …, AI, algorithms, ...)
4. ML infers that the same principle - no privileged scale of causation - may be true at any level, including the higher levels above man as well: society, language, culture, ..., universe, .... Still, he has no experimental results there, so he doesn't know at the moment, nor - I would add - is he super interested, since what he wants is primarily to "bully" the lower layers (man and below) into working towards his goals, rather than letting them "be bullied by meandering evolution". And the way to obtain that is of course to alter - augment - the embodiment of these cognitive agents, be they at the human, animal, cell, or other level.
I agree ML has the unique quality of resisting the tendency to let any philosophical framework prevent him from conducting his intuitive activity in new ways, to devise new research questions and experiments. To be clear, I'm not at all decided on ML's inner stance and it seems, precisely because he doesn't vocalize any specific philosophical framework that he thinks is supported by his work (except the broad fact that everything may be experiential/cognitive in nature), it's difficult to tell how he is thinking about the research results at any given time.
With that said, I think Cleric's blog post to ML may have pointed to possible reductionist ways of thinking that would still make sense of why he takes an open-ended research strategy. Let's be clear that there would be no reason for Cleric to write that particular post if all you said above is also 100% the case. He was pointing to the possibility that ML is subtly undermining his own principle of "no privileged scale of causation". I think that's very clear from the post:
Cleric wrote:If we imagine that higher-order processes are only the surprising behavior of simple ground rules iterated over and over, we have to do away with the idea of causative agency at these higher orders. Any such first-person sense of causative agency would have to be understood as an illusionary macro view of the ground rules which alone are responsible for the total behavior. There’s nothing in what a higher-order agent is ‘doing’ that steers the flow in a direction that is not already fully driven by the simple ground rules themselves. There’s nothing in the way the state is organized (whether there are higher-order forms or not) that feeds back on the way the ground rules are applied. We can, of course, devise a more complicated ‘meta’ rule system, that at each step analyzes the state for higher-order structures and applies different rules accordingly, but are we really approaching in this way the *reality* of inner experience, or are we simply creating an intellectual monstrosity that is so general that it can eventually capture any possible form of computation (Wolfram’s Ruliad comes to mind)? In any case, no matter how complicated and convoluted our computational model is, it is still *flattened to a single plane of causation*. This plane is really the plane of our intellect. We are tempted to flatten the multilevel causative scales to a single scale because then our thinking being can fully ‘incarnate’ in that single plane and pretend that it understands how the illusions of other causative planes emerge. In other words, if the intellect is to ever be fully satisfied with its picture of reality, it needs to see all other planes as fully projected within its own plane (as mental images), and correspondingly project all causative forces within its own such that they can be mimicked by intellectual movements. Thus the initial insight of ‘no privileged scale of causation’ is undermined – it turns out that the intellect reduces all planes to movements of mental images in its own plane. Thus the idea of truly causally creative agencies at other scales becomes superfluous.
The functional reductionism always results from the mind container perspective, where the dynamics observed are understood in terms of our familiar intellectual gestures. This is a highly unconscious stance and the person may very well feel like they are attributing unique causative agency at different scales, but functionally the scales all reduce to the intellectual plane of manipulating, combining, relating, etc. scientific-mathematical concepts, which can be simulated through iterations of CA/CGOL.
It's difficult enough for us to sense our own intellectual gestures, since they are generally merged tightly into the background while we conduct our philosophical-scientific thinking. It becomes almost literally impossible to imagine different sorts of inner gestures, for ex. the Archangels or Time Spirits, that would bend the flow at higher scales. That is true even if we are already acquainted with esoteric science and have explored these distinct orders of Be-ing conceptually - we may still find we are often imagining their flow-bending activity as something similar to our human scale of setting goals, perceiving things, manipulating objects, and so on. Let alone if we haven't really explored such ideas in detail. Then it turns out every 'unique scale of causation', whether imparting novel goals from above or below, is simply an elemental patchwork of familiar inner gestures.
This view would also tend to make sense of why ML is so ready to attribute qualities like 'delayed gratification' (sacrificial behavior) to algorithms and some kind of creative agency to upcoming AI systems. That would be unsurprising if "creative agency" is nothing more than the 'surprising behavior of fuzzy ground rules' when iterated enough times. The entities/processes governed by the ground rules are imbued with elementary cognition (similar to panpsychist philosophy) and the human scale, for ex., is simply a patchwork of such cognitive processes that has passively awakened at a different scale. Again, I'm not saying this is necessarily the case for ML, but I don't think it's a closed book, either. I also think experience shows that this is the default way philosophical-scientific thinking trends over time, the longer the first-person experiential perspective of spiritual activity remains in the blind spot.
Ashvin, Cleric,
I have now read through the 2 blog posts by Levin on the experiment with the sorting algorithms, considered again your and Cleric's comments, and the fragment where BK objects about "epistemic projection". I now have a somewhat more substantiated opinion on the question of Levin's reductionism. I have still not advanced on the CJ TOE interview and other material, so I don't intend the following as something definitive, but I do have an idea. In short:
a. If we want to call Levin's lack of spiritual-scientific awareness, and his flattening of this cognitive inquiry onto the intellectual level, "functional reductionism", that's fine, we can surely do that. We've been consistently in agreement on that. He surely tries to grasp the vertical spaces by their projections on the intellectual-conceptual plane in the form of mental images, from his vantage point. The fact that he regularly reflects on various observers' viewpoints doesn't help, because this reflection becomes in turn an additional mental image on top of all the other ones, and the moment he's back in the lab he's again in the armchair position, drawing mental pictures to represent the patterns of activity of all layers. In other words, bistability/hysteresis rules.
b. However, functional reductionism is different from the reductionism you (Ashvin) were attributing to Levin - that he's continually tempted to conceive only bottom-up causality. There is a way to conceive continually feedbacking bottom-up and top-down interconnected causality that is still fully intellectualized. And I still think Levin is in this exact posture. He is entirely open to the idea of causative agency of the higher orders, while you both think that in the end he superstitiously attributes agency/causation only to the lower levels, bottom-up.
c. I am now even more convinced that the 1.2.3.4 points I listed above are correct. I think a revealing point is his end goal with the experiments. At the end of the day, the intentions are integral to his choices, methods, thoughts, and feelings, and they clarify the direction of his research. When he runs the sorting algorithm experiments, for instance, he does it as a model of morphogenesis. That's the title of the paper, by the way. His goal is to eviscerate an as-simple-as-possible system, in all sorts of unconventional novel ways, hoping to find hidden properties that can point to useful strategies to be later searched for, tested and exploited within the more complex cellular space. He uses a sorting algorithm because not only is it simple, but also sorting is what the cells do, once they are extracted from a certain anatomical space and artificially plugged into another one with other goals and features. They sort themselves according to the new rules coming down onto them from above. Like: "Stop working towards making frog eyes, now make frog legs". What he wants from the sorting algorithm experiments are possible insights to apply to cells, so that their plasticity towards adapting to new goals can be better facilitated from the top down (by the researcher). By uncovering what unsuspected properties (not will) an algorithm may exhibit that are not immediately apparent to, predicted by, or built in by the coder, he can search for, and possibly leverage, corresponding hidden properties in real cells, so that they can be better "bullied" into, say, regrowing an arm for an amputee. This is what he’s focused on. As I said in point 4 above, he's completely open to the possibility that the higher levels structure the lower ones, including above the level of the human brain (and he obviously thinks this definitely happens top-down at the lower levels: organ to tissue, tissue to cell, etcetera). Simply, he is not very focused on that, just because his research goals lead him elsewhere. BK is focused on "ontology" versus "epistemic projection", but ML cares much less about all that. He is focused on practical usefulness.
d. Having looked at the algorithm experiments, I am not sure I find what you said, Cleric. I don't think that Levin "imagines that higher-order processes are only the surprising behavior of simple ground rules iterated over and over" as you put it. I may be wrong, but here's what I think. When Cleric says:
Cleric wrote: ↑Sat Sep 14, 2024 7:38 pm The whole sorting algorithms simulation can be symbolized at a higher level of abstraction as a function, let's say f. The initial distribution of the numbers can be represented as x. Thus, f(x) is the applying of one step of the simulation. Then we take the result and apply the simulation step again, thus we have f(f(x)).
I don't think the bolded part - applying the simulation step to its own result, f(f(x)) - is what the experiment entails. There is no f(f(x)). If the whole sorting algorithm is f(x) - that is, the whole process that leads from say 3,1,2,5,4 to 1,2,3,4,5 - then that's the end of the experiment. There is no iteration. Then Levin just goes home, and the next day he throws the dice again, to get a new initial configuration, say 4,1,3,2,5, and runs the same f(x) again, on this new "x". There is no f(f(x)). Now, f(x) is a well-known piece of coding traditionally conceived and used by coders to obtain an ordered set from an initially disordered set of numbers. So f(x) is the higher-order causal input that defines the space the numbers (but we know Levin thinks in terms of cells) will have to move into. Here Levin obviously recognizes that what the numbers-cells will do depends (not only but) strongly on the making of the code. Of course he does. But what he is eager to find is usable properties that can be hijacked to later bully cells. To this effect, he shifts the focus from the top-down perspective and he distributes the code onto each number-cell, rather than keeping the top-down perspective of the coder. He also adds the feature that some numbers would just refuse to do the permutation when prompted. He adds that because, again, this is what would come in very handy with his other bio-works, if it turned out that there is a hidden property of the numbers-cells to still complete the sorting activity via the same unchanged code (physical-chemical constitution for cells) even in the presence of obstacles. He doesn't do that to negate that the higher orders structure the activity of the lower ones. Of course, the given code is the overarching goal structuring the activity of the numbers. He doesn't deny that. He doesn't say that all comes from the bottom up or anything like that. He is just not focused on that. For his purposes, he already masters the top-down, at the level that interests him. What he wants to inquire into is what hidden bottom-up feedback properties he may find and take advantage of. He has his very arbitrary goals. Cleric you said:
Cleric wrote: ↑Sat Sep 14, 2024 7:38 pm
ML tried to point out that for the structures it makes a difference whether they see themselves at a higher level of abstraction or not, yet he agrees that in the end the fundamental rules are all the same and at the lowest level. This is the point that really hurts me and which it seems ML consistently overlooks.
Is he saying that? I don't see it. It starts at minute 38:22 below. I don't see a pernicious superstition in what he says, just because he's not interested in attributing will or anything similar to the algorithm. He just observes that the aggregation value of the numbers-cells by algotype is strangely above average (see the paper page 34: file:///C:/Users/feder/Desktop/Manuscript.pdf) within the workings of f(x), not as a result of an iteration, and that's a hidden property the algorithm exhibits - as said, once the cells are made self-moving and unresponsive cells are introduced. In between the initial random sorting and the final ordered sorting, unexpected properties appear, while the ordered sorting is still obtained. That's all. We can call that cognition, we can call it "nano-goal-directedness" as Levin suggests, we can call it whatever we want. Levin doesn't care. He is not superstitious here. He's just found a usable feature to refine the efficacy of his future bio-works. The only thing he cares about is to imagine the most cleverly unheard-of experiment scenarios, hoping to gather exploitable insights to scale up from sorting algorithms to sorting cells, in order to roll out his freedom-of-embodiment capability for all the cognitive agents he cares about.
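For readers who want to see the shape of the setup, here is a loose reconstruction in code (my own sketch, not the code from the paper): each value in the array is an autonomous 'cell' that makes local bubble-sort moves on its own initiative, frozen cells neither initiate swaps nor let themselves be swapped, and sortedness is measured as the fraction of correctly ordered adjacent pairs. The paper's richer algotypes (e.g. a cell-view selection sort that can swap with distant cells) are what produce the delayed-gratification and aggregation effects; this minimal bubble version only shows the arena and the measure:

```python
import random

def sortedness(vals):
    # Fraction of adjacent pairs in the right order; 1.0 is the goal state.
    return sum(a <= b for a, b in zip(vals, vals[1:])) / (len(vals) - 1)

def run(vals, frozen, ticks=20000, seed=0):
    rng = random.Random(seed)
    vals = list(vals)
    for _ in range(ticks):
        i = rng.randrange(len(vals) - 1)    # one randomly chosen cell acts
        if i in frozen or (i + 1) in frozen:
            continue                        # a broken cell blocks the move
        if vals[i] > vals[i + 1]:           # local bubble move, no global view
            vals[i], vals[i + 1] = vals[i + 1], vals[i]
    return vals

random.seed(7)
start = random.sample(range(30), 30)
print("no obstacles :", sortedness(run(start, frozen=set())))
print("frozen cells :", sortedness(run(start, frozen={10, 20})))
```

A single run like this corresponds to one complete f(x) in the above sense; rerunning from a fresh shuffle the next day is a new x, not f(f(x)).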
In Cleric's comment on Levin's blog post, I think the second part is the relevant critique, from: "No matter how complicated a model is, it is still flattened to a single plane of causation - the plane of the intellect". Alas, despite the comment being remarkably written, I think that getting it through a flash of insight is very difficult - not to say that it would take a miracle - without any spiritual scientific context. If a natural scientist were to get what is meant by "being that seeks its home in the *inner experience* of the Cosmic scales", he would set up a meditation room within the laboratory and spend time there every day. How could he get what is meant by "inner experience"? Realistically, one would perhaps think it's a suggestion to do more research from the first-person observer perspective, not that one has to work out a completely new cognitive state through meditation.
Re: Cell Intelligence in Physiological & Morphological Spaces
Posted: Sun Nov 10, 2024 12:54 am
by AshvinP
Thanks for providing detailed thoughts on Levin's method, Federica. You have certainly contemplated them much more than I have and I think Cleric will be much better equipped to respond. I will just ask about this part:
Federica wrote: ↑Sat Nov 09, 2024 9:59 pm
When Cleric says:
Cleric wrote: ↑Sat Sep 14, 2024 7:38 pm The whole sorting algorithms simulation can be symbolized at a higher level of abstraction as a function, let's say f. The initial distribution of the numbers can be represented as x. Thus, f(x) is the applying of one step of the simulation. Then we take the result and apply the simulation step again, thus we have f(f(x)).
I don't think the bolded part - applying the simulation step to its own result, f(f(x)) - is what the experiment entails. There is no f(f(x)). If the whole sorting algorithm is f(x) - that is, the whole process that leads from say 3,1,2,5,4 to 1,2,3,4,5 - then that's the end of the experiment. There is no iteration. Then Levin just goes home, and the next day he throws the dice again, to get a new initial configuration, say 4,1,3,2,5, and runs the same f(x) again, on this new "x". There is no f(f(x)). Now, f(x) is a well-known piece of coding traditionally conceived and used by coders to obtain an ordered set from an initially disordered set of numbers. So f(x) is the higher-order causal input that defines the space the numbers (but we know Levin thinks in terms of cells) will have to move into. Here Levin obviously recognizes that what the numbers-cells will do depends (not only but) strongly on the making of the code. Of course he does. But what he is eager to find is usable properties that he can hijack to later bully cells. To this effect, he shifts the focus from the top-down perspective and he distributes the code onto each number-cell, rather than keeping the top-down perspective of the coder. He also adds the feature that some numbers would just refuse to do the permutation when prompted. He adds that because, again, this is what would come in very handy with his other bio-works, if it turned out that there is a hidden property of the number-cells to still complete the sorting activity via the same unchanged code (physical-chemical constitution for cells) even in the presence of obstacles. He doesn't do that to negate that the higher orders structure the activity of the lower ones. Of course, the given code is the overarching goal structuring the activity of the numbers. He doesn't deny that. He doesn't say that all comes from the bottom up or anything like that. He is just not focused on that. For his purposes, he already masters the top-down, at the level that interests him. What he wants to inquire into is what hidden bottom-up feedback properties he may find and take advantage of. He has his very arbitrary goals.
What do you make of ML's response to me on the blog, though? He doesn't deny that either the sorting algorithm or the 'fractal IFS' may exhibit DG as a cognitive capacity, and in fact kind of predicts his tests for this cognitive capacity in the fractal IFS may be successful.
If we believe that human brains obey the laws of chemistry, then one can say that their activity is also describable by some (very complex) IFS. [note how it's simply assumed that the brain as neurochemical system generates our cognitive capacity] If being describable by an IFS rules out cognition, then there is no cognition in the physical universe, including in humans. Fortunately, these two things are not mutually exclusive. The fractal IFS you describe has not been tested for delayed gratification (DG) but we’re testing it soon. We can’t know in advance (yet) if something will have that property or not, you have to try it. For example no one knew that sorting algorithms would have it, and when I first polled people (prior to doing the experiment), no one thought it would. I would claim that it’s superstitious to assume systems won’t have specific capabilities without testing them. The thing about my definitions of cognition and its various competencies, like DG, is that they are very practical, empirical observable properties – you confront them with a specific problem and see if they temporarily move further from their goal in order to recoup gains later. Many systems won’t do that. For example, 2 magnets separated by a long piece of wood – in order to get together, one would have to move around the wood, temporarily getting further from the other, in order to go around and finally meet it. Even some animals won’t do it – I’ve seen 2 dogs trying to get at each other through a fence, with a hole in the fence just 2 meters away, but going there means moving away from the attraction object and they couldn’t do it. So, there’s nothing superstitious about it – we did an experiment, tested a process for the ability to move against its normal gradient when confronted with a barrier (a broken cell that won’t be moved), and found it. Other systems won’t have it. But the fact that all such systems can be described at the lowest level (machine code, or chemistry, or whatever) doesn’t reduce the reality of their capabilities. Because, what’s emergent from some kinds of rules is not only complexity but different degrees of problem-solving. And not knowing about those capabilities, and knowing only the lowest-level rules, leaves a lot on the table in terms of understanding and using those systems. Will there be a test which shows that fractal processes such as IFSs can do it too? Place your bets now, before we do the experiment.
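To make the DG test he describes concrete, here is a minimal sketch (my own illustration of the test's logic, not Levin's code; the grid, wall, and coordinates are all made up): an agent seeks a goal behind a wall with one hole. A purely greedy gradient-follower - the two dogs at the fence - gets stuck, while a breadth-first search reaches the goal, and the DG signature is exactly that its distance to the goal temporarily increases along the successful path:

```python
from collections import deque

W, H = 9, 7
wall = {(4, y) for y in range(H) if y != 6}   # vertical wall, hole at the top
start, goal = (2, 0), (6, 0)

def neighbors(p):
    x, y = p
    for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= q[0] < W and 0 <= q[1] < H and q not in wall:
            yield q

def dist(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# Greedy agent: only ever steps to a strictly closer cell (no DG allowed).
pos = start
while pos != goal:
    closer = [n for n in neighbors(pos) if dist(n, goal) < dist(pos, goal)]
    if not closer:
        break                                 # stuck at the wall, like the dogs
    pos = closer[0]
print("greedy reaches goal:", pos == goal)

# BFS agent: free to move away from the goal first, i.e. to 'delay gratification'.
parent, frontier = {start: None}, deque([start])
while frontier:
    p = frontier.popleft()
    for n in neighbors(p):
        if n not in parent:
            parent[n] = p
            frontier.append(n)
path, p = [], goal
while p is not None:
    path.append(p)
    p = parent[p]
path.reverse()
dists = [dist(p, goal) for p in path]
print("BFS reaches goal   :", path[-1] == goal)
print("distance rose en route (DG):", any(b > a for a, b in zip(dists, dists[1:])))
```

The behaviorist point at issue is visible here: the test only registers the trajectory's shape (did the distance rise before falling?), and says nothing about whether anything 'inside' the process wanted, intended, or foresaw anything.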
The first emphasized claim - that cognition and its various competencies are defined as "very practical, empirical observable properties" - is also what you pointed out, which is what Cleric referred to as the 'behaviorist mode' - if it quacks like a duck, it's a duck. It makes things very simple if we define "cognition and its various competencies" to be whatever properties seem to us to be cognitive in character. It's almost like the naive realism of the materialistically minded person but in a different direction - instead of assuming rocks, plants, etc. are mindless objects because they don't immediately appear to move or do anything intelligent, we assume sorting algorithms have some kind of basic DG capacity because they appear to surprisingly exhibit a property that looks like it to us. Again, I initially didn't catch this 'epistemic projection' either and actually thought BK was being too prejudicial based on his metaphysical outlook of 'blind instinctive MAL', but my perspective on that has shifted in light of the additional facts, including ML's response above. His response immediately reminded me of this from Steiner:
Steiner wrote:I have previously told you what all such experiments amount to. I once told you there is a certain plant, called the “Venus fly-trap” which immediately contracts its leaves when they are touched. Just as you make a fist of your hand when you are going to be touched—that is, when somebody means to give you a blow—so the Venus fly-trap waits for the insect and then shuts itself up. Then people say: this plant, the Venus fly-trap, has a soul like men have. It is aware of the arrival of the insect and shuts itself up.
Yes, gentlemen, but I always say: I know of a certain arrangement so constituted that when an animal approaches it and touches something inside it, then it immediately shuts up and the animal is caught. This is a mouse-trap! If one ascribes a soul to the Venus fly-trap, one must equally ascribe one to the mouse-trap! If one ascribes sight to the bees because they do something or other in ultra-violet light, then one ought to ascribe sight to barium platino-cyanide as well!
...As to the facts, the experiments are correct, but one must be clear that one cannot draw conclusions such as Forel and Kühn have actually done. To do so is a totally thoughtless way of following up the experiments. Then people say: “this has been proved beyond contradiction.” Naturally, but only for those who ascribe a soul to the mouse-trap! But for others who know how far one can go, how far one is able to think in such a way that things are rightly followed up, these proofs are by no means beyond contradiction.
But Steiner didn't see ML coming - he is the guy who is willing to ascribe a soul to the mouse-trap.
The second emphasized claim - the one about lowest-level describability not reducing the reality of capabilities - states it more explicitly, from what I can tell: that all we need is ground rules (machine code, chemistry, or whatever) with some built-in statistical flexibility and, from that, all sorts of complex cognitive capacities can emerge. That said, I am open to the possibility that I am still misunderstanding the thrust of his methods and comments like I was before, but I think a lot of things would need to be cleared up for me before I could go back to thinking there is no epistemic projection involved. Especially looking back on the blog post and finding this at the top:
Also, I realize ontology is not his interest and he simply wants to see what clever 'hacks' can emerge from these experiments, but it seems to me he is subtly sneaking in ontology without realizing it. The behaviorist mode simply assumes too much about what we can conclude from these properties, which is what I tried to point out in my response. If we arbitrarily limit our study of cognitive activity at these scales to observable properties, we have snuck in a bottom-up ontology without realizing it. In fact, this is one of the few times that some sound philosophical/metaphysical thinking may actually help as a counterbalance - a prompt to open up our imaginative horizon to unseen but still concretely real inner gestures, of which our computational models are only imaginative symbols.
PS - So I started looking into this other post linked in the algo blog post, and his way of thinking about the surprising 'basal cognition' and how it can shed light on "how our own complex cognition evolved" is made quite explicit there - https://thoughtforms.life/what-do-algor ... ed-places/
Re: Cell Intelligence in Physiological & Morphological Spaces
Posted: Sun Nov 10, 2024 8:31 am
by Federica
AshvinP wrote: ↑Sun Nov 10, 2024 12:54 am
What do you make of ML's response to me on the blog, though? He doesn't deny that either the sorting algorithm or the 'fractal IFS' may exhibit DG as a cognitive capacity, and in fact kind of predicts his tests for this cognitive capacity in the fractal IFS may be successful.
We can call it "DG (delayed gratification) as a cognitive capacity", we can call it nano-proto-goal-directedness, we can call it a mathematical property, when exhibited by an algorithm... it doesn't matter. This for Levin simply refers to the observation that, when you pick the sorting algorithm and, instead of treating it like a piece of code, you treat it like a piece of biology that navigates a space to perform a task, you observe that the algorithm successfully navigates the space and completes the sorting task, while also exhibiting some property/behavior that was not intended by the higher-level order (the coder). The two dogs he provides as an example do not complete the task successfully (they do not effectively get at each other) because they can't delay that gratification by exhibiting novel properties/behaviors (take a step back, notice the hole in the fence, and use it to work around the obstacle). But the algorithm somehow still manages to complete the task pivoted down into its level by the higher-order layer, despite the obstacles (unresponsive numbers), while also exhibiting a novel property/behavior along the way (an aggregation value significantly above randomness). That's it. There's no superstitious conclusion that this is human-like, or animal-like, or cell-like cognition. It's a mere observation of a feature. Let's remember his definition of cognition: the capacity to problem-solve. Since his sense is that useful properties can only be spotted when we "take seriously", as he would say, the idea of "no privileged scale of causation" - thus we treat an algorithm as a piece of biology, for example - he likes to call the property DG, no matter in what space it's manifesting, including the algorithmic space. That's all.
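To fix ideas, here is a minimal sketch in Python of the framing I'm describing - my own toy reconstruction for illustration, not Levin's actual code from the study. Each number is a cell carrying (value, algotype, frozen?) state, the frozen cells are the unresponsive obstacles, and we measure both sortedness and adjacent-pair aggregation by algotype. Keep in mind that in the real experiments the algotypes implement different classic sorting rules; here both use the same rule for brevity, so the sketch only shows what is being measured, not the surprising aggregation effect itself.

import random

def step(cells):
    # One pass in which each active cell applies its local comparison rule.
    moved = False
    for i in range(len(cells) - 1):
        v1, algo1, frozen1 = cells[i]
        v2, algo2, frozen2 = cells[i + 1]
        if frozen1 or frozen2:
            continue  # the obstacle: frozen (unresponsive) cells refuse to swap
        if v1 > v2:
            cells[i], cells[i + 1] = cells[i + 1], cells[i]
            moved = True
    return moved

def sortedness(cells):
    # Fraction of adjacent value pairs already in order (task success).
    vals = [v for v, _, _ in cells]
    return sum(a <= b for a, b in zip(vals, vals[1:])) / (len(vals) - 1)

def algotype_clustering(cells):
    # Fraction of adjacent pairs sharing an algotype (the 'aggregation' value).
    return sum(a[1] == b[1] for a, b in zip(cells, cells[1:])) / (len(cells) - 1)

random.seed(0)
# Each cell is (value, algotype, frozen?); roughly 10% are unresponsive.
cells = [(v, random.choice("AB"), random.random() < 0.1)
         for v in random.sample(range(100), 30)]

while step(cells):
    pass

print(sortedness(cells), algotype_clustering(cells))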
If we believe that human brains obey the laws of chemistry, then one can say that their activity is also describable by some (very complex) IFS. [note how it's simply assumed that the brain as neurochemical system generates our cognitive capacity]
No no, it's not assumed that the brain generates cognition. You are reading much more than is written. Let's have a look at your comment from his perspective. First, your comment is inaccurate: you tell him about an f(f(x)) that is not there (plus you kind of explain to him what a function is - I know you copied Cleric's wording from a previous post, but it's still your responsibility when you don't look into the details yourself before commenting); then you tell him that two algotypes tend to group together, which is not the case (the case is that the numbers tend to group together by algotype); then you tell him that he superstitiously imagines that "some forces or strings pull the points together", which he does not do at all, as explained above. But you also tell him about, and show him, an IFS algorithm, which he happens to be precisely about to run experiments on. He is not a confrontational guy; he prefers to highlight the common points. Remember, he is on this blog in an outreach stance: his goal there is to rally as many stakeholders, as much audience, as possible to his pursuits. So he doesn't take issue with the details, but he does want to prepare the way for the IFS experiments, so that you will be convinced in case he finds usable and unexpected properties there. At this point he doesn't know if that will be the case (he does not predict anything, he's just open to the possibility).
Now look at his thought that human brains obey the laws of chemistry. He is not implying that human brains are not also structured by higher-level orders. You should not read what's not there. He is only trying to take you to the idea that it's powerful to use simpler systems like IFS - in which it's simpler to discover possible hidden properties - to mimic, or describe, more complex ones, in order to steer behaviors more powerfully - in novel ways - in those more complex systems, like a brain. All this within the context of the idea of a spectrum of cognitive capabilities across layers. Spectrum means: even an algorithm should be taken seriously, as a potentially cognitive agent. Otherwise we would miss out on its useful hidden properties. That's all.
Ashvin wrote:The red is also what you pointed out, which is what Cleric referred to as the 'behaviorist mode' - if it quacks like a duck, it's a duck. It makes things very simple if we define "cognition and its various competencies" to be whatever properties seem to us to be cognitive in character. It's almost like the naive realism of the materialistically minded person, but in a different direction - instead of assuming rocks, plants, etc. are mindless objects because they don't immediately appear to move or do anything intelligent, we assume sorting algorithms have some kind of basic DG capacity because they surprisingly appear to exhibit a property that looks like it to us. Again, I initially didn't catch this 'epistemic projection' either and actually thought BK was being too prejudicial based on his metaphysical outlook of 'blind instinctive MAL', but my perspective on that has shifted in light of the additional facts, including ML's response above.
I don't agree that Levin thinks: "if it quacks like a duck, it's a duck". His reasoning is rather: "if it quacks when we treat it like a duck, whilst it's actually lower-level than a duck, then we can maybe use its newly discovered properties in the real duck". It's quite different. Honestly, you are being attracted by Cleric's thoughts here. But I suspect Cleric has just gone through the whole thing too fast. You persist in imagining that the new property makes Levin conclude that the algorithm has some sentience. But he does not make that conclusion. Frankly, this is your own epistemic projection. Levin has no interest in going that far. He's not a philosopher, he doesn't care about that, nor does he need to go there. He just stops at the observation of the DG capacity of the algorithm, as I have illustrated above.
So you see, I understand Levin's thinking gestures well, precisely because I am myself very familiar with them. And I do sympathize with his research perspective (I learned it from economics, and it is also very congenial to my rational side), though I obviously know by now that it becomes very dangerous when expanded the way he expands it. In terms of the Steiner example, same thing: Levin is not willing to ascribe a soul to the mouse-trap. Absolutely not. He's really careful not to go there. The same also applies to the screenshot you shared, and to the second blog post you reference in the PS too. I've read all that, and if you read it from the perspective I have illustrated, you will see.
The blue states it more explicitly, from what I can tell - that all we need is ground rules (machine code, chemistry, or whatever) with some built-in statistical flexibility and that, from these, all sorts of complex cognitive capacities can emerge. That said, I am open to the possibility that I am still misunderstanding the thrust of his methods and comments like I was before, but I think a lot of things would need to be cleared up for me before I could go back to thinking there is no epistemic projection involved.
Again, you are reading what's not there. The blue is not explicit at all in the sense you mean. And I think I have explained to a fair extent those "lots of things" that needed to be cleared up. Emergent problem solving simply means: the numbers in the algorithm solve the problem of the obstacles we threw into the system, and still successfully complete the sorting task. And that's emergent, because the coder didn't expect the feedback property (emergent aggregation concomitant with successful sorting) to happen within his system, much less did he try to build that property into the system.
Re: Cell Intelligence in Physiological & Morphological Spaces
AshvinP wrote: ↑Sun Nov 10, 2024 12:54 am
What do you make of ML's response to me on the blog, though? He doesn't deny that either the sorting algorithm or the 'fractal IFS' may exhibit DG as a cognitive capacity, and in fact kind of predicts his tests for this cognitive capacity in the fractal IFS may be successful.
We can call it "DG (delayed gratification) as a cognitive capacity", we can call it nano-proto-goal-directedness, we can call it mathematical property, when exhibited by an algorithm... it doesn't matter. This for Levin simply refers to the observation that, when you pick the sorting algorithm and, instead of treating it like a piece of code, you treat it like a piece of biology that navigates a space to perform a task, you observe the property that the algorithm successfully navigates the space and successfully completes the sorting task, while also exhibiting some property/behavior that was not intended by the higher-level order (the coder). The two dogs he provides as example do not complete the task successfully (they do not effectively get at each other) because they can't delay that gratification through exhibiting novel properties/behaviors (take a step back, notice the hole in the fence, and use it, to work around the obstacle). But the algorithm somehow still manages to complete the task pivoted down into its level by the higher-order layer, despite the obstacles (unresponsive numbers) while also exhibiting a novel property/behavior along the way (aggregation value significantly above randomness). That's it. There's no superstitious conclusion that this is human-like, or animal-like, or cell-like cognition. It's a mere observation of a feature. Let's remember his definition of cognition: the capacity to problem-solve. Since his sense is that useful properties can only be spotted when we "take seriously" as he would say, the idea of "no privileged scale of causation" - thus we treat an algorithm as a piece of biology for example - he likes to call the property DG, no matter in what space it's manifesting, including the algorithmic space. That's all.
I don't think so, Federica. I think you are missing the clear implications of everything he is saying, writing, and doing, along with the explicit meaning of his comments. You are adding rationalizations on his behalf instead of sticking with the plain research and comments.
The idea to test algorithms in the first place came about because he wanted to eliminate as much top-down influence as possible. The experiment was explicitly designed to further reduce the top-down influence by distributing the algorithm to each individual cell (number), introducing barriers into the sorting process, and distributing two different algorithms. Levin believes that, in this way, he is largely isolating the 'algotypes' from human top-down structuring and giving them a chance to display their own inherent problem-solving capacities. He is "frankly shocked" at the results. He believes he has stumbled upon a remarkable latent capacity of inorganic systems that we humans were just too prejudiced and too myopic to discover before.
Perhaps it would be useful to explore the problems with the bold, i.e. why it assumes far too much to conclude that novel properties/behaviors are emerging from the sorting algorithm, relatively independent of top-down causality, given these experimental conditions. What is a 'sorting algorithm', to begin with, from a phenomenological perspective? How does it relate to our own cognitive activity?
Federica wrote:
If we believe that human brains obey the laws of chemistry, then one can say that their activity is also describable by some (very complex) IFS. [note how it's simply assumed that the brain as neurochemical system generates our cognitive capacity]
No no, it's not assumed that the brain generates cognition. You are reading much more than is written. Let's have a look at your comment from his perspective. First, your comment is inaccurate: you tell him about an f(f(x)) that is not there (plus you kind of explain to him what a function is - I know you copied Cleric's wording from a previous post, but it's still your responsibility when you don't look into the details yourself before commenting); then you tell him that two algotypes tend to group together, which is not the case (the case is that the numbers tend to group together by algotype); then you tell him that he superstitiously imagines that "some forces or strings pull the points together", which he does not do at all, as explained above. But you also tell him about, and show him, an IFS algorithm, which he happens to be precisely about to run experiments on. He is not a confrontational guy; he prefers to highlight the common points. Remember, he is on this blog in an outreach stance: his goal there is to rally as many stakeholders, as much audience, as possible to his pursuits. So he doesn't take issue with the details, but he does want to prepare the way for the IFS experiments, so that you will be convinced in case he finds usable and unexpected properties there. At this point he doesn't know if that will be the case (he does not predict anything, he's just open to the possibility).
Now look at his thought that human brains obey the laws of chemistry. He is not implying that human brains are not also structured by higher-level orders. You should not read what's not there. He is only trying to take you to the idea that it's powerful to use simpler systems like IFS - in which it's simpler to discover possible hidden properties - to mimic, or describe, more complex ones, in order to steer behaviors more powerfully - in novel ways - in those more complex systems, like a brain. All this within the context of the idea of a spectrum of cognitive capabilities across layers. Spectrum means: even an algorithm should be taken seriously, as a potentially cognitive agent. Otherwise we would miss out on its useful hidden properties. That's all.
It's not inaccurate, from what I can tell. The sorting algorithm takes an input state which leads to a certain output, and then that output state becomes the new input. That's f(f(x)). The experiment makes it so that this is applied at the scale of each individual cell and iterated many times, not to the cells (the unordered sequence) as a whole. Perhaps it's not technically a "fractal IFS", but I don't see a huge difference in principle 'at a higher level of abstraction', as Cleric said.
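To illustrate that higher level of abstraction with a toy of my own (not anything from ML's post or paper): a sorting pass and an IFS map are both just state-update functions, and in each case the whole process is nothing but repeated application of f to its own output.

import random

def iterate(f, x, n):
    for _ in range(n):
        x = f(x)  # the output state becomes the new input: x, f(x), f(f(x)), ...
    return x

def sort_pass(xs):
    # f as one bubble-sort pass over a list state
    xs = list(xs)
    for i in range(len(xs) - 1):
        if xs[i] > xs[i + 1]:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

def ifs_step(p):
    # f as one randomly chosen affine contraction of a 2D point (a toy IFS)
    x, y = p
    dx, dy = random.choice([(0.0, 0.0), (0.5, 0.0), (0.25, 0.5)])
    return (0.5 * x + dx, 0.5 * y + dy)

print(iterate(sort_pass, [4, 1, 3, 2], 4))  # -> [1, 2, 3, 4]
print(iterate(ifs_step, (0.0, 0.0), 20))    # -> a point near the Sierpinski attractor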
ML does not correct anything I (or Cleric) wrote, like you are doing right now. In fact, he takes it as an accurate description and then proceeds to explain why it's not superstition to test sorting algorithms, CA, or IFS for cognitive capacities and to conclude those capacities from their cognitive-seeming behaviors. (I see you have tried to explain that with "his goal is to rally possible stakeholders", but I don't buy it.) I give ML at least enough credit as a scientist interested in the truth that he will not abide completely inaccurate descriptions of what he is doing and concluding.
In fact, in the post I linked, ML states "after what we saw in this study, I am motivated to start looking for goal-directed closed-loop activity in CAs as well, who knows". Again, he was "frankly shocked" by the results of this study.
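As an aside, for concreteness about what kind of system he now proposes to probe for 'goal-directed closed-loop activity': here is a minimal elementary cellular automaton (Wolfram's Rule 110), a toy sketch of my own and not anything from ML's posts. Each step just applies a fixed local lookup table to every cell and its two neighbors.

def rule110_step(cells):
    # Apply Rule 110's fixed lookup table to each cell and its two neighbors.
    table = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
             (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    n = len(cells)
    return [table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])] for i in range(n)]

row = [0] * 30 + [1] + [0] * 30  # one live cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = rule110_step(row)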
I am curious, what do you think these 'useful hidden properties' could be?
Federica wrote:
Ashvin wrote:The red is also what you pointed out, which is what Cleric referred to as the 'behaviorist mode' - if it quacks like a duck, it's a duck. It makes things very simple if we define "cognition and its various competencies" to be whatever properties seem to us to be cognitive in character. It's almost like the naive realism of the materialistically minded person, but in a different direction - instead of assuming rocks, plants, etc. are mindless objects because they don't immediately appear to move or do anything intelligent, we assume sorting algorithms have some kind of basic DG capacity because they surprisingly appear to exhibit a property that looks like it to us. Again, I initially didn't catch this 'epistemic projection' either and actually thought BK was being too prejudicial based on his metaphysical outlook of 'blind instinctive MAL', but my perspective on that has shifted in light of the additional facts, including ML's response above.
I don't agree that Levin thinks: "if it quacks like a duck, it's a duck". His reasoning is rather: "if it quacks when we treat it like a duck, whilst it's actually lower-level than a duck, then we can maybe use its newly discovered properties in the real duck". It's quite different. Honestly, you are being attracted by Cleric's thoughts here. But I suspect Cleric has just gone through the whole thing too fast. You persist in imagining that the new property makes Levin conclude that the algorithm has some sentience. But he does not make that conclusion. Frankly, this is your own epistemic projection. Levin has no interest in going that far. He's not a philosopher, he doesn't care about that, nor does he need to go there. He just stops at the observation of the DG capacity of the algorithm, as I have illustrated above.
So you see, I understand Levin's thinking gestures well, precisely because I am myself very familiar with them. And I do sympathize with his research perspective (I learned it from economics, and it is also very congenial to my rational side), though I obviously know by now that it becomes very dangerous when expanded the way he expands it. In terms of the Steiner example, same thing: Levin is not willing to ascribe a soul to the mouse-trap. Absolutely not. He's really careful not to go there. The same also applies to the screenshot you shared, and to the second blog post you reference in the PS too. I've read all that, and if you read it from the perspective I have illustrated, you will see.
The blue states it more explicitly, from what I can tell - that all we need is ground rules (machine code, chemistry, or whatever) with some built-in statistical flexibility and that, from these, all sorts of complex cognitive capacities can emerge. That said, I am open to the possibility that I am still misunderstanding the thrust of his methods and comments like I was before, but I think a lot of things would need to be cleared up for me before I could go back to thinking there is no epistemic projection involved.
Again, you are reading what's not there. The blue is not explicit at all in the sense you mean. And I think I have explained to a fair extent those "lots of things" that needed to be cleared up. Emergent problem solving simply means: the numbers in the algorithm solve the problem of the obstacles we threw into the system, and still successfully complete the sorting task. And that's emergent, because the coder didn't expect the feedback property (emergent aggregation concomitant with successful sorting) to happen within his system, much less did he try to build that property into the system.
Again, I think you are introducing rationalizations on behalf of ML that simply aren't there in his own reasoning. Obviously he wants to find practically useful results for biological applications, but that doesn't negate his explicit comments concerning this algorithm research. He says: "I hypothesize that intelligence (agential behavior implementing problem-solving) is present at the lowest levels in our Universe, not requiring brains or even life per se." Not only is he willing to ascribe a soul to the "mouse-trap" (perhaps not specifically that device, but inorganic systems) and to practically define the mouse-trap's behavior as the essence of cognition (he explicitly states he is drawing inspiration from animal behavioral science), but he is also willing to entertain that human cognition is a complexification of these same elemental behaviors - "This is crucial not only to understand the evolutionary origin of our own complex cognition...". In this sense, ML is quickly becoming the reductionist par excellence, vastly exceeding where standard materialist reductionists are willing to go.