Cell Intelligence in Physiological & Morphological Spaces

Any topics primarily focused on metaphysics can be discussed here, in a generally casual way, where conversations may take unexpected turns.
User avatar
Cleric
Posts: 1931
Joined: Thu Jan 14, 2021 9:40 pm

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by Cleric »

Federica wrote: Sat Nov 09, 2024 9:59 pm I don't think the bold is what the experiment entails. There is no f(f(x)). If the whole sorting algorithm is f(x) - that is, the whole process that leads from say 3,1,2,5,4 to 1,2,3,4,5, then that's the end of the experiment. There is no iteration. Then Levin just goes home, and the next day he throws the dice again, to get a new initial configuration, say 4,1,3,2,5, and runs the same f(x) again, on this new "x". There is no f(f(x)).
Here's a concrete example of what is meant by the recursion f(f(x)). Let's take a very simple sorting technique (I'm not sure, but I'd guess it is a form of Bubble sort). To make it more algebraic, we can concatenate the numbers into a single number, i.e. if we use your example we start with x=31254.

We define f(x) such that it traverses pairs of digits from left to right and swaps the first pair where the left digit of the pair is greater than the right. I'll mark in brackets the pair that gets swapped at each step.
f([31]254) → 13254
f(1[32]54) → 12354
f(123[54]) → 12345
f(12345) → 12345
...

We can consider that the number is sorted when f(x) = x (that is, there's nothing more to swap).

Now we can of course define a function that handles the whole process as you suggest. It might be sort(x). In human language, we can define it as "iterate f(x) → x, until f(x)=x and then return x".

No sorting algorithm, however, can sort a list in one atomic step. Of course, not all algorithms iterate in as obvious a way as the one above, but they nevertheless need to transform the list through many repeated steps.
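The distinction between the single step f and the iterated whole can be sketched in a few lines of Python (the function names are mine, for illustration; the swap rule is exactly the one described above):

```python
def f(x: str) -> str:
    """One atomic step: scan digit pairs left to right and swap
    the first pair whose left digit is greater than the right."""
    digits = list(x)
    for i in range(len(digits) - 1):
        if digits[i] > digits[i + 1]:
            digits[i], digits[i + 1] = digits[i + 1], digits[i]
            break  # only the first out-of-order pair is swapped
    return "".join(digits)

def sort_by_iteration(x: str) -> str:
    """Iterate f until the fixpoint f(x) == x (nothing left to swap)."""
    while f(x) != x:
        x = f(x)
    return x

print(f("31254"))                  # → 13254
print(f(f("31254")))               # → 12354 (this is the recursion f(f(x)))
print(sort_by_iteration("31254"))  # → 12345
```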

ML's paper investigates precisely this path that the number takes from its original state to the sorted state. It is about these intermediary iterations that he says that the result of an iteration may look more unsorted than the previous one, yet still lead to the final sorted target. It is this step that is dubbed 'delayed gratification'.

A lot can be said here but it suffices to say that 'less sorted' and 'more sorted' depend on what kind of metric we use and that metric is entirely coupled with the sorting algorithm itself. My guess is that ML approaches the problem through the lens of something like bubble sort, where it is very easy to say what is more or less sorted. A metric can be seen simply as how many steps away we are from the final result.

When, however, the different algotypes are mixed, when some cells are frozen, etc., we actually build a new hybrid sorting algorithm. Now if this algorithm is working at all, with each iteration it will get one step closer to the final sorted result. And here's the critical thing: this step may look less sorted as measured by the Bubble metric! And this is where the simple logical error lies. We judge the iterations with Bubble logic and say "Aha! Here the experiment steps into a number that is less sorted (yet as measured according to our Bubble metric!!!). Thus it somehow goes around the barrier, it is willing to temporarily go into a less sorted state but as if having the insight that it will later make it up."

Yet, according to the hybrid algorithm that we have built and its own metric, there's no such 'going around a barrier'. Every step of the hybrid algorithm (assuming a working algorithm) gets us one step closer to the final result. Nothing more, nothing less. This is the hybrid metric that we should consider. According to this metric there's no going around but going step by step straight toward the target. "In the eyes" of this metric, every step is "more sorted" because it is one step closer to the final result. The hybrid algorithm doesn't 'know' or 'care' that its next step may seem less sorted 'in the eyes' of another metric (such as the Bubble's).
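This metric-dependence can be made concrete. The sketch below uses pancake sort as a stand-in for a "hybrid" algorithm (my choice of example, not ML's): by the algorithm's own metric every prefix flip is one step closer to the target, yet the inversion count - a Bubble-style sortedness measure - temporarily rises along the way.

```python
def inversions(s: str) -> int:
    """A 'Bubble-style' metric: how many pairs of digits are out of order."""
    return sum(s[i] > s[j]
               for i in range(len(s)) for j in range(i + 1, len(s)))

def pancake_steps(s: str):
    """Yield each intermediate state of pancake sort; every prefix flip
    counts as one step of the algorithm."""
    s = list(s)
    for end in range(len(s), 1, -1):
        m = s.index(max(s[:end]))
        if m == end - 1:
            continue  # largest of the prefix is already in place
        if m != 0:
            s[:m + 1] = reversed(s[:m + 1])   # flip the largest to the front
            yield "".join(s)
        s[:end] = reversed(s[:end])           # flip it into its final place
        yield "".join(s)

trajectory = ["13254"] + list(pancake_steps("13254"))
print([inversions(state) for state in trajectory])  # → [2, 6, 4, 2, 3, 0]
```

By the pancake metric (flips remaining) the trajectory is monotone by construction; only the inversion metric sees a "regression" at the second state.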

Of course, one may argue that this is precisely the point - that the hybrid algorithm represents a higher-order morphic space that follows the principle of least action from within its perspective, which from another perspective may seem like delayed gratification. This is probably what ML would respond in the face of the above.

Yet this is precisely where the whole insight about 'no privileged scale of causation' crumbles. In the above, we have a single plane of causation - the iron necessity of the algorithmic steps. From such a view, the different scales (in our case they are like synonyms of metrics) are nothing but analytical lenses through which we assess whether the system is following a geodesic (the path of least action, or the straightest path toward the end result of computation) in some specific metric. This however has no causal significance whatsoever.

Think of it in the following way.

[embedded media: a marble-run mechanical computer]

This is a simple mechanical device that should take much of the 'magic' of modern computers away. Now imagine that, with a large enough board and enough parts, you can mechanically implement even some of the sorting algorithms. You could do that without any understanding, as a child would arrange the parts simply by their looks. Of course, the vast majority of arrangements won't do anything useful, but let's assume you are lucky and manage to put the parts in configurations that do sorting. It's even easier to imagine the handicaps here - for example, you can glue some of the parts (i.e. frozen/dead cells). The main thing is that the movements of the marbles are very clearly understandable. They depend entirely on very basic intuition about 'rolling down', 'inclination of a surface', and so on. Now ask yourself: at what point in your experiments (remember that you may not even know that the arrangements can be interpreted as doing something meaningful, like sorting) would you be justified in saying, "Aha! Now a different plane of causation is intervening in my experiment! The intuition of rolling marbles is not enough to comprehend what I see"?
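As a toy illustration of how un-magical such mechanical computation is, here is a sketch of a marble-driven binary counter in the spirit of flip-flop marble machines (the model and names are my own assumptions, not from the post): every fact about the final state reduces to "the marble falls and flips the lever it hits".

```python
def drop_marble(toggles):
    """A marble falls past the toggles (least significant bit first).
    A toggle holding 1 flips to 0 and lets the marble continue (a carry);
    the first toggle holding 0 flips to 1 and absorbs the marble."""
    for i, t in enumerate(toggles):
        if t == 0:
            toggles[i] = 1
            return
        toggles[i] = 0  # carry: the marble rolls on to the next toggle

toggles = [0, 0, 0]  # a three-bit counter, least significant bit first
for _ in range(5):
    drop_marble(toggles)
print(toggles)  # → [1, 0, 1], i.e. binary 101: five marbles counted
```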
User avatar
Federica
Posts: 2494
Joined: Sat May 14, 2022 2:30 pm
Location: Sweden

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by Federica »

Cleric wrote: Sun Nov 10, 2024 9:08 pm
Federica wrote: Sat Nov 09, 2024 9:59 pm I don't think the bold is what the experiment entails. There is no f(f(x)). If the whole sorting algorithm is f(x) - that is, the whole process that leads from say 3,1,2,5,4 to 1,2,3,4,5, then that's the end of the experiment. There is no iteration. Then Levin just goes home, and the next day he throws the dice again, to get a new initial configuration, say 4,1,3,2,5, and runs the same f(x) again, on this new "x". There is no f(f(x)).
Here's a concrete example of what is meant by the recursion f(f(x)). Let's take a very simple sorting technique (I'm not sure, but I'd guess it is a form of Bubble sort). To make it more algebraic, we can concatenate the numbers into a single number, i.e. if we use your example we start with x=31254.

We define f(x) such that it traverses pairs of digits from left to right and swaps the first pair where the left digit of the pair is greater than the right. I'll mark in brackets the pair that gets swapped at each step.
f([31]254) → 13254
f(1[32]54) → 12354
f(123[54]) → 12345
f(12345) → 12345
...

We can consider that the number is sorted when f(x) = x (that is, there's nothing more to swap).

Now we can of course define a function that handles the whole process as you suggest. It might be sort(x). In human language, we can define it as "iterate f(x) → x, until f(x)=x and then return x".

No sorting algorithm, however, can sort a list in one atomic step. Of course, not all algorithms iterate in such an obvious way as above but nevertheless, they always need to transform the list through many repeating steps.


Cleric,

I would like to first get the micro-question of f(f(x)) out of the way, to continue on reductionism in my next post.
Obviously, I know that no sorting algorithm can sort a list in one atomic step. Not knowing that would mean having no idea whatsoever what a sorting algorithm is, and it would mean not having read Levin’s blog, as well as lacking all even remote understanding of what the experiment is about, and definitely a whole additional lot of correlated unworthy attributes :-)

It's not me who suggested defining a function that "handles the whole process". You wrote (emphasis added):

"The whole sorting algorithms simulation can be symbolized at a higher level of abstraction as a function, let's say f."

If you propose to symbolize the whole simulation at a higher level of abstraction as f, what else can f be other than what you, in this last post, call "sort(x)"? Hence my remark that - if you call the whole simulation "f(x)" - then there can be no iteration. That is, there is no sort(sort(x)). Instead, there is sort(x1); sort(x2); sort(x3); and so on.
"On Earth the soul has a past, in the Cosmos it has a future. The seer must unite past and future into a true perception of the now." Dennis Klocek
User avatar
Cleric
Posts: 1931
Joined: Thu Jan 14, 2021 9:40 pm

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by Cleric »

Federica wrote: Mon Nov 11, 2024 7:08 am Cleric,

I would like to first get the micro-question of f(f(x)) out of the way, to continue on reductionism in my next post.
Obviously, I know that no sorting algorithm can sort a list in one atomic step. Not knowing that would mean having no idea whatsoever what a sorting algorithm is, and it would mean not having read Levin’s blog, as well as lacking all even remote understanding of what the experiment is about, and definitely a whole additional lot of correlated unworthy attributes :-)

It's not me who suggested defining a function that "handles the whole process". You wrote (emphasis added):

"The whole sorting algorithms simulation can be symbolized at a higher level of abstraction as a function, let's say f."

If you propose to symbolize the whole simulation at a higher level of abstraction as f, what else can f be other than what you, in this last post, call "sort(x)"? Hence my remark that - if you call the whole simulation "f(x)" - then there can be no iteration. That is, there is no sort(sort(x)). Instead, there is sort(x1); sort (x2); sort (x3); and so on.
I agree. The way I worded it is misleading. It should have been:
"A single step of the whole sorting algorithms simulation can be symbolized at a higher level of abstraction as a function, let's say f."
At the time of writing, in my mind the word 'whole' referred to the combined action of all algotypes and constraints, seen as one operation. The next sentences in my post make it unambiguous that this is what I mean by f(x) (I even state it explicitly: "Thus, f(x) is the applying of one step of the simulation"). But for the sake of precise language - yes, I agree that I should have worded that better.
User avatar
Federica
Posts: 2494
Joined: Sat May 14, 2022 2:30 pm
Location: Sweden

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by Federica »

Cleric wrote: Mon Nov 11, 2024 8:32 am
Federica wrote: Mon Nov 11, 2024 7:08 am Cleric,

I would like to first get the micro-question of f(f(x)) out of the way, to continue on reductionism in my next post.
Obviously, I know that no sorting algorithm can sort a list in one atomic step. Not knowing that would mean having no idea whatsoever what a sorting algorithm is, and it would mean not having read Levin’s blog, as well as lacking all even remote understanding of what the experiment is about, and definitely a whole additional lot of correlated unworthy attributes :-)

It's not me who suggested defining a function that "handles the whole process". You wrote (emphasis added):

"The whole sorting algorithms simulation can be symbolized at a higher level of abstraction as a function, let's say f."

If you propose to symbolize the whole simulation at a higher level of abstraction as f, what else can f be other than what you, in this last post, call "sort(x)"? Hence my remark that - if you call the whole simulation "f(x)" - then there can be no iteration. That is, there is no sort(sort(x)). Instead, there is sort(x1); sort (x2); sort (x3); and so on.
I agree. The way I worded it is misleading. It should have been:
"A single step of the whole sorting algorithms simulation can be symbolized at a higher level of abstraction as a function, let's say f."
At the time of writing, in my mind the word 'whole' referred to the combined action of all algotypes and constraints, seen as one operation. The next sentences in my post make it very unambiguous that this is what I mean with f(x) (I even state that explicitly "Thus, f(x) is the applying of one step of the simulation"). But for the sake of precise language - yes, I agree that I should have worded that better.

I see. But please notice - your next sentence only makes it unambiguous under the condition that "one step in the simulation" is understood as one permutation in one run of one algorithm, and not as a more high-level step in the simulation, such as, for example: "Sorting algorithm execution with methodology 1 and array 1", "Sorting algorithm execution with methodology 2 and array 1", etcetera. Arguably, this second reading could easily seem the logical one, precisely because the same word "simulation" was used just before in the context of "whole simulation at a higher level of abstraction".

I probably should also clarify that with my original remark about f(f(x)) not being there, I was not suggesting that you don't know this stuff. To me this is ridiculously obvious, but I prefer to say it because, after you supposed I didn't have the faintest idea what I was talking about, aka respect for the dialogue, who knows what else you are supposing, and what is obvious only to me. Anyway, I'm glad this is out of the way now, since, in the context of my original remark, this was a minor sub-point, like a sub-item in point "d" in that post.
"On Earth the soul has a past, in the Cosmos it has a future. The seer must unite past and future into a true perception of the now." Dennis Klocek
User avatar
Cleric
Posts: 1931
Joined: Thu Jan 14, 2021 9:40 pm

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by Cleric »

Federica wrote: Mon Nov 11, 2024 11:33 am I see. But please notice - your next sentence only makes it unambiguous under the condition that "one step in the simulation" is understood as one permutation in one run of one algorithm, and not as a more high-level step in the simulation, such as for example: "Sorting algorithm execution with methodology 1 array 1", "Sorting algorithm execution with methodology 2 and array 1", etcetera. Arguably, this second reading could easily seem the logical one, precisely because the same word "simulation" was used just before in the context of "whole simulation at a higher level of abstraction".

I probably should also clarify that with my original remark about f(f(x)) not being there, I was not suggesting that you don't know this stuff. To me this is ridiculously obvious, but I prefer to say it because, after you supposed I didn't have the faintest idea what I was talking about, aka respect for the dialogue, who knows what else you are supposing, and what is obvious only to me. Anyway, I'm glad this is out of the way now, since, in the context of my original remark, this was a minor sub-point, like a sub-item in point "d" in that post.
Federica, just to say that I didn't write anything to suggest personally to you that you don't understand these things. You have already shown that you navigate quite well within the landscape of mathematical intuitive movements. I just decided to expand the example with a concrete implementation of a sorting algorithm, in order to show where exactly the iteration occurs. You say:
Federica wrote: Sat Nov 09, 2024 9:59 pm He just observes that the aggregation value of the numbers-cells by algotype is strangely above average within the workings of f(x), not as a result of an iteration, and that's a hidden property the algorithm exhibits
My point was only to show that the iteration is precisely within the workings of the total (hybrid) sorting algorithm. Moving from the initial state toward the target sorted state is an iterative process, and it is in these intermediary states between iterations where the strange behaviors are sought.

But as you say, this is only a minor sub-point. The overarching problem is that this behavior is taken to have something to do with goal-seeking at a different plane of causality.
User avatar
Cleric
Posts: 1931
Joined: Thu Jan 14, 2021 9:40 pm

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by Cleric »

Federica wrote: Sat Nov 09, 2024 9:59 pm b. However, functional reductionism is different from the reductionism you (Ashvin) were attributing to Levin - that he's continually tempted to conceive only bottom-up causality. There is a way to conceive continually feedbacking bottom-up and top-down interconnected causality that is still fully intellectualized. And I still think Levin is in this exact posture. He is entirely open to the idea of causative agency of the higher orders, while you both think that in the end he superstitiously attributes agency/causation only to the lower levels, bottum-up.
Just to make it clear: everything with ML goes quite well until he enters the domain of computation. This is why the topic we're now discussing only came into focus as he began looking into computation (and thus AI, since AI is nothing but computation, which in principle can be performed even on a marble computer). This is where we claim there's a certain unresolved confusion which, if not grasped clearly, gradually undermines the original proper intuition of Natural planes of causality. In other words, I don't think that ML consciously seeks bottom-up causality - quite the contrary - his whole biological work is inspired by the possibility that there are other planes of causality. The point is that the more one tries to see 'unexpected' goal-directed behavior in complex iterative computations (CGOL, sorting algorithms, IFS, AI, etc.), the more one is pushed toward ultimately saying "Well, if everything emerges entirely from the basic rules, why do I need other planes of causation at all?" Then we can surely speak of planes of abstraction, which group lower-level processes into greater mechanical unities that ease analysis, but it's incorrect to speak of any causal power of these unities. It is all ruled at the lowest level.
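The CGOL case is a handy touchstone for this point: a glider looks like a coherent 'thing' travelling across the board, yet nothing beyond the local birth/survival rule is ever consulted. A minimal sketch:

```python
from collections import Counter

def life_step(cells):
    """One Game of Life step on a set of live (x, y) cells: a cell is alive
    next step iff it has 3 live neighbours, or 2 and is already alive."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
# After 4 steps the glider has 'moved' one cell diagonally - though no rule
# anywhere mentions gliders or movement, only local neighbour counts.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # → True
```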
User avatar
AshvinP
Posts: 6368
Joined: Thu Jan 14, 2021 5:00 am
Location: USA

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by AshvinP »

Cleric wrote: Sun Nov 10, 2024 9:08 pm ML's paper investigates precisely this path that the number takes from its original state to the sorted state. It is about these intermediary iterations that he says that the result of an iteration may look more unsorted than the previous one, yet still lead to the final sorted target. It is this step that is dubbed 'delayed gratification'. A lot can be said here but it suffices to say that 'less sorted' and 'more sorted' depend on what kind of metric we use and that metric is entirely coupled with the sorting algorithm itself. My guess is that ML approaches the problem through the lens of something like bubble sort, where it is very easy to say what is more or less sorted. A metric can be seen simply as how many steps away we are from the final result. When, however, the different algotypes are mixed, when some cells are frozen, etc., we actually build a new hybrid sorting algorithm. Now if this algorithm is working at all, with each iteration it will get one step closer to the final sorted result. And here's the critical thing: this step may look less sorted as measured by the Bubble metric! And this is where the simple logical error lies. We judge the iterations with Bubble logic and say "Aha! Here the experiment steps into a number that is less sorted (yet as measured according to our Bubble metric!!!). Thus it somehow goes around the barrier, it is willing to temporarily go into a less sorted state but as if having the insight that it will later make it up." Yet, according to the hybrid algorithm that we have built and its own metric, there's no such 'going around a barrier'. Every step of the hybrid algorithm (assuming a working algorithm) gets us one step closer to the final result. Nothing more, nothing less. This is the hybrid metric that we should consider. According to this metric there's no going around but going step by step straight toward the target. 
"In the eyes" of this metric, every step is "more sorted" because it is one step closer to the final result. The hybrid algorithm doesn't 'know' or 'care' that its next step may seem less sorted 'in the eyes' of another metric (such as the Bubble's).

Of course, one may argue that this is precisely the point - that the hybrid algorithm represents a higher-order morphic space that follows the principle of least action from within its perspective, which from another perspective may seem like delayed gratification. This is probably what ML would respond in the face of the above.

Yet this is precisely where the whole insight about 'no privileged scale of causation' crumbles. In the above, we have a single plane of causation - the iron necessity of the algorithmic steps. From such a view, the different scales (in our case they are like synonyms of metrics) are nothing but analytical lenses through which we assess whether the system is following a geodesic (the path of least action, or the straightest path toward the end result of computation) in some specific metric. This however has no causal significance whatsoever.

Cleric,

Thanks for this helpful elaboration of the issue. May I suggest you post something like this on the blog article, perhaps including the comparison to the marble computer. I don't know how it could be clearer and I wonder how ML would think about the simple logical error. Even if he doesn't respond (and assuming he posts it), it could be helpful for others who are perusing the article.

The more I contemplate this logical error, the more pernicious it seems. It thoroughly reinforces the mind container perspective where the human intellect can not only use its familiar gestures to understand how cognition can emerge from low-level rules, but can take an active role in manipulating cognitive agency from the bottom-up. As we know, the easier the route to "understanding" these existential issues seems to be, the more likely it will attract people over time.
"They only can acquire the sacred power of self-intuition, who within themselves can interpret and understand the symbol... those only, who feel in their own spirits the same instinct, which impels the chrysalis of the horned fly to leave room in the involucrum for antennae yet to come."
User avatar
Federica
Posts: 2494
Joined: Sat May 14, 2022 2:30 pm
Location: Sweden

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by Federica »

Cleric wrote: Mon Nov 11, 2024 1:22 pm Just to make it clear: everything with ML goes quite well until he enters the domain of computation. This is why the topic we're now discussing only came into focus as he began looking into computation (and thus AI, since AI is nothing but computation, which in principle can be performed even on a marble computer). This is where we claim there's a certain unresolved confusion which, if not grasped clearly, gradually undermines the original proper intuition of Natural planes of causality. In other words, I don't think that ML consciously seeks bottom-up causality - quite the contrary - his whole biological work is inspired by the possibility that there are other planes of causality. The point is that the more one tries to see 'unexpected' goal-directed behavior in complex iterative computations (CGOL, sorting algorithms, IFS, AI, etc.), the more one is pushed toward ultimately saying "Well, if everything emerges entirely from the basic rules, why do I need other planes of causation at all?" Then we can surely speak of planes of abstraction, which group lower-level processes into greater mechanical unities that ease analysis, but it's incorrect to speak of any causal power of these unities. It is all ruled at the lowest level.


Ok, thanks. I was writing a larger argument, but now I will only limit it to the algorithm experiment.
Cleric wrote: Sun Nov 10, 2024 9:08 pm ML's paper investigates precisely this path that the number takes from its original state to the sorted state. It is about these intermediary iterations that he says that the result of an iteration may look more unsorted than the previous one, yet still lead to the final sorted target. It is this step that is dubbed 'delayed gratification'.

A lot can be said here but it suffices to say that 'less sorted' and 'more sorted' depend on what kind of metric we use and that metric is entirely coupled with the sorting algorithm itself. My guess is that ML approaches the problem through the lens of something like bubble sort, where it is very easy to say what is more or less sorted.
A metric can be seen simply as how many steps away we are from the final result. When, however, the different algotypes are mixed, when some cells are frozen, etc., we actually build a new hybrid sorting algorithm. Now if this algorithm is working at all, with each iteration it will get one step closer to the final sorted result. And here's the critical thing: this step may look less sorted as measured by the Bubble metric! And this is where the simple logical error lies. We judge the iterations with Bubble logic and say "Aha! Here the experiment steps into a number that is less sorted (yet as measured according to our Bubble metric!!!). Thus it somehow goes around the barrier, it is willing to temporarily go into a less sorted state but as if having the insight that it will later make it up." Yet, according to the hybrid algorithm that we have built and its own metric, there's no such 'going around a barrier'. Every step of the hybrid algorithm (assuming a working algorithm) gets us one step closer to the final result. Nothing more, nothing less. This is the hybrid metric that we should consider. According to this metric there's no going around but going step by step straight toward the target. "In the eyes" of this metric, every step is "more sorted" because it is one step closer to the final result. The hybrid algorithm doesn't 'know' or 'care' that its next step may seem less sorted 'in the eyes' of another metric (such as the Bubble's).

What you say makes sense. However, in this paper at least, Levin is not so naive as to use the bubble metric, or any other algorithm-dependent metric, to measure sortedness/unsortedness. I have bolded the critical junctures.

Levin uses a variety of metrics. Specifically, he defines DG as a temporary increase in monotonicity error that is associated with a subsequent increase in sortedness (concomitantly with frozen cells). He further defines sortedness as the percentage of cells that strictly follow the final sorted array, that is, as a measure of the degree of sequentiality of the array at any step. I guess this metric for DG doesn't suffer from the logical error you point to? Because it lets sortedness be defined in terms of the end state - and experimentally, all algorithms work - not in terms of the specific algorithm/hybrid algorithm by which the system traverses the algorithmic space to get there.

And there is another crucial thing. As I said, Levin distributes the execution of the various algorithms on the cells themselves. That is, each cell has an individual view of its neighbors (the exact extent of which naturally depends on the particular algotype that characterizes the particular cell) and it executes its algorithm accordingly. And each cell has a chance to move at each step. Given this novel feature, the notion of iron necessity, as well as the resulting sorting dynamic, is different compared to a classical flow guided by a higher-order ghost/coder. So, not only is the metric not dependent on the algorithm, but every cell also moves independently, and without knowing (in the code) about the algotype of its neighbors, so the higher-order ghost pushed out of the door is not getting back in through the window. In other words, there's no centralized, single-plane algorithm implemented downward from a higher-order ghost who cares / doesn't care (see PS)
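For concreteness, here is one plausible reading of these two metrics in Python (a hedged sketch - the paper's exact definitions may differ). The point carries either way: both are defined purely against the target state, with no reference to any particular algorithm:

```python
def sortedness(arr):
    """Plausible reading: the fraction of cells already sitting in the
    position they occupy in the final sorted array."""
    target = sorted(arr)
    return sum(a == t for a, t in zip(arr, target)) / len(arr)

def monotonicity_error(arr):
    """Plausible reading: the fraction of adjacent pairs that violate
    ascending order."""
    bad = sum(arr[i] > arr[i + 1] for i in range(len(arr) - 1))
    return bad / (len(arr) - 1)

print(sortedness([3, 1, 2, 5, 4]))          # → 0.0 (no cell in its final place)
print(monotonicity_error([3, 1, 2, 5, 4]))  # → 0.5 (2 of 4 pairs inverted)
```

Under this reading, DG would be a step where monotonicity_error rises while sortedness nevertheless rises later in the run.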

Cleric wrote: Sun Nov 10, 2024 9:08 pm Of course, one may argue that this is precisely the point - that the hybrid algorithm represents a higher-order morphic space that follows the principle of least action from within its perspective, which from another perspective may seem like delayed gratification. This is probably what ML would respond in the face of the above.

Yet this is precisely where the whole insight about 'no privileged scale of causation' crumbles. In the above, we have a single plane of causation - the iron necessity of the algorithmic steps. From such a view, the different scales (in our case they are like synonyms of metrics) are nothing but analytical lenses through which we assess whether the system is following a geodesic (the path of least action, or the straightest path toward the end result of computation) in some specific metric. This however has no causal significance whatsoever.

Think of it in the following way.

Clearly, Levin would not respond that way, because the hybrid algorithm is run in cell-view mode, so it is not a representation of one higher-order perspective. So the Turing machine example - as much as it makes the iron necessity easier to grasp than an abstract sorting thought, because it stimulates sense perception - doesn't seem to fit here, since Levin's sorting experiment is 'decentralized'. The "complex iterative computation" has been rewired, and it's no longer the model you are referring to, which implies an external coder conceiving and executing the permutations 'from outside' the algorithmic space. For these reasons, as much as I am deeply critical of ML's research, it seems to me that this algorithmic simulation is structured with the successful intention of preserving the insight of no privileged scale of causation (as far as an intellectual model can go) - an insight which allows one to model the "emergent" features as bottom-up feedback. Do you agree?
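A toy reconstruction of that "cell-view" mode (my own sketch under stated assumptions, not Levin's code): each cell applies only its local rule against its right neighbour, cells are activated in random order each round, and no central controller ever sees the whole array:

```python
import random

def cell_step(arr, i):
    """Cell i's local rule (a bubble-like algotype): swap with the right
    neighbour if the pair is out of order. The cell sees only its own
    neighbourhood, never the global state."""
    if i + 1 < len(arr) and arr[i] > arr[i + 1]:
        arr[i], arr[i + 1] = arr[i + 1], arr[i]

def decentralized_sort(arr, max_rounds=100):
    arr = list(arr)
    for _ in range(max_rounds):
        order = list(range(len(arr)))
        random.shuffle(order)  # every cell gets a chance, in no fixed order
        for i in order:
            cell_step(arr, i)
        if arr == sorted(arr):
            break
    return arr

print(decentralized_sort([3, 1, 2, 5, 4]))  # → [1, 2, 3, 4, 5]
```

Despite the random activation order, the array always reaches the sorted state, since each local swap removes exactly one inversion; the sketch shows only the mechanics, not who is right about causation.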

--------------------------------------

PS: Maybe I should add this caveat: I understand that Levin flattens every morphological space onto the intellectual plane. The realization of the disturbing consequences of such an attitude in his activity was the very point I updated this thread with. So, in this sense, that single plane of causation does persist. He clearly reduces all planes to mental images confined to his own intellectual space. However, my point in the latest pages here has been that, as far as a natural scientist can go towards a non-reductionist approach to the understanding of life, while also lacking a spiritual-scientific grasp of reality extending beyond standard cognition, that far Levin goes.
"On Earth the soul has a past, in the Cosmos it has a future. The seer must unite past and future into a true perception of the now." Dennis Klocek
User avatar
Federica
Posts: 2494
Joined: Sat May 14, 2022 2:30 pm
Location: Sweden

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by Federica »

AshvinP wrote: Sun Nov 10, 2024 2:34 pm ...
Ashvin, I wonder if this is mostly an emotional reaction, since you went from "being open to the possibility" of considering my contentions to being fiercely opposed after I argued for them further, also showing signs you didn't read what I wrote (for example, when you say you are curious what the hidden properties may be for me, while I explicitly laid them out more than once). Probably you didn't like my direct style, which I understand. I surely agree it doesn't match your own well.
"On Earth the soul has a past, in the Cosmos it has a future. The seer must unite past and future into a true perception of the now." Dennis Klocek
User avatar
AshvinP
Posts: 6368
Joined: Thu Jan 14, 2021 5:00 am
Location: USA

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by AshvinP »

Federica wrote: Mon Nov 11, 2024 3:30 pm
AshvinP wrote: Sun Nov 10, 2024 2:34 pm ...
Ashvin, I wonder if this is mostly an emotional reaction, since you went from "being open to the possibility" of considering my contentions to being fiercely opposed after I argued for them further, also showing signs you didn't read what I wrote (for example, when you say you are curious what the hidden properties may be for me, while I explicitly laid them out more than once). Probably you didn't like my direct style, which I understand. I surely agree it doesn't match your own well.

No, it just became more evident to me what your contentions really were, as before I was giving you the benefit of the doubt that you had understood ML's experiment and reasoning before offering your 100% confident opinions ("nononono", "never", "inaccurate f(f(x))", etc.) on our blog posts and on why ML reacted the way he did. Now it has become clear you were not accurately discerning the nature of the algorithm experiment (the intermediary iterations), the logical error in ML's conclusions, and therefore why it could only result from a superstitious (or reductionist) habit of thinking. As is evident from your last post, you are still trying to defend his conclusion of 'emergent novel properties'.

That in itself is fine, if you are simply trying to think through the various inner movements involved in ML's approach. But, from this latest post to me, it is clear that you have attached yourself so much to these opinions that you found, and continue to find, rationalizations for them instead of trying to understand what Cleric or I am conveying. You can't see that my question about how you think of the 'hidden properties' was aimed precisely at helping you understand that there is a logical error involved in ML's "insights", which it wasn't clear to me that you were spotting. And it has become unsurprising at this point that you continue to do these things and project "emotional reactions" onto me - that's just par for the course with you. Under no circumstances do you want to see yourself in the light of forming rushed opinions and being mistaken.
"They only can acquire the sacred power of self-intuition, who within themselves can interpret and understand the symbol... those only, who feel in their own spirits the same instinct, which impels the chrysalis of the horned fly to leave room in the involucrum for antennae yet to come."