Cell Intelligence in Physiological & Morphological Spaces

Any topics primarily focused on metaphysics can be discussed here, in a generally casual way, where conversations may take unexpected turns.
User avatar
Federica
Posts: 2492
Joined: Sat May 14, 2022 2:30 pm
Location: Sweden

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by Federica »

AshvinP wrote: Mon Nov 11, 2024 3:48 pm Now it has become clear you were not accurately discerning the nature of the algorithm experiment (the intermediary iterations), the logical error in ML's conclusions
You are incredible, Ashvin. The opposite of what you say has become clear.
And your comment was indeed inaccurate. Leaving aside the f(x) (since it's been clarified above), let's remember you asked Levin: "In this sense, can we say the fact that the two algotypes seem to ‘group together’ is any different than looking at an IFS fractal?"
Someone who has discerned the nature of the experiment wouldn't ask a question containing a wrong fact (the two algotypes don't seem to group together in the experiment; cells group together according to their shared algotype, hence the algotypes remain rather separate).
However, feel free to take the fact that ML didn't correct you on that as a demonstration that you were right. That would only show your attachment to your points, because the fact that the algotypes don't group is indisputable in the experiment.
"On Earth the soul has a past, in the Cosmos it has a future. The seer must unite past and future into a true perception of the now." Dennis Klocek
User avatar
AshvinP
Posts: 6367
Joined: Thu Jan 14, 2021 5:00 am
Location: USA

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by AshvinP »

Federica wrote: Mon Nov 11, 2024 4:36 pm
AshvinP wrote: Mon Nov 11, 2024 3:48 pm Now it has become clear you were not accurately discerning the nature of the algorithm experiment (the intermediary iterations), the logical error in ML's conclusions
You are incredible, Ashvin. The opposite of what you say has become clear.
And your comment was indeed inaccurate. Leaving aside the f(x) (since it's been clarified above), let's remember you also asked Levin: "In this sense, can we say the fact that the two algotypes seem to ‘group together’ is any different than looking at an IFS fractal?"
Someone who has discerned the nature of the experiment wouldn't ask a question containing a wrong fact (the two algotypes don't seem to group together in the experiment; cells group together according to their shared algotype).
However, feel free to take the fact that ML didn't correct you on that as a demonstration that you were right. That would only show your attachment to your points, because the fact that the algotypes don't group is an indisputable result of the experiment.

Ok, Federica, I will take a step back from discussion again.

I just hope you are actually interested in discerning the truthful flow of inner gestures involved in these computational experiments, even if that means facing the fact you were mistaken (and you won't ever discern that flow until you make that confession). If you are interested and can make that confession, then further discussion with Cleric will help refine your orientation to what's going on.

As Cleric said, it is clear to me that you have the potential to traverse the inner landscape in an intuitive way, and you have done so plenty of times before. But just as attaching myself to unhealthy eating habits (and I often do) influences my soul state and the possibilities of my thinking, so attaching yourself to this argumentative/contrarian curvature and the resulting opinions makes your thinking quite flat and discursive. You start picking apart every sentence and every word to find a rational basis for your opinions, instead of trying to resonate with the holistic picture that emerges. That's why I need to step back, because you simply cannot resist the impulse on your own and I don't want to see your thinking movements continue to suffer in this way.
"They only can acquire the sacred power of self-intuition, who within themselves can interpret and understand the symbol... those only, who feel in their own spirits the same instinct, which impels the chrysalis of the horned fly to leave room in the involucrum for antennae yet to come."
User avatar
Cleric
Posts: 1931
Joined: Thu Jan 14, 2021 9:40 pm

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by Cleric »

Federica wrote: Mon Nov 11, 2024 2:54 pm Clearly, Levin would not respond so in the face of the above, because the hybrid algorithm is run in cell-view mode, thus it's not a representation of one higher-order perspective. So the Turing machine example - as much as it makes it easier to get the iron necessity compared to an abstract sorting thought, because it stimulates sense perception - doesn’t seem to fit here? Since Levin’s sorting experiment is 'decentralized'. The "complex iterative computation" has been rewired, and it's no longer the model you are referring to, which implies an external coder, conceiving and executing the permutations 'from outside' the algorithmic space. For these reasons, as much as I am deeply critical of ML's research, it seems to me that this algorithmic simulation is structured with the successful intention to preserve the insight of no privileged scale of causation, an insight which allows one to model the "emergent" features as bottom-up feedback. Do you agree?
Federica, there's a misunderstanding here. I'll have to think harder about an elegant and simple example that captures the problem.

For the time being, think of the IFS. There you know that we iterate the image according to the L frames. Each L frame is like an affine transformation function (scale, rotate, skew, translate)
f(x) = L1(x) + L2(x) + ...
depending on how many L frames we have. (The web IFS tool also leaves something of the previous image, so it would be more like f(x) = k*x + L1(x) + L2(x) + ..., where k is some dimming factor, say 0.1, which leaves 10% of the previous image and adds the new transforms on top, but we can disregard this here.) When we start with the initial image x, f(f(f(...f(x)...))) is our iterated function system.
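
To make the iteration fully concrete, here is a minimal Python sketch of such an IFS, purely for illustration. The three affine frames and their parameters are placeholders I'm making up here (not the ones used in the web tool), and the '+' in f(x) is read as the union of the sub-images:

[code]
# Minimal IFS sketch. The 'image' is a set of 2D points, each L frame is an
# affine map, and one application of f is the union of all frame-images of
# the current image. The frame parameters are arbitrary placeholders.
L_FRAMES = [
    lambda x, y: (0.5 * x,        0.5 * y),          # scale down
    lambda x, y: (0.5 * x + 0.5,  0.5 * y),          # scale down, shift right
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),    # scale down, shift up
]

def f(points):
    """One iteration: apply every L frame to every point and take the union."""
    return [frame(x, y) for (x, y) in points for frame in L_FRAMES]

image = [(0.1, 0.1)]      # initial image x (a single seed point)
for _ in range(6):        # f(f(f(...f(x)...)))
    image = f(image)
[/code]

After a handful of iterations the point set already hugs the attractor of the system (for these placeholder frames, a Sierpinski-like triangle).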

It is possible to produce so-called hybrid fractal images by using two or more sets of L frames. For example, if we use two different sets (let's call them f and g), we can transform the initial image once with the first, then twice with the second, then repeat: ...g(g(f(g(g(f(x))))))
We can choose any pattern we want or as many functions as we want. We can even choose the next function to iterate with through some random generator. This results in interesting images that seem to combine features of both fractals. Most of the interesting artistic 3D fractals are produced through such hybrid functions:

[Image: an artistic 3D hybrid fractal]

For a fractal image, it is very interesting how organic it looks. It's not simply that one half is the one fractal and the other half the second; the two are organically combined - the forms of f seem to be made of the forms of g, which in turn are made of f-forms, etc.
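
The same sketch can be extended to the hybrid case, again with made-up placeholder frames: two sets f and g, applied either in the fixed pattern f, g, g, ... or chosen at random at each step.

[code]
import random

# Two frame sets with made-up placeholder parameters: the f frames only scale
# and translate, while the g frames also rotate/reflect, so the two contribute
# visibly different forms to the combined image.
F_FRAMES = [lambda x, y: (0.5 * x,       0.5 * y),
            lambda x, y: (0.5 * x + 0.5, 0.5 * y + 0.5)]
G_FRAMES = [lambda x, y: (-0.5 * y,       0.5 * x),          # rotate + scale
            lambda x, y: ( 0.5 * x + 0.5, -0.5 * y + 1.0)]   # reflect + shift

def apply_frames(frames, points):
    # One application of a frame set: union of all its frame-images.
    return [frame(x, y) for (x, y) in points for frame in frames]

image = [(0.1, 0.1)]
for step in range(9):
    # Fixed pattern f, g, g, f, g, g, ...  i.e. ...g(g(f(g(g(f(x))))))
    frames = F_FRAMES if step % 3 == 0 else G_FRAMES
    # Or pick the next set at random instead:
    # frames = random.choice([F_FRAMES, G_FRAMES])
    image = apply_frames(frames, image)
[/code]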

The key point, however, is that this whole hybrid function is still one algorithm, a single Turing machine. The same holds for the sorting simulation. The algotypes are not floating independently in free computational space, they are simply functions that are applied from within a greater hybrid function. The latter is still a valid Turing machine. It can be implemented in the marble computer.
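
To make that as plain as possible, here is a toy sketch of such a 'decentralized' sorting simulation written as one ordinary Python program. The two 'algotype' rules and the cell setup are invented placeholders (not ML's actual code); the only point is that the per-cell rules are nothing but sub-routines called from a single top-level loop.

[code]
import random

# Two toy 'algotypes': local rules a cell follows when it takes its turn.
# These rules are invented placeholders, not ML's actual algotypes.
def algotype_a(cells, i):
    # Swap with the right neighbour if that neighbour holds a smaller value.
    if i + 1 < len(cells) and cells[i + 1]["value"] < cells[i]["value"]:
        cells[i], cells[i + 1] = cells[i + 1], cells[i]

def algotype_b(cells, i):
    # Swap with the left neighbour if that neighbour holds a larger value.
    if i > 0 and cells[i - 1]["value"] > cells[i]["value"]:
        cells[i - 1], cells[i] = cells[i], cells[i - 1]

ALGOTYPES = {"A": algotype_a, "B": algotype_b}

# A row of 'cells', each carrying a value and an algotype.
cells = [{"value": random.randint(0, 99), "algotype": random.choice("AB")}
         for _ in range(20)]

# The total simulation: one loop that dispatches whichever algotype the cell
# at each position happens to carry. However 'decentralized' the rules look
# from a cell's point of view, the whole thing remains a single program - and,
# with the pseudo-random choices included in its state, a single Turing machine.
for _ in range(200):
    for i in random.sample(range(len(cells)), len(cells)):
        ALGOTYPES[cells[i]["algotype"]](cells, i)
[/code]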

To put it bluntly, all talk about bottom-up and top-down coders, independent cells, and so on, only breeds confusion and obscures the simple fact that, after all, everything is still a Turing machine, marble or otherwise.

I don't want to assert this fact just like that, and that's why I invited you to think about at what point a marble machine can no longer implement the simulation of 'individual cells' following their algotypes. If you suggest that such a marble machine is impossible, then how did ML implement the simulation? Didn't he write a program that applies the different algotypes according to cell type? By doing this, doesn't he practically write a new kind of hybrid algorithm that simply uses the algotypes as sub-routines? The confusion here can come only when we imagine that the sub-routines of this hybrid algorithm are somehow not part of the total Turing machine. So the following:
Federica wrote: Mon Nov 11, 2024 2:54 pm So, not only is the metric not dependent on the algorithm, but also every cell moves independently, without knowing about the algotype of its neighbors, so the higher-order ghost pushed out of the door is not getting back in through the window. In other words, there's no centralized, single-plane algorithm implemented downward from a higher-order ghost who cares / doesn't care.
is incorrect because there is a central algorithm - it's the total simulation program that ML wrote! Just because we don't know beforehand how this new algorithm will behave when run, doesn't mean that the whole program is not a single Turing machine. If we decouple from mystical feelings, the total simulation is not that different from the hybrid IFS which transforms the state according to more complicated rules.
User avatar
Cleric
Posts: 1931
Joined: Thu Jan 14, 2021 9:40 pm

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by Cleric »

AshvinP wrote: Mon Nov 11, 2024 2:05 pm Cleric,

Thanks for this helpful elaboration of the issue. May I suggest you post something like this on the blog article, perhaps including the comparison to the marble computer. I don't know how it could be clearer and I wonder how ML would think about the simple logical error. Even if he doesn't respond (and assuming he posts it), it could be helpful for others who are perusing the article.

The more I contemplate this logical error, the more pernicious it seems. It thoroughly reinforces the mind container perspective where the human intellect can not only use its familiar gestures to understand how cognition can emerge from low-level rules, but can take an active role in manipulating cognitive agency from the bottom-up. As we know, the easier the route to "understanding" these existential issues seems to be, the more likely it will attract people over time.
I'll have to think harder on this. The thing is that these things are so deeply intermingled that at this point it seems to me that anything I could write will be seen only as a superficial attack on a certain aspect of the experiment. Let's first see if these things will make sense to Federica.
User avatar
Federica
Posts: 2492
Joined: Sat May 14, 2022 2:30 pm
Location: Sweden

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by Federica »

Cleric wrote: Mon Nov 11, 2024 4:54 pm
Federica wrote: Mon Nov 11, 2024 2:54 pm Clearly, Levin would not respond so in the face of the above, because the hybrid algorithm is run in cell-view mode, thus it's not a representation of one higher-order perspective. So the Turing machine example - as much as it makes it easier to get the iron necessity compared to an abstract sorting thought, because it stimulates sense perception - doesn’t seem to fit here? Since Levin’s sorting experiment is 'decentralized'. The "complex iterative computation" has been rewired, and it's no longer the model you are referring to, which implies an external coder, conceiving and executing the permutations 'from outside' the algorithmic space. For these reasons, as much as I am deeply critical of ML's research, it seems to me that this algorithmic simulation is structured with the successful intention to preserve the insight of no privileged scale of causation, an insight which allows one to model the "emergent" features as bottom-up feedback. Do you agree?
Federica, there's a misunderstanding here. I'll have to think harder about an elegant and simple example that captures the problem.

For the time being, think of the IFS. There you know that we iterate the image according to the L frames. Each L frame is like an affine transformation function (scale, rotate, skew, translate)
f(x) = L1(x) + L2(x) + ...
depending on how many L frames we have. (The web IFS tool also leaves something of the previous image, so it would be more like f(x) = k*x + L1(x) + L2(x) + ..., where k is some dimming factor, say 0.1, which leaves 10% of the previous image and adds the new transforms on top, but we can disregard this here.) When we start with the initial image x, f(f(f(...f(x)...))) is our iterated function system.

It is possible to produce so-called hybrid fractal images by using two or more sets of L frames. For example, if we use two different sets (let's call them f and g), we can transform the initial image once with the first, then twice with the second, then repeat: ...g(g(f(g(g(f(x))))))
We can choose any pattern we want or as many functions as we want. We can even choose the next function to iterate with through some random generator. This results in interesting images that seem to combine features of both fractals. Most of the interesting artistic 3D fractals are produced through such hybrid functions:

[Image: an artistic 3D hybrid fractal]

For a fractal image, it is very interesting how organic it looks. It's not simply that one half is the one fractal and the other half the second; the two are organically combined - the forms of f seem to be made of the forms of g, which in turn are made of f-forms, etc.

The key point, however, is that this whole hybrid function is still one algorithm, a single Turing machine. The same holds for the sorting simulation. The algotypes are not floating independently in free computational space, they are simply functions that are applied from within a greater hybrid function. The latter is still a valid Turing machine. It can be implemented in the marble computer.

To put it bluntly, all talk about bottom-up and top-down coders, independent cells, and so on, only breeds confusion and obscures the simple fact that, after all, everything is still a Turing machine, marble or otherwise.

I don't want to assert this fact just like that, and that's why I invited you to think about at what point a marble machine can no longer implement the simulation of 'individual cells' following their algotypes. If you suggest that such a marble machine is impossible, then how did ML implement the simulation? Didn't he write a program that applies the different algotypes according to cell type? By doing this, doesn't he practically write a new kind of hybrid algorithm that simply uses the algotypes as sub-routines? The confusion here can come only when we imagine that the sub-routines of this hybrid algorithm are somehow not part of the total Turing machine. So the following:
Federica wrote: Mon Nov 11, 2024 2:54 pm So, not only is the metric not dependent on the algorithm, but also every cell moves independently, without knowing about the algotype of its neighbors, so the higher-order ghost pushed out of the door is not getting back in through the window. In other words, there's no centralized, single-plane algorithm implemented downward from a higher-order ghost who cares / doesn't care.
is incorrect because there is a central algorithm - it's the total simulation program that ML wrote! Just because we don't know beforehand how this new algorithm will behave when run, doesn't mean that the whole program is not a single Turing machine. If we decouple from mystical feelings, the total simulation is not that different from the hybrid IFS which transforms the state according to more complicated rules.

Thank you Cleric. You don't have to call it a misunderstanding.
"There is a central algorithm - it's the total simulation program that ML wrote!" Yes, I realize that, I edited my post in this sense, added just that, that this is still within the framework of an intellectual model. So in other words it means that, to the extent that ML is not a spiritual scientist, there is no way for him to create any model that is consistent with the insight of no privileged scale of causation (granted that his own conceiving the model is in the blind spot). But then I wonder why you went into the details of the mathematical properties, the bubble-based metrics etcetera. But I'll try to think about it. Thanks for providing all the insights.
"On Earth the soul has a past, in the Cosmos it has a future. The seer must unite past and future into a true perception of the now." Dennis Klocek
User avatar
Cleric
Posts: 1931
Joined: Thu Jan 14, 2021 9:40 pm

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by Cleric »

Federica wrote: Mon Nov 11, 2024 5:26 pm But then I wonder why you went into the details of the mathematical properties, the bubble-based metrics etcetera.
Simply because only in this way can we rigorously locate the 'windows' through which we introduce ghosts where there is no place for them. If such things were recognized, ML wouldn't imagine that the hybrid algorithm is circumventing potential barriers (which are seen as such only from his preferred metric) and thus wouldn't see an archetype of delayed gratification at work.

He is fully right that this is how true spiritual activity works and how we can push through unpleasant sensations because of higher-order insight and what awaits us on the other end. But it is simply incorrect to attribute the same to a flat marble machine. That's all.
User avatar
AshvinP
Posts: 6367
Joined: Thu Jan 14, 2021 5:00 am
Location: USA

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by AshvinP »

Cleric wrote: Mon Nov 11, 2024 5:21 pm
AshvinP wrote: Mon Nov 11, 2024 2:05 pm Cleric,

Thanks for this helpful elaboration of the issue. May I suggest you post something like this on the blog article, perhaps including the comparison to the marble computer. I don't know how it could be clearer and I wonder how ML would think about the simple logical error. Even if he doesn't respond (and assuming he posts it), it could be helpful for others who are perusing the article.

The more I contemplate this logical error, the more pernicious it seems. It thoroughly reinforces the mind container perspective where the human intellect can not only use its familiar gestures to understand how cognition can emerge from low-level rules, but can take an active role in manipulating cognitive agency from the bottom-up. As we know, the easier the route to "understanding" these existential issues seems to be, the more likely it will attract people over time.
I'll have to think harder on this. The thing is that these things are so deeply intermingled that at this point it seems to me that anything I could write will be seen only as a superficial attack on a certain aspect of the experiment. Let's first see if these things will make sense to Federica.

True, the errors that seem 'obvious' to us can become a whole different story when the first-person perspective doing the experiment and analyzing the results remains in the blind spot. That seems to be essentially what happened with ML in this case - he tried to isolate the human perspective from the algorithmic dynamics as much as possible, but forgot that in the process of doing so, he was implementing another hybrid algorithm that progressed with as much top-down necessity, from 'its own perspective', as the sub-routines. It's not at all easy to call attention to this central issue when a person becomes heavily invested in not recognizing it, since the latter allows the results to confirm underlying feelings and support underlying goals (which of course would be rendered more transparent if the real-time perspective was brought out of the blind spot - Catch 22).

On the other hand, BK spotted this 'epistemic projection' in real-time discussion, perhaps because of his familiarity with computational dynamics and extensive contemplation of the superstitions surrounding AI. Unfortunately he could not elaborate much more on it and it came off as a somewhat prejudiced critique, just another competing interpretation of the results. Maybe it would make more sense to post on BK's blog and hope that he brings it up again to ML in a future discussion :)
"They only can acquire the sacred power of self-intuition, who within themselves can interpret and understand the symbol... those only, who feel in their own spirits the same instinct, which impels the chrysalis of the horned fly to leave room in the involucrum for antennae yet to come."
User avatar
Federica
Posts: 2492
Joined: Sat May 14, 2022 2:30 pm
Location: Sweden

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by Federica »

Cleric wrote: Mon Nov 11, 2024 5:39 pm
Federica wrote: Mon Nov 11, 2024 5:26 pm But then I wonder why you went into the details of the mathematical properties, the bubble-based metrics etcetera.
Simply because only in this way can we rigorously locate the 'windows' through which we introduce ghosts where there is no place for them. If such things were recognized, ML wouldn't imagine that the hybrid algorithm is circumventing potential barriers (which are seen as such only from his preferred metric) and thus wouldn't see an archetype of delayed gratification at work.

He is fully right that this is how true spiritual activity works and how we can push through unpleasant sensations because of higher-order insight and what awaits us on the other end. But it is simply incorrect to attribute the same to a flat marble machine. That's all.

Do you think there is any possibility that the idea of DG specifically could help ML obtain opportunistic material breakthroughs (to exaggerate for clarity: like cracking the code of how to regrow human anatomy)? I guess your answer should be no, otherwise it would mean that the algorithmic model with this feature would be (in the blind, but still) indirectly consistent with the NPSOC insight?
"On Earth the soul has a past, in the Cosmos it has a future. The seer must unite past and future into a true perception of the now." Dennis Klocek
User avatar
Cleric
Posts: 1931
Joined: Thu Jan 14, 2021 9:40 pm

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by Cleric »

AshvinP wrote: Mon Nov 11, 2024 5:54 pm On the other hand, BK spotted this 'epistemic projection' in real-time discussion, perhaps because of his familiarity with computational dynamics and extensive contemplation of the superstitions surrounding AI. Unfortunately he could not elaborate much more on it and it came off as a somewhat prejudiced critique, just another competing interpretation of the results. Maybe it would make more sense to post on BK's blog and hope that he brings it up again to ML in a future discussion :)
Right, BK's extensive experience with computers has probed the intuitive landscape of computational gestures much more thoroughly, and thus it is transparent to him when someone else tries to introduce ghosts in the gaps that they have not yet explored sufficiently. What worries me is that showing the error still doesn't tackle the problem of what to do afterward. When it comes to the real-time thinking activity, both BK and ML would be inclined to only model the 'real' processes on the other side that feel like thinking on the inner side. This is the real trouble :)
User avatar
AshvinP
Posts: 6367
Joined: Thu Jan 14, 2021 5:00 am
Location: USA

Re: Cell Intelligence in Physiological & Morphological Spaces

Post by AshvinP »

Cleric wrote: Mon Nov 11, 2024 7:07 pm
AshvinP wrote: Mon Nov 11, 2024 5:54 pm On the other hand, BK spotted this 'epistemic projection' in real-time discussion, perhaps because of his familiarity with computational dynamics and extensive contemplation of the superstitions surrounding AI. Unfortunately he could not elaborate much more on it and it came off as a somewhat prejudiced critique, just another competing interpretation of the results. Maybe it would make more sense to post on BK's blog and hope that he brings it up again to ML in a future discussion :)
Right, BK's extensive experience with computers has probed the intuitive landscape of computational gestures much more thoroughly, and thus it is transparent to him when someone else tries to introduce ghosts in the gaps that they have not yet explored sufficiently. What worries me is that showing the error still doesn't tackle the problem of what to do afterward. When it comes to the real-time thinking activity, both BK and ML would be inclined to only model the 'real' processes on the other side that feel like thinking on the inner side. This is the real trouble :)

In my own experience, coming across the famous 'present activity' description of GA 4 was one of the foundational experiences for realizing that, up until then, I had only been interested in f (drawing ((hands drawing triangles) drawing hands drawing triangles)) etc., and had failed to intimate that there could be another axis of investigating the inner side of the drawing function. Of course it took much longer to continue refining this feeling through all of our explorations on the forum, making it less abstract and more of an intimate experience that characterizes the real flow of daily thinking life.

Sometimes I wonder if realizing certain logical errors like this one could serve as a 'present activity' intimation for someone like ML. He has already invested a good deal into the idea that this algorithm experiment has explored radically new territory, unveiling new functional capacities that are always latent in the computational gaps. Suddenly realizing it is all based on a simple blind spot and corresponding error is a big dose of humility, as one gets the opportunity to view one's past spiritual activity in a whole new light. ML experiences something like, "I had been frankly shocked by these experimental results, and everyone on my team seemed to also be shocked and support the conclusions, but it turns out there was a simple error hiding in plain sight, related to my own thinking participation in the process, that demystifies the whole thing... what else have I been missing??" It could be a basic prompting to turn a bit more inward in these explorations.

But yes, that wouldn't go anywhere by itself if he doesn't follow up with some other hints about where/how to start 'looking' for the real-time thinking activity. As you initially remarked, the length factor is a big issue. Pointing out the error would need to be streamlined as much as possible, and then just a few brief hints dropped in at the end about a new way of exploring what it means for cognitive perspectives to be genuinely active from higher planes of causality, which we can certainly experience but cannot encompass/model with our familiar intellectual gestures. You already presented such a way when mentioning how we need a deeper inner scale to modulate habits, passions, etc., but I doubt he made it to that part.
"They only can acquire the sacred power of self-intuition, who within themselves can interpret and understand the symbol... those only, who feel in their own spirits the same instinct, which impels the chrysalis of the horned fly to leave room in the involucrum for antennae yet to come."