AshvinP wrote: ↑Thu Sep 12, 2024 11:51 pm
I don't know if this discussion was shared before, but it's fascinating. I am halfway through, and essentially it has shaped into a sort of debate between the spiritual-phenomenological idealistic perspective (Levin) and the analytic idealist perspective (BK). BK takes issue with Levin because he is making 'epistemic projections' of higher-order cognitive functions, i.e. teleological agency, onto simple living organisms, which BK basically conceives as instinctive macro-programs within MAL. The criticism begins around 26 min. Levin then launches into a series of penetrating insights that, frankly, I think sail right past BK because the first-person cognitive perspective is thoroughly in the blind spot. Levin points out that what BK calls 'epistemic projection' is what is always happening, because "everything is a perspective of some agent, everything", and projection of agentic qualities is therefore another way of speaking about how agentic relative perspectives interface with one another.
BK even roots his criticism in CGOL (Conway's Game of Life) and the fact that simple mechanical rules can give the appearance of complex functioning systems, but to attribute agency or goals to such systems would be 'epistemic projection'. He then tries to apply that across the board to the goal-directed behavior of living organisms. It shows how the depth gradient simply isn't suspected by BK, which is something that Levin also mentions: that everything is on a spectrum. I think Levin also intuits that there may be some connection between lower elemental cognitive perspectives and potential higher-than-human cognitive perspectives with much more temporally extended 'light cones', of which the elemental perspectives are reflections, but it remains nebulous and not something he can speak to directly through his empirical research.
Overall, it is a fascinating case study of how an intuitive thinker, starting from a strictly phenomenological and even 'materialist' perspective, or at least a perspective rooted in the transformations of perceptual phenomena through experimentation, can reach the insight that reality is composed solely of 'competing and cooperating agential perspectives', while an analytic thinker, starting from a metaphysical and idealist perspective, can gradually occupy the position of the materialist reductionist, waving off all insights rooted in disciplined and assumption-free empirical investigation as "epistemic projection" simply to preserve a rigid metaphysical position. BK even says plainly that he is an "extreme reductionist", trying to "reduce the complex to the simple".
With that said, #2 in this series was more concerning because it showed that neither Levin nor BK see any ethical issues with plunging forward into regenerative medicine and transhumanist technologies. Some of the stuff he mentions and applauds in terms of body modeling, purely based on the gratification of immediate desires, is downright scary to think about. This is why someone like Levin desperately needs to encounter the perspective of esoteric science and the principle of 'as below, so above', because otherwise, he has no reason to suspect there are better, more organic, and harmonious inner ways to address the illness and suffering in the World. Instead, he will attain remarkable clinical results within a matter of a few years and then there will be absolutely no incentive to pause and reconsider what's at stake.
Thank you, Ashvin!
Great talk indeed. ML's elaboration on the Platonic processes was new to me. I hope he can hold on to that direction.
Alas, it is clear that there's something still missing in the comprehension of spiritual activity. I've gone through his article on the sorting algorithms. To be honest, BK's objections were not without warrant (the objections specifically to the sorting-algorithms work, not in general).
If ML doesn't recognize what he's doing in that case, the door is open for a very pernicious kind of superstition. The whole sorting-algorithms simulation can be symbolized at a higher level of abstraction as a function, let's say f. The initial distribution of the numbers can be represented as x. Thus, f(x) is the application of one step of the simulation. Then we take the result and apply the simulation step again, giving f(f(x)). This is a simple iterated function system (IFS). Here we have seen many times how such repeatedly applied functions exhibit certain attractors. There's nothing mystical about this; it's not that different from the fact that 1/x tends to zero as we increase x. It's simply the quantitative behavior of the expression. Functions are mappings: they map x -> y. When the mappings are not linear, it's fully possible that certain x-es land more closely together in y-space, others further apart. Those that land closer together we say are 'attracted'. Of course, it would be misleading to imagine that some forces or strings pull the points together.
In this sense, the fact that the two algotypes seem to 'group together' is no different from looking at an IFS fractal and recognizing that the video feedback accumulates in certain positions, depending on the positions of the frames. So BK is right to call this out. Suggesting that these groupings are exhibits of basal cognition, delayed gratification, etc. is really a very superstitious thing to suggest.
And to an extent this is understandable. ML is divided between his higher intuitions and his classical habits of a behavioral scientist. We can often observe that. Even though he speaks of minds and cognition, most of the time he is in a purely behaviorist mode (if it quacks like a duck ...). In other places, when he speaks of the first-person perspectives, things go very nicely. So there's a clear sign here of the hysteresis process/bistable condition that hasn't yet found its resolution.
I think it was a talk on Curt Jaimungal's TOE channel where I first heard ML use that CGOL analogy. CJ tried to push further in that direction, but ML was quite vague. It struck me that there, too, things didn't reach the crux of the matter, even though at one point BK mentioned weak and strong emergence. Both on CJ's show and here, ML speaks as if at the lowest level we indeed have only the basic rules of the cellular automaton. In a true CA (such as CGOL), the rules don't 'know' or 'care' whether their cells are in the shape of random noise, a glider, or a Turing Machine. Here I completely agree with BK: in this particular case, the higher-order shapes in CGOL have significance and meaning only for our own cognition. ML tried to point out that for the structures it makes a difference whether they see themselves at a higher level of abstraction or not, yet he agrees that in the end the fundamental rules are all the same and at the lowest level. This is the point that really hurts me and which ML seems to consistently overlook.
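To make that concrete, here is a minimal CGOL step of my own (a toy sketch, not anything from ML's work). The update rule only ever counts neighbours; nowhere does it reference a 'glider', yet the glider pattern reappears translated by (1, 1) after four steps.

```python
from itertools import product

def step(live):
    # One Game of Life generation on a set of live cell coordinates.
    # The rule is 'blind' to higher-order shapes: it only tallies
    # how many live neighbours each cell has.
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    # Birth on exactly 3 neighbours; survival on 2 or 3
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The standard glider pattern
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(g == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The 'glider' exists only in our description of the grid; the rule that propagates the World flow here is exactly the same whether the cells form a glider or random noise, which is why attributing significance at that level requires something more than the basic rules.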
It is so easy to get this point straight, and it really saddens me that neither CJ nor BK led the conversation to the crux of the matter. ML simply needs to come clean about whether a higher-order perspective can will its transformations in novel ways informed precisely by this higher-order view of the World flow. If this higher-order perspective is nothing but a passive view of the CA World flow, where everything is still propagated at the lowest level by the basic rules, then the whole thing about higher-order structures is irrelevant. If I have God-level consciousness of the World flow but this flow is fully determined by a handful of rules at the lowest level, what difference does it make? We can speak of passive consciousness in this case, but no real agency that originates at that level. Any perceived agency would be only the playback of an illusion resulting from the fundamental rules (this is the basic reductionist attack on free will). It would make a difference only if my higher-order view also gives me a correspondingly novel leverage point through which I can will the transformations of the flow in a different way. In other words, the perspective of the higher-order minds has factual significance only if they can bend the World flow in ways that can never be accomplished from the basic rules alone.
This is the difference between weak and strong emergence. It's nothing new. Weak emergence is simply the fact that we recognize patterns that are meaningful to us, while the rules are fixed at the lowest level. Strong emergence is the view that the higher-order patterns incarnate completely new rules of transformation that can never result from the basic rules alone.
From everything ML speaks about, one would say that he's most certainly all in for strong emergence. I personally have always thought that he implies it. But when he speaks of the CGOL examples, I'm puzzled. I almost think that his reductionist and intuitive selves are waging an inner battle. He feels that higher-order perspectives should exhibit novel behavior (and this is even confirmed by his Platonic examples), yet when he needs to go down into the details, he is tempted to flatten everything and say "Well, fundamentally it's all just physics following simple rules". I'm not sure how conscious he is of this inner conflict. I really hope he faces it and resolves it, because otherwise, if his reductionist self stays in power, superstitions like those implied in the sorting paper will only grow more out of control (this includes his view that AI/computation can also be considered a form of cognition - this is precisely the behaviorist ML speaking, who only looks at the quacking). I really hope he doesn't go down that road, because everything else in the talk was really great. There's so much potential in his work to raise consciousness (putting aside the dangers you mentioned about modifying our bodies technologically, which are indeed another downward road).
And as always, it all stems from the fact that the Philosophy of Spiritual Activity is missing, which alone could spiral the hysteresis into unity. Without it, one continues to bounce between an intuitive, yet still instinctive, flow and an 'objective', behaviorist, flattened view of reality.
On the other side, BK seems to have really embraced his dissociation. It has practically become a dogma, and naturally, all intellectual strivings are now seen simply as mere floating abstractions that can't even say what a 'thing' is. But I'm glad that he enters talks with ML. Even though he doesn't seem inwardly moved (though he acts politely), I hope he can take something home for meditation.