Federica wrote: ↑Mon Nov 11, 2024 2:54 pm
Clearly, Levin would not respond so in the face of the above, because the hybrid algorithm is run in
cell-view mode, thus it's not a representation of a single higher-order perspective. So the Turing machine example - as much as it makes it easier to get the iron necessity compared to an abstract sorting thought, because it stimulates sense perception - doesn't seem to fit here? Since Levin's sorting experiment is 'decentralized'. The "complex iterative computation" has been rewired, and it's no longer the model you are referring to, which implies an external coder, conceiving and executing the permutations 'from outside' the algorithmic space. For these reasons, as much as I am deeply critical of ML's research, it seems to me that this algorithmic simulation is structured with the successful intention to preserve the insight that there is no privileged scale of causation, an insight which allows one to model the "emergent" features as bottom-up feedback. Do you agree?
Federica, there's a misunderstanding here. I'll have to think harder about an elegant and simple example that captures the problem.
For the time being, think of the IFS. There you know that we iterate the image according to the L frames. Each L frame is like an affine transformation function (scale, rotate, skew, translate):

f(x) = L1(x) + L2(x) + ...

depending on how many L frames we have. (The web IFS tool also retains something of the previous image, so it would be more like

f(x) = k*x + L1(x) + L2(x) + ...

where k is some dimming factor, say 0.1, which keeps 10% of the previous image and adds the new transforms on top, but we can disregard this here.) When we start with the initial image x, then f(f(f(...f(x)...))) is our iterated function system.
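To make the iteration concrete, here is a minimal Python sketch of such an IFS. The names (Frame, ifs_step) and the point-cloud representation are my own illustrative assumptions; the web tool's internals may well differ:

```python
import numpy as np

# A minimal 2D IFS sketch. Each "L frame" is an affine map x -> A @ x + b
# acting on a cloud of points that stands in for the image.

class Frame:
    def __init__(self, A, b):
        self.A = np.asarray(A, dtype=float)  # 2x2 scale/rotate/skew matrix
        self.b = np.asarray(b, dtype=float)  # translation vector

    def __call__(self, pts):
        # Apply the affine map to an (N, 2) array of points.
        return pts @ self.A.T + self.b

def ifs_step(pts, frames, k=0.0):
    # One iteration of f(x) = k*x + L1(x) + L2(x) + ...
    # '+' here means set union: the new image is the transformed copies
    # of the old one, plus (optionally) a dimmed remnant of it.
    parts = [frame(pts) for frame in frames]
    if k > 0:
        parts.append(pts[np.random.rand(len(pts)) < k])  # keep ~k of old image
    return np.vstack(parts)

# Example: three half-scale frames yield the Sierpinski triangle.
frames = [
    Frame([[0.5, 0.0], [0.0, 0.5]], [0.0, 0.0]),
    Frame([[0.5, 0.0], [0.0, 0.5]], [0.5, 0.0]),
    Frame([[0.5, 0.0], [0.0, 0.5]], [0.25, 0.5]),
]

pts = np.random.rand(100, 2)  # the initial image x: random points
for _ in range(6):            # f(f(f(...f(x)...)))
    pts = ifs_step(pts, frames)
```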
It is possible to produce so-called hybrid fractal images by using two or more sets of L frames. For example, if we use two different sets (let's call them f and g), we can transform the initial image once with the first, then twice with the second, then repeat: ...g(g(f(g(g(f(x))))))...

We can choose any pattern we want, or as many functions as we want. We can even choose the next function to iterate with through some random generator. This results in interesting images that seem to combine features of both fractals. Most of the interesting artistic 3D fractals are produced through such hybrid functions.
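In code, the alternation could be sketched like this, building on the ifs_step above (again, the function names and the fixed "fgg" pattern are just illustrative assumptions):

```python
import random

# Hybrid IFS sketch: two frame sets, f and g, applied either in a fixed
# pattern (once f, twice g, repeating) or chosen at random each step.

def hybrid_ifs(pts, f_frames, g_frames, pattern="fgg", n_iter=9):
    # Fixed pattern: ...g(g(f(g(g(f(x))))))...
    for i in range(n_iter):
        frames = f_frames if pattern[i % len(pattern)] == "f" else g_frames
        pts = ifs_step(pts, frames)
    return pts

def random_hybrid_ifs(pts, f_frames, g_frames, n_iter=9):
    # The next function can also be picked by a random generator; the
    # composite is still a single program transforming one state.
    for _ in range(n_iter):
        pts = ifs_step(pts, random.choice([f_frames, g_frames]))
    return pts
```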
Since such a hybrid is a fractal image, it is very interesting how organic it looks. It's not simply one half being the one fractal and the other half the second - they are organically combined: the forms of f seem to be made of the forms of g, which in turn are made of f-forms, and so on.
The key point, however, is that this whole hybrid function is still one algorithm, a single Turing machine. The same holds for the sorting simulation. The algotypes are not floating independently in free computational space; they are simply functions that are applied from within a greater hybrid function. The latter is still a valid Turing machine. It can be implemented in the marble computer.
To put it bluntly, all talk about bottom-up and top-down coders, independent cells, and so on, only breeds confusion and obscures the simple fact that, after all, everything is still a Turing machine, marble or otherwise.
I don't want to assert this fact just like that, and that's why I invited you to think about at what point a marble machine can no longer implement the simulation of 'individual cells' following their algotypes. If you suggest that such a marble machine is impossible, then how did ML implement the simulation? Didn't he write a program that applies the different algotypes according to cell type? By doing this, doesn't he practically write a new kind of hybrid algorithm that simply uses the algotypes as sub-routines? The confusion can arise only when we imagine that the sub-routines of this hybrid algorithm are somehow not part of the total Turing machine.
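To make the point tangible, here is a schematic Python sketch of such a simulation (my own illustration, not ML's actual code; the rule names and the dispatch loop are assumptions). Every cell follows its local algotype and no cell knows its neighbors' algotypes, yet the whole thing is one ordinary program that a marble machine could, in principle, implement:

```python
import random

# Schematic 'cell-view' sorting sketch. Each cell is a (value, algotype)
# pair; swaps move the algotype along with the cell. The algotypes are
# local rules, yet they run as sub-routines of a single top-level loop.

def bubble_rule(cells, i):
    # Local bubble-sort move: swap with the right neighbor if out of order.
    if i + 1 < len(cells) and cells[i][0] > cells[i + 1][0]:
        cells[i], cells[i + 1] = cells[i + 1], cells[i]

def selection_rule(cells, i):
    # Local selection-style move: swap with the smallest value to the right.
    j = min(range(i + 1, len(cells)), key=lambda r: cells[r][0], default=None)
    if j is not None and cells[j][0] < cells[i][0]:
        cells[i], cells[j] = cells[j], cells[i]

RULES = {"bubble": bubble_rule, "selection": selection_rule}

def simulate(values, algotypes, steps=500):
    cells = list(zip(values, algotypes))
    for _ in range(steps):
        i = random.randrange(len(cells))  # no central plan of which cell acts
        RULES[cells[i][1]](cells, i)      # the cell's algotype as a sub-routine
    return [v for v, _ in cells]

# Each rule only removes inversions, so the list tends toward sorted order.
print(simulate([5, 3, 1, 4, 2],
               ["bubble", "selection", "bubble", "selection", "bubble"]))
```

Note that the 'decentralization' lives entirely inside RULES and the random choice of which cell acts; the dispatcher that applies them is still a single, central loop. So the following: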
Federica wrote: ↑Mon Nov 11, 2024 2:54 pm
So, not only is the metric not dependent on the algorithm, but also every cell moves independently, and without knowing the algotype of its neighbors, so the higher-order ghost pushed out of the door is not getting back in through the window. In other words, there's no centralized, single-plane algorithm implemented downward from a higher-order ghost who cares / doesn't care.
is incorrect, because there is a central algorithm - it's the total simulation program that ML wrote! Just because we don't know beforehand how this new algorithm will behave when run doesn't mean that the whole program is not a single Turing machine. If we decouple from mystical feelings, the total simulation is not that different from the hybrid IFS, which transforms the state according to more complicated rules.