Abstract: We have proven that any implementation of the concept of `copy number' underlying Assembly Theory (AT) and its assembly index (Ai) is equivalent to Shannon entropy and not fundamentally or methodologically different from algorithms such as ZIP compression. Here we show that the weak empirical correlation between Ai and LZW, which the authors have offered as a defence, is based on an incomplete and misleading experiment. When the experiment is completed, the fast asymptotic convergence to LZ compression and Shannon entropy is undeniable, just as our mathematical proof of equivalence remains undisputed. This contribution completes the theoretical and empirical demonstration that any variation of the copy-number concept underlying AT, which entails counting the number of object repetitions `to arrive at a measure for life,' is equivalent to statistical compression and Shannon entropy. We demonstrate that the authors' `we-are-better-because-we-are-worse' argument against compression does not withstand basic scrutiny, and that their empirical results separating organic from inorganic compounds have not only been reported before, without the accompanying claims to unify physics and biology, but are driven solely by molecular length, a variable the authors did not control for. We show that Ai is a particular case of our Block Decomposition Method (BDM) index, introduced almost a decade earlier, and that arguments attributing special stochastic properties to Ai are misleading: the properties of Ai are not unique, but are exactly those of Shannon entropy, which was designed in the first place for the quantification of uncertainty and which we have proven to be equivalent to Ai. Shannon entropy is not merely equipped to deal with stochasticity; it was designed for it. This makes AT redundant, especially when applied to the authors' own experimental data.
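As a minimal illustration of the convergence claim (this is a sketch, not the experiment reported here or the authors' code), the following Python snippet compares a copy-number-style count of repeated blocks with an empirical Shannon entropy estimate on random versus highly repetitive strings. LZ78 parsing stands in for LZW, and the block length k=8 in the entropy estimator is an arbitrary illustrative choice; both function names are ours.

```python
from collections import Counter
from math import log2
import random

def lz78_phrase_count(s: str) -> int:
    """Count LZ78 parsing phrases. Repeated content re-uses dictionary
    entries, so highly repetitive strings yield far fewer phrases: a
    copy-number-style count of novel blocks."""
    phrases, current = set(), ""
    for ch in s:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def block_entropy_rate(s: str, k: int = 8) -> float:
    """Empirical Shannon entropy over non-overlapping k-symbol blocks,
    in bits per symbol (k = 8 is an arbitrary illustrative choice)."""
    blocks = [s[i:i + k] for i in range(0, len(s) - k + 1, k)]
    n = len(blocks)
    return -sum(c / n * log2(c / n) for c in Counter(blocks).values()) / k

if __name__ == "__main__":
    random.seed(0)
    for n in (1_000, 10_000, 100_000):
        for label, s in (("random", "".join(random.choice("01") for _ in range(n))),
                         ("repetitive", "01" * (n // 2))):
            c = lz78_phrase_count(s)
            lz_rate = c * log2(c) / n  # approximate LZ78 code length per symbol
            print(f"n={n:7d} {label:10s} LZ78 rate ~ {lz_rate:4.2f}  "
                  f"H ~ {block_entropy_rate(s):4.2f} bits/symbol")
```

Under these assumptions, the per-symbol LZ78 code length tracks the empirical entropy rate as n grows: both stay near 1 bit/symbol for random binary strings and vanish for the highly repetitive string, which is the asymptotic behaviour any copy-number-based count inherits.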