---
title: Mechanism Script
---

{%hackmd theme-dark %}

## Introduction

![](https://i.imgur.com/43pgk1j.jpg)

- To understand the limitations of the mechanistic approach to explaining phenomena, we first need a way to measure the quality of an explanation.
- We adopt the notion of parameter sufficiency as described by Tudor M. Baetu, and largely follow his article (Baetu, 2015) to assess the quality of mechanistic explanations.
- We focus our discussion on completeness and pragmatics, built around a synthetic phenomenon well known to our group members: neural networks. Finally, we offer some thoughts on the capabilities of mechanisms and why they remain relevant despite the arguments that follow.

## Context

![](https://i.imgur.com/EJ7Kt74.jpg)

- We typically approach mechanisms as descriptions that can take a general form and become more refined the more detail we wish to understand: the question "Who broke the window?" calls for a less detailed explanation than "How did the window break into 8 pieces?", and we dig deeper into the characteristics of the mechanism and its parts to reveal what produces the phenomenon.
- But where does this refinement end, and when is the mechanism "complete" in the sense that it clearly explains what is responsible for the phenomenon?

![](https://i.imgur.com/c53mBxZ.jpg)

- We base our measure of explanation quality on the idea of synthetic repetition, that is, ==our capability of reproducing a phenomenon consistently.==
- We say that a mechanism successfully explains something when, no matter how complicated a synthetic reproduction of the phenomenon is, we can be certain that if we set up the same initial conditions measured in the empirical case, we get the same measured result.
- ==This is what we call a parameter-sufficient explanation.==
- Given this, we also argue that a (possibly mechanistic) explanation is complete when it satisfies parameter sufficiency.

## How much detail yields parameter sufficiency?
![](https://i.imgur.com/Qyz9dEu.jpg)

- Parameter sufficiency has a subtlety in its definition.
- Because it is rooted in measurement, the resolution of our explanations is limited to the resolution of our measurements and of the environment in which those measurements are taken.

## Two questions for a machine learning engineer

![](https://i.imgur.com/Eue64Cm.jpg)

- In our example we may go to a machine learning engineer and ask two questions which seem very alike but actually give a glimpse of what we mean by measurement.
- If we ask, "Can you train me a NN that can recognize written digits?", the engineer would happily say, "Sure, I'll be done in an hour at most."
- But if we were to ask, "Can you build me a NN that can recognize written digits?", the engineer would reply, "Let me get a couple of Ph.D.s and maybe we can do it."
- ==This is because the measurement is different: the first question asks for a well-known method that is repeatable and understood, namely the process of training a NN.==
- The second question, however, asks us to set by hand the parameters that determine the states of the neurons, without any training process. That task is practically impossible to do from scratch; the best chance is to copy the parameters of another network.
- Both of these synthetic reproductions end up with the same result, but what makes our understanding of NNs lacking is the inconsistency of the first method: we never get two NNs that have exactly the same parameters, achieve exactly the same accuracy, or use exactly the same "thought process" to make predictions. If we were instead able to build NNs by directly setting their perhaps thousands of parameters, we would have the understanding needed to systematically reproduce such a predicting system, and there would be no need for any "learning" process.
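The "train vs. build" asymmetry can be illustrated with a toy sketch in plain Python (a hypothetical single-neuron example, not part of the original discussion material, with illustrative names like `train` and `accuracy`): the training *procedure* is perfectly repeatable, yet two runs of it produce networks with different parameters even though they behave identically.

```python
import math
import random

# Toy dataset: the logical AND function on two binary inputs.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def train(seed, steps=2000, lr=0.5):
    """Train a single logistic neuron by gradient descent.

    The procedure is fixed and repeatable, but the learned
    parameters depend on the random initialisation.
    """
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b = rng.uniform(-1, 1)
    for _ in range(steps):
        for (x0, x1), y in DATA:
            p = 1.0 / (1.0 + math.exp(-(w[0] * x0 + w[1] * x1 + b)))
            err = p - y  # gradient of cross-entropy w.r.t. the pre-activation
            w[0] -= lr * err * x0
            w[1] -= lr * err * x1
            b -= lr * err
    return w, b

def accuracy(w, b):
    """Fraction of examples classified correctly (threshold at 0)."""
    hits = sum(((w[0] * x0 + w[1] * x1 + b) > 0) == bool(y)
               for (x0, x1), y in DATA)
    return hits / len(DATA)

# Two runs of the *same* repeatable procedure:
w1, b1 = train(seed=1)
w2, b2 = train(seed=2)
print(accuracy(w1, b1), accuracy(w2, b2))  # same behaviour
print((w1, b1) == (w2, b2))               # different parameters
```

Both runs end up with a neuron that computes AND, yet their weights differ: the *process* is parameter sufficient, while the resulting parameters themselves are not something we could have written down from scratch.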
## Conclusion

![](https://i.imgur.com/Q4xvgY6.jpg)

- Given the previous example we see that ==the quality of an explanation and its level of completeness are proportional to the detail of its description and measurement.==
- We therefore take the engineering approach: a complete mechanism is one that explains its phenomenon with enough information to reconstruct it according to its original description.
- This gives us a baseline where we do not attempt to refine our mechanisms beyond what we have described, and so we can have complete mechanisms.
- This comes with the understanding that careful attention must be paid to the description of the phenomenon, checking whether that description is accurate enough to satisfy whatever purpose we have.
- And so, in the end, we simply delegate the completeness problem to the previous descriptive layer of understanding.