This might boil down to a fancy way of saying something that should be obvious, but I found it interesting. When we talk about “knowledge” in the abstract, we usually think of our brains the way we think of a hard drive: I now have more stuff in there, delineated and built up from a set of logical statements and propositions; that is, I could, if asked and given the time, explain and write out everything I know. Yet our “knowledge” can go beyond this, to skills that can’t be boiled down to logical propositions but that no doubt improve our efficacy. Imagine if our brain were a computer, but as we added more “knowledge” the processor got faster, could handle more information at once, and found the right answer and applied it more quickly, accurately, and consistently. (Pretty much the opposite of how computers actually work; they tend to run best when they’ve been used the least.)
Well, there is a fair amount of formal thought on this. I’ve touched before on how brain processes in general can affect players, but what about coaches and the other heat-of-the-moment decisions that aren’t athletic at all?
Tacit knowledge, by Daniel Little: Scientist and philosopher Michael Polanyi introduced the idea of “tacit knowledge” in his 1958 book, Personal Knowledge: Towards a Post-Critical Philosophy (Google Books link), as a critique of the positivist conception of scientific knowledge and the idea of knowledge as a system of logical statements. Polanyi was trained as a physician before World War I; worked as a research chemist between the wars; and found his voice as a philosopher of science subsequently.
This [concept] captures an important dimension of knowledge absent in most philosophical treatments of epistemology (“knowledge is a system of true justified beliefs”). The simple idea is that there are domains of knowledge [not] represented propositionally or as a system of statements, but [rather] embodied in the knower’s cognitive system in a non-propositional form. This … is more analogous to “knowing how” than “knowing that”. Polanyi gives the example of a physician in training learning to “read” an x-ray. What is first perceived simply as an unintelligible alternation of light and dark areas, eventually is perceived by the experienced radiologist as a picture of a lung with a tumor. So the physician has somehow acquired a set of perceptual and conceptual skills that cannot be precisely codified but that permit him/her to gain a much more knowledgeable understanding of the patient’s hidden disease than the novice. . . .
It is unremarkable to observe that many aspects of skilled performance depend on “knowledge” that cannot be articulated as a set of statements or rules (link). The basketball guard’s ability to weave through defenders and find his way to the basket reflects a complex set of representations of the court, the defenders, and probable behaviors of others that can’t be codified. So it seems fairly straightforward to conclude that human cognition incorporates representations and knowledge that do not take the form of explicit systems of statements. Rather, these areas of knowledge are more “intuitive”; they are more akin to “body knowledge.” But they are nonetheless cognitive; they depend on experience, they can be criticized and corrected, and they are representational of the world. (Talk to a skilled athlete about a complex task like finding a shot on goal in hockey or beating the defender to the basket, and you will be struck by the degree of intuition, gestalt, and realism that is invoked. And the same is true if you talk to an experienced labor organizer or a police detective.)
What is striking about Polanyi’s position in Personal Knowledge is that he shows that [this does] not pertain solely to physical skills like wine-making, playing basketball, or piloting a tug boat. Instead, [it extends] deeply into the enterprise of scientific knowledge itself. An experimental chemist or physicist has an uncodified ability to interpret instruments, evaluate complex devices, or recognize unexpected results that is the result of experience and training and cannot be reduced to a recipe or algorithm. [The same goes for football coaches seeing patterns in defenses, etc.]
. . . This line of thought converges to some extent with arguments that Hubert Dreyfus advanced in What Computers Can’t Do: A Critique of Artificial Reason in 1972. (Here is his update of his position, What Computers Still Can’t Do: A Critique of Artificial Reason.) Dreyfus was fundamentally critical of artificial intelligence research in the 1960s, and the strategy of attempting to codify expert knowledge in the form of a set of rules that could then be implemented as computational algorithms. His position was a phenomenological one; basically, he took issue with the idea that cognitive competences like chess-playing, problem-solving, or pattern recognition could be reduced to a set of precise and separate rules and statements. Instead, there is a holistic aspect in ordinary practical knowledge that cannot be reduced to a set of discrete algorithms.
Of special interest for Understanding Society is the question of whether ordinary people have “tacit social knowledge” of the social world they inhabit (link, link, link). What is involved in the competence a person demonstrates when he/she successfully navigates a formal dinner or a contentious union-hall argument? Can the knowledge that the competent social participant has about expected behavior from particular individuals be represented as a sum of propositional beliefs? Or, more plausibly, is this a good example of tacit knowledge, more akin to a rough map of a terrain than a codified set of statements? . . . All of this makes me think that we need richer models of mental life and competence than we currently possess (link). [Coaching and executing on the field strike me as being as much a matter of tacit knowledge as of any kind of formal knowledge.]