machine-to-human communication
Black box models make us feel uneasy. We want an intuitive grasp of how a computer reaches a certain conclusion. (For legal reasons, this is actually a must-have feature, not just a nice-to-have one.)
However, to exhibit such a capacity, a computer needs to be able to
- model its thought processes, and
- communicate the resulting model in a human-understandable way.
Let us recall the correspondence between physical phenomena and cognitive models from the previous blog post on domains of cognition:
- Environment <-> Perceptions
- Body <-> Emotions
- Brain <-> Consciousness
Hence, the first step, a model being able to model itself, is akin to it having some sort of consciousness. A tough problem indeed!
The second step, turning the (quantitative) model of a model into something (qualitatively) communicable, amounts to the formation or adoption of a language that chunks the world into equivalence classes. (We call these equivalence classes "words".)
Qualitative communication of fundamentally quantitative phenomena is bound to be lossy, because information gets lost at each successive modelling step.
- That is essentially why writing good poetry is so hard. Words are like primitive modelling tools.
- Good visual artists bypass this problem by directly constructing perceptions to convey perceptions. That is why conceptual art can feel so tasteless and backward. Art that needs explanation is not art. It is something else.
- Similarly, good companions can peer into each other's consciousnesses without speaking a word.
Instead of expecting machines to make a discontinuous jump to language formation, we should first endow them with bodies that allow them to sense the world which they can then chunk into equivalence classes.
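The chunking idea above can be made concrete with a toy sketch. Here a continuous sensory reading (temperature, as an arbitrary illustrative choice) is mapped into a handful of equivalence classes, the "words", and a listener who receives only the word can recover at best a representative of the class. All names and thresholds below are hypothetical, chosen just to show the mechanism:

```python
# Toy sketch: a "language" as a partition of a continuous sensory space
# into equivalence classes, and the information lost when quantitative
# readings are communicated qualitatively. Thresholds are arbitrary.

def word_for(temperature_c: float) -> str:
    """Chunk a continuous reading into one of a few equivalence classes."""
    if temperature_c < 10:
        return "cold"
    if temperature_c < 25:
        return "mild"
    return "hot"

def reconstruct(word: str) -> float:
    """A listener can only recover a representative of the class."""
    return {"cold": 5.0, "mild": 17.5, "hot": 30.0}[word]

readings = [3.2, 12.8, 24.9, 31.0]          # quantitative phenomena
words = [word_for(t) for t in readings]      # qualitative message
recovered = [reconstruct(w) for w in words]  # what the listener infers
loss = [abs(t - r) for t, r in zip(readings, recovered)]

print(words)  # ['cold', 'mild', 'mild', 'hot']
print(loss)   # the per-reading information lost in the chunking
```

Note how 12.8 and 24.9 collapse into the same word "mild": once chunked, the difference between them is unrecoverable, which is the lossiness described above in miniature.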