Paul Humphreys: Emergence – The Brains Blog
4. Conceptual Emergence and Neural Networks

Conceptual emergence occurs when, in order to understand or effectively represent some phenomenon, a different representational apparatus must be introduced at the current working level. Such changes in representation are common in the sciences, but they have usually been considered in connection with changes in synchronic representations. Here, I’ll consider a diachronic example drawn from recent work on convolutional neural nets for image recognition.

To start, consider the types of representation in the table below. (This classification is not exhaustive – I have omitted conscious and unconscious representations as a third dichotomy, for example.)

 

Type of representation   Characteristic feature
explicit                 No transformations of the representation are required to identify the referent
implicit                 Transformations on the representation are needed to identify the referent
transparent              Open to explicit scrutiny, analysis, interpretation, and understanding by humans
opaque                   Not open to explicit scrutiny, analysis, interpretation, or understanding by humans

 

An example of a transparent explicit representation is the axioms for arithmetic; an example of an opaque explicit representation is the contents of hieroglyphics before the discovery of the Rosetta Stone; and an example of a transparent implicit representation is the content of encrypted messages. What about opaque implicit representations? One example is a sinogram. These are the internal representations that are ‘seen’ by computer-assisted tomography machines. Below on the left is an example of a sinogram; on the right is the resulting image when an inverse Radon transformation is applied to the sinogram, producing a transparent, explicit representation. The latter representation is of the input that produced the sinogram on the left.[1]

[Figure: a sinogram (left) and the image obtained by applying an inverse Radon transformation to it (right).]
In virtue of transforming the representation on the left into the representation on the right, we move from an opaque implicit representation to a transparent explicit representation.
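
To make this transformation concrete, here is a minimal sketch, assuming NumPy and scikit-image are available, that generates a sinogram from a standard test image with the Radon transform and then recovers an explicit, human-readable image from it with the inverse Radon transform (filtered back-projection). The function and parameter names (shepp_logan_phantom, radon, iradon, filter_name) follow recent scikit-image releases; older versions differ slightly.

    # A sketch of the move from an opaque, implicit representation to a transparent,
    # explicit one: the Radon transform turns a test image into a sinogram, and the
    # inverse Radon transform recovers a recognizable image from that sinogram.
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    image = rescale(shepp_logan_phantom(), scale=0.25)    # a standard CT test image
    theta = np.linspace(0.0, 180.0, max(image.shape))     # projection angles in degrees

    sinogram = radon(image, theta=theta)                  # opaque, implicit representation
    reconstruction = iradon(sinogram, theta=theta,        # transparent, explicit image
                            filter_name="ramp")

    print("sinogram shape:", sinogram.shape)
    print("reconstruction shape:", reconstruction.shape)

Plotting sinogram and reconstruction side by side reproduces the contrast shown in the figure above: the sinogram is uninterpretable by eye, while the reconstruction is immediately recognizable.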

One of the advantages of many diachronic accounts of emergence is that they attempt to explain how emergence occurs, rather than leaving it as a brute fact. In cases of conceptual emergence, one way to do this is to turn opaque representations into transparent representations. Consider the use of convolutional neural nets (CNNs) for efficient image recognition.[2] This kind of artificial intelligence is of interest to us because it seems to have many of the characteristic features of a system within which emergence occurs: it has the diachronic equivalent of levels, it employs novel representations at each layer, and although each layer depends on the content of the previous layer, the representation at each layer can be considered autonomous.

The success of early CNNs was not well understood because their inner representations were opaque and implicit. But with the right kind of transformations, using deconvolutional networks to map the representations in a layer back to pixel space, certain kinds of opaque implicit representations can be converted into transparent explicit representations. Under this type of transformation, the first layer of the network can be seen to take pixels as inputs and to represent edges. The next layer takes those representations of edges as input and internally contains representations of combinations of edges. Moving from layer to layer in this way results in a representation that classifies the input correctly with high probability. The transformations that provide this representational content also reveal an important element of compositionality in the representations contained in these CNNs, and that compositionality undermines the original sense that emergent processes are occurring. This kind of detailed investigation into the relations between successive layers stands in sharp contrast to appeals to abstract dependency relations, which leave the connections mysterious and unexplained.
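
To give a concrete picture of the layer-by-layer structure, here is a minimal sketch, assuming PyTorch and a recent torchvision, of the two most direct ways of inspecting what a CNN represents: reading off the first layer's filters (which in trained networks typically look like oriented edge detectors) and capturing a deeper layer's feature maps with a forward hook. This is not the Zeiler and Fergus deconvolutional method itself, which additionally projects those activations back to pixel space, but it shows the raw material such a projection works on.

    # A sketch of how a CNN's internal representations can be opened up for inspection.
    import torch
    import torchvision.models as models

    # A standard image-classification CNN. Pretrained weights would give the familiar
    # edge-like first-layer filters; weights=None keeps the example free of downloads.
    model = models.resnet18(weights=None)
    model.eval()

    # 1. First-layer filters are explicit tensors of shape (out_channels, in_channels,
    #    kH, kW); plotting them as small images is the simplest way to make them
    #    transparent.
    print("first-layer filters:", tuple(model.conv1.weight.shape))   # (64, 3, 7, 7)

    # 2. Deeper representations live only in the activations. A forward hook captures
    #    the feature maps a chosen layer produces for a given input.
    activations = {}

    def save_activation(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    model.layer2.register_forward_hook(save_activation("layer2"))

    x = torch.randn(1, 3, 224, 224)        # dummy input standing in for a real image
    with torch.no_grad():
        model(x)

    print("layer2 feature maps:", tuple(activations["layer2"].shape))  # (1, 128, 28, 28)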

There is still the question, which is open as far as I know, of whether the transparent representations are an artifact of the transformations used. If any readers have insights into this, I’d be glad to hear them. For those interested in learning more about this area, the website of the Human and Machine Intelligence Group at the University of Virginia, of which I am co-director (https://hmi.virginia.edu/), has slides from a number of lectures on the topic.

Paul Humphreys, Emergence (OUP, 2016)

I’ll conclude with one important observation. A completely general account of emergence is unlikely in the near future. One reason is that there is good evidence that we operate with several distinct concepts of emergence: ontological, inferential, and conceptual. Perhaps an account unifying these approaches will be formulated in the future, but as things stand much time can be wasted arguing that a theory of emergence cannot be correct because there are plausible examples that do not fit it. Unlike the case of causation, there is too much disagreement about what counts as a core case of emergence for such examples to serve as effective falsifiers. Yet despite this spectrum of emergence types, there is one feature that should play a role in any argument for or against a particular theory of emergence: to be emergent, a feature must emerge from something (else). For conceptual emergence under psychologism, concepts emerge from other concepts through philosophical or scientific analysis. For inferential emergence, consequences can emerge from the original representations through ratiocination. For ontological emergence, how one thing emerges from another is reasonably clear in diachronic cases, as we saw in essays 1 and 2. But in what sense does one relatum of a synchronic dependence relation emerge ontologically from the other? Unless there is a clear answer to this question, we do not understand why cases of synchronic emergence appear, and it is reasonable to conclude that synchronic dependency relations cannot capture cases of ontological emergence, but only inferential or conceptual emergence.


[1] I have been told that expert radiographers can visually identify the content of some sinograms, although certainly not all.

[2] See, for example, Matthew D. Zeiler and Rob Fergus, ‘Visualizing and Understanding Convolutional Networks’, arXiv:1311.2901v3, 2013, although I note that at the current time there is no general agreement about the correct interpretation of CNN operations.


