Round-table discussion

"Neurocomputers 10 years after"

The semantic neuron-like network: the next step in neurocomputing

V. Bodyakin

Institute of Control Sciences, Moscow, Russia

Comparing my position with the opinions stated by the participants of the round table, I would like to offer the following point of view.

The absence of the expected grand successes in neurocomputing (NC) and neural networks (NN) can be attributed to the lack of a fundamental understanding of the mechanisms of brain operation, and also to the absence of deductive models of self-organizing processes in neuron-like media.

Experimental neurophysiology today cannot give an exact description of the operation of either a single neuron or ensembles of neurons, owing to the complexity of the structure under investigation. At the same time, there is no ordering of models by the criterion of "adequacy versus complexity" for engineering simulation under limited computing resources. The complexity of the neuron postulated by biologists may be explained by an evolutionary necessity for living systems to sacrifice efficiency for the sake of reliability. A metaphor may clarify this. Imagine that modern chips partially preserved within themselves the previous stages of their evolution: "tube ancestors", "mechanical units", and so on. Obviously the complexity of such a "chip" would increase enormously, and to sort out and understand anything in such a Babel would be practically impossible.

In my opinion, one of the main shortcomings of NC and NN models is the representation of the neuron as a threshold unit computing the algebraic sum of its input signals. Such a conversion is obviously irreversible, which violates system principles: the output no longer tells us which inputs produced it.
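To make the objection concrete, here is a minimal sketch (mine, not from the text) of the classical threshold unit. Three quite different input patterns yield exactly the same output, so the original inputs cannot be recovered from it:

```python
# A classical threshold unit: the weighted sum plus threshold collapses
# many distinct input vectors into one output bit, so the mapping is
# irreversible - exactly the property criticized above.

def threshold_neuron(inputs, weights, threshold):
    """Fires (returns 1) iff the weighted sum of inputs reaches the threshold."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

w = [1.0, 1.0, 1.0]
# Three very different input patterns, one indistinguishable output:
for x in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
    print(x, "->", threshold_neuron(x, w, 1.0))   # all print 1
```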

The transition to a vector model, that is, to a space-time transformation, removes the repeatedly mentioned difficulty of how the central nervous system (CNS) correlates processes in various spaces (mechanical, visual, auditory, and so on). Signals meeting in the CNS constantly carry the imprint of the previous phases of conversion; integrated in the final neural structures, they remain convertible and interact, within a single framework, across all the feature spaces in which the given model functions.

One more advantage of a vector model is its natural simplicity in handling time series. Each image (neuron) that is taken into consideration by a neuron of the next level of the hierarchy also carries a relative temporal component, which allows it to be easily aligned on a common time axis with other images.
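The text does not define the vector neuron formally; the following is one speculative reading (all names and structure here are my assumptions). The unit emits a structured event that keeps both the identities of the lower-level images and their relative timing, so nothing is irreversibly summed away and images can later be aligned on a common time axis:

```python
# A speculative sketch of a "vector" neuron: instead of a scalar sum it
# emits a structured event recording which lower-level images fired and
# their relative timing. Illustrative assumption, not the paper's model.

from dataclasses import dataclass

@dataclass
class ImageEvent:
    image_id: int     # which image (neuron) of the lower level fired
    t_rel: float      # firing time relative to the start of the episode

def vector_neuron(events, expected_ids):
    """Fires iff all expected lower-level images occurred; the output event
    keeps the constituent timing, so the parts remain recoverable."""
    ids = {e.image_id for e in events}
    if set(expected_ids) <= ids:
        return ImageEvent(
            image_id=hash(tuple(sorted(expected_ids))) % 10**6,
            t_rel=min(e.t_rel for e in events),
        )
    return None

out = vector_neuron([ImageEvent(1, 0.10), ImageEvent(2, 0.12)], [1, 2])
print(out)   # a new image carrying its own relative temporal component
```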

Another essential shortcoming of the existing conceptions of NC and NN, apparently inherited from neurophysiology, is the assumption of primary all-to-all links between neurons, with randomly assigned weights that are subsequently tuned to the concrete task.

It is known that as information arrives in the brain, dendritic links sprout, especially at the initial stages of the organism's entry into its environment (for humans, the first two months of life). On the other hand, information passing through the genetically specified receptive part of the CNS already creates structures that are quite capable of forming new links reflecting actual processes in the external environment. In this way the time of learning, or so-called training, is reduced to a minimum, and the number of links per neuron is not the very large N but a small number, for example k = 5-7.
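A minimal sketch of the contrast in wiring cost (numbers illustrative, random targets standing in for links grown toward co-active inputs): all-to-all connectivity stores N^2 links, while a grown fan-in of k = 5-7 stores only k*N.

```python
# Wiring cost: all-to-all random connectivity versus a small fixed fan-in.

import random

N = 1000   # neurons
k = 6      # fan-in in the k = 5-7 range suggested above

# All-to-all: N * N links must be stored and tuned.
dense_links = N * N

# Sparse growth: each neuron keeps only k links (chosen at random here
# purely for illustration; in the scheme above they grow toward inputs).
sparse = [random.sample(range(N), k) for _ in range(N)]

print("dense links:", dense_links)                      # 1000000
print("sparse links:", sum(len(l) for l in sparse))     # 6000
```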

One more characteristic whose significance has not yet been noted in NC and NN models is the compression of images. We consider that a real system capable of developmental self-organization must process information streams that, in the number of images (processes), exceed many times over the finite number of neurons displaying them (i.e., bistable units, finite automata, neuron-like units, etc.). In real life such a system receives an information stream of constant power. How to cope with it and process it, given that the number of neurons is limited and far smaller than the number of images to be represented: that is the task posed to evolution (which, by the way, it has solved rather successfully).

We see one possible solution to this task in exploiting a property inherent in tree structures. Having k inputs and one output, a neuron represents the convolution of k images into one. A hierarchy containing m levels of neuron-like units yields a compression that grows as the power function k^m.
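A worked instance of this power law (example numbers mine):

```python
# Capacity of a tree of neuron-like units with fan-in k and m levels:
# each top-level neuron folds up to k**m elementary images into one.
k, m = 6, 5                   # illustrative values, k within the 5-7 range
print(k ** m)                 # 7776 elementary images per top-level image
```

Incidentally, with k = 6 and m = 8 the same law already gives about 1.7 million images, the order of the "more than one million concepts" per person mentioned below.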

Compression here is not simply data compression as an end in itself; it is, first of all, the structurization of information (data) in the course of solving a task.

Then, in constructing an NN, one need not impose a rigid a priori structure; instead, given a sufficient resource of building units (neurons), one can start the process of its generation under the control of the current information streams. These streams will determine the number of neurons in a layer, the number of layers, and the number of hierarchies built from them.
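A hedged sketch of this constructive scheme (every detail here is my simplification, not the author's algorithm): a pool of free units is consumed as the input stream brings combinations of images that no existing unit yet represents, so the stream itself dictates the layer's size.

```python
# Growing a layer under the control of the input stream: recruit a free
# unit only when a genuinely new combination of images arrives. The
# novelty test (exact set match) is a deliberate simplification.

def grow_layer(input_stream, pool_size):
    layer = {}                        # frozenset of input ids -> neuron index
    free_units = pool_size
    for active_set in input_stream:
        key = frozenset(active_set)
        if key not in layer and free_units > 0:
            layer[key] = len(layer)   # recruit a free unit for the new image
            free_units -= 1
    return layer

stream = [[1, 2], [3, 4], [1, 2], [2, 5]]
print(grow_layer(stream, pool_size=100))   # 3 units recruited, one reused
```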

Incidentally, such a model can explain why the visual area sits at the back of the head while the auditory area lies in the temporal region of the brain. Structurally more complex information requires a greater number of maximally independent processing stages, and accordingly a greater distance from the preceding stages, which determines their linear extent. But for now this is no more than a hypothesis.

On "round desktop" discussion unexpectedly has inflamed, how many it is necessary of neurons for solution of the practical tasks. One participants asserted, that 2-3 it is quite enough of tens, and they cannot present the task, when a lot is necessary. Others, on the contrary, stood that without thousands or millions neurons they at all can not will be risen to the tasks.

Obviously, tasks of different classes were at issue here. In an organism, one class can be presented as the tasks of elementary perception, where a dozen or so units (neurons) really are enough to tune, in the given feature space, to the required image.

The second class comprises the tasks of the CNS. Here not only do the spaces reflecting the problem domain become huge and complex, but the neuron itself can no longer be a simple algebraic summator. It must already reflect a logically complex sequence of activity arriving from the periphery.

The neuron as a processing unit must evolve from the simple summator to a complex, informationally reversible "vector associator". Only after that does the further evolution of information systems, as of biological species, from the conditioned-reflex level to the reasoning level, become possible.

The answer to the question of the necessary number of neurons follows obviously from this. For constructing the simplest models (from an insect up to a pigeon sorting parts on a conveyor), a limited number of neurons and the already known NN models suffice. In attempts to simulate human mentality (the solution of complex tasks), it is clear that tens of neurons will no longer do, since the number of concepts (images) alone in a person exceeds one million.

From this follows the simplest quantitative classification of tasks for NNs. Simple independent perception: tens of neurons, and accordingly one class of NN. Integral processing of perceptual information from all sources: millions and billions of images, i.e. the CNS, and another class of NN.

The next question: "a connectionist or an oscillatory model?" Why "or"? Connectionism is effective in statics, as the basis of the mechanism of memory; the oscillatory model is more effective in dynamics, for example in ranking images by associative closeness. Overlapping these two models allows one to build a simple and clear construction that already solves a whole set of tasks.
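As an illustration of how the two models can coexist, here is a minimal sketch assuming Kuramoto-style phase dynamics (my choice of oscillator model; the text does not specify one). Static weights store which units belong to one image, while the phase dynamics pull associated units into synchrony, so associative closeness can be read off as phase proximity:

```python
# Static weights = connectionist memory; phase coupling = oscillatory
# dynamics. Units linked by strong weights synchronize their phases.

import math

def step_phases(phases, weights, dt=0.05):
    n = len(phases)
    return [phases[i] + dt * sum(weights[i][j] * math.sin(phases[j] - phases[i])
                                 for j in range(n))
            for i in range(n)]

# Units 0 and 1 strongly linked (one stored image); unit 2 unrelated.
W = [[0, 2, 0],
     [2, 0, 0],
     [0, 0, 0]]
phases = [0.0, 2.0, 4.0]
for _ in range(200):
    phases = step_phases(phases, W)
print([round(p, 2) for p in phases])   # 0 and 1 converge; 2 stays apart
```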

One more problem that NC should solve is the automatic structurization of a large number of input features, a procedure inaccessible to logical comprehension by a person on a real-time scale. In the process of structurization a mutual coordination takes place, and the activated receptive features are mapped into one concrete image (neuron). The resulting space of images is minimal in the number of images, yet functionally covers the pragmatics of the probable operations of the concrete information system.

The introduction of a resource minimum, in terms of the number of functional units (neurons) with which the information system displays its problem domain, is already a sufficient condition for constructing an effective self-training function.

In general, with the development of NC the very paradigm of conducting an experiment will change. Formerly the researcher, analyzing the subject of study and carrying out an experiment, directed all efforts at selecting a limited number of the most informative parameters and at raising the accuracy of the instrumentation. From our position the basic limitation of this approach can be noted: raising selectivity (the signal-to-noise ratio) complicates the instruments and raises their cost. Besides, 5-10 parameters (usually, in fact, 1-3) do not reflect the dynamic characteristics of the objects under study.

In the future, NC opens the possibility of operating with a practically unlimited number of parameters whose signal-to-noise ratio barely exceeds unity. The low cost of sensors and the rather effective self-training of neural networks (NNs) will make it possible to automate the construction of a model of the area under study. The experimenter's task is then reduced to choosing the preferable solution from the set presented by the system. This is a new, more effective stage in experimental research.
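A small numerical check of this premise (standard statistics, not specific to NC): with independent noise, pooling n sensors whose individual signal-to-noise ratio barely exceeds one improves the effective ratio by a factor of sqrt(n).

```python
# Many cheap noisy sensors: per-sensor SNR barely above 1, but the
# pooled estimate's error shrinks as 1/sqrt(n). Values illustrative.

import random, statistics

signal = 1.0
noise_sigma = 0.9          # per-sensor SNR only slightly above unity
n_sensors = 400

readings = [signal + random.gauss(0, noise_sigma) for _ in range(n_sensors)]
estimate = statistics.mean(readings)
print(f"single-sensor SNR ~ {signal / noise_sigma:.2f}")
print(f"pooled estimate: {estimate:.3f} "
      f"(expected error ~ {noise_sigma / n_sensors ** 0.5:.3f})")
```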

Today the world is literally choking under an avalanche of information, not of knowledge(!). These are super-large data flows resistant to formal structurization, characterized by incompleteness, uniqueness, and extraordinary dynamism.

Another important characteristic of today's world is the existence of areas where experimentation by trial and error is prohibited, since each error can lead to irreversible global consequences. On the other hand, correct and coordinated solutions allow repeated improvement of the characteristics of all processes vital for mankind.

Therefore only the path of "anticipatory simulation" remains. The construction of classical formal models of such objects is impossible owing to the absence of the necessary theories and apparatus, and also owing to the psycho-physiological limitations of both individuals and teams of experts. The sole tool capable of satisfactorily resolving this whole spectrum of characteristic problems can become NC and, in particular, the neurosemantic structures (NSS) developed at ICS RAS [1,2].

The concept of the NSS is based on the capacity of natural languages for self-description and on the properties of homogeneous computing environments [3]. The NSS possesses the capability of self-organization [2], which in practice will allow the problems designated above to be solved, including the structurization and automation of processing ("semantization") of the growing information flows in all areas of human activity. The quality of NSS self-organization is characterized by the rise in the specific information capacity of the neuron-like units displaying the information streams. The mechanisms of self-organization are based on hierarchical-network and neuro-linguistic conversions [2]. All the technically important characteristics of the NSS as an information system (compression, reliability, access time) tend to improve as the information content it displays grows.

If the NSS is directed at processing super-large unstructured streams of information, the productivity of research activity can at once be increased 10-20 times (according to scientometric data, 90-95% of scientific research is duplicated). Similar progress can be expected in all areas of human activity, since operations with information already occupy more than 50-80% of the working population in the economically advanced countries, and this tendency of growth of the information sector of the economy persists.

Computer hardware is already capable of storing and processing the information accumulated by mankind over its entire history. It remains only to organize this global process, and here there is no doing without NC.

No boundaries for NC are yet visible. If we have already deductively guessed (or shall yet guess) a mathematical model of the neuron and of self-organizing structures built from it, then, having moved from the "biological element base" (100 Hz and millions of images) to gigahertz and tera-images, working moreover continuously 24 hours a day, we shall obtain something similar to a Superbrain, or Superconsciousness (a consciousness with its own "EGO").
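As a rough order-of-magnitude check of this extrapolation (an estimate of mine, not a figure from the text): the move from biological elements at about 100 Hz to electronic ones at about 1 GHz alone gives a factor of 10^9 / 10^2 = 10^7 in raw switching rate, before counting the gains from uninterrupted round-the-clock operation and a vastly larger image capacity.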

It will be a new branch of the evolution of Reason.

The person and the Superbrain will be mutually complementary, solving the common task of the knowledge of the Universe.

From the Superbrain a person can receive new, qualitatively assembled and more effective techniques of thinking (just as aborigines standing at the level of primitive societies, by adopting the Europeans' culture of thinking, rise almost instantly to their level of information processing).

For the Superbrain, people are its billions of eyes, its billions of sensations.

The integration of the Person and the Superbrain means new discoveries whose effectiveness cannot even be anticipated today. It means the acceleration of scientific and technological progress as a power function: "every day a doubling of knowledge" (and not merely of the byte count). The horizon of our knowledge will extend beyond the Metagalaxy (the visible part of the Universe, some 15-20 billion light years). And further on, no longer knowledge, but Creation.

"A Fantasy, fiction... ", - somebody will tell from the readers, and He will be right, if We shall not construct self-organizing information structures, those that the nature already has found and has constructed. We should them construct, being guided by already by that nature has given us - Mind.


E-mail: body@ipu.rssi.ru

References:

1. Bodyakin V. Information hierarchical-network structures for the representation of knowledge in information systems. In: Problem-Oriented Programs (Models, Interfaces, Learning). Moscow: ICS RAS, 1990, pp. 24-36.

2. Bodyakin V. Where Are You Going, Man? Foundations of a Science of Evolution (an Information Approach). Moscow: SINTEG, 1998, 332 pp., 61 fig. (RFBR grant No. 97-06-87017).

3. Evreinov E.V., Prangishvili I.V. Digital Automata with a Customizable Structure. Moscow, 1974.
