
Saturday, February 16, 2019

DEEP MIND





IT IS THE STRUCTURE and the TYPE OF COMPUTER


In an updated paper in Science on the current possibilities of creating a functional artificial brain similar to the human brain, the weaknesses and strengths of such efforts are laid out. The analysis begins with the question "Can machines think?", posed and answered by Alan Turing in 1950, who alluded to: a) A mathematical objection, formulated by Kurt Gödel and by Church, Kleene, Rosser and Turing with reference to digital computers: even with unlimited capacity, such machines, faced with certain questions, would either fail to answer or answer incorrectly. Penrose suggested (1989, 1994) that certain molecular structures of the human brain could adopt states of quantum superposition and entanglement (all the more so if the charges transiting human neural circuits do so in ionic form), paving the way for the future use of quantum computers to match human brain performance. b) Returning to Turing: he closed his argument by postulating, already in 1950, that for machines to begin to think like a human brain, the important thing was to imitate complex biological neural computation, taking human neural circuits as the guide. That imitation has by now made possible systems that mimic the cerebral cortex (deep networks), built from successive layers of neuron-like elements connected by artificial synapses. These systems handle speech recognition, complex games, text translation, computer vision, classification and segmentation of objects, image captioning (producing a short verbal description of an image), visual question answering and human-like communication about the content of an image, as well as non-visual tasks: analyzing humor and sarcasm, grasping intuitive aspects of physical and social situations, serving as personal assistants, assisting medical diagnosis, and driving cars autonomously.

Despite this, there are problems to be solved:

I) Improving how learning adjusts the synapses so that training at the inputs produces the desired output patterns (a minimal sketch of such a layered network and its weight adjustment appears at the end of this post).

II) Achieving learning with deep artificial neural networks that goes beyond simple memorization, producing sensible outputs that were not explicitly programmed during the learning process.

III) In this perspective, a highlight is the incorporation into AI (artificial intelligence), alongside deep artificial neural networks, of so-called reinforcement learning (RL): mapping situations to actions so as to maximize a reward or reinforcement signal, not by being told which actions to take, but by discovering the best option through trial, error and reward, and modifying behavior accordingly (also sketched at the end of this post). RL models combined with AI algorithms are currently applied to video games, Go and chess, in the latter reaching world-champion level with only 4 hours of training.

IV) However, the most notable differences between biological circuits and artificial neural network systems are structural: biological neurons are complex and diverse in morphology, physiology and neurochemistry. The inputs to excitatory pyramidal neurons are distributed over highly complex dendritic trees. Cortical inhibitory neurons come in types with distinct functions. None of this heterogeneity is included in artificial neural networks. Biological cortical circuits are also more complex than artificial neural network models, including lateral connectivity between neurons, local as well as longer-range connections, and connections up and down the hierarchy of cortical regions.
V) It is expected that artificial neural networks will promote genuinely human-like understanding, addressing broad aspects of cognition and artificial general intelligence (AGI). Meanwhile, these techniques continue to be perfected under the guidance of neuroscience.

VI) There are other, functional differences between biological and artificial systems: A) Current AI models lean heavily on the empirical side, using simple and uniform artificial neural network structures and very large training sets. Biological systems carry out tasks with limited training, relying on pre-existing network structures already encoded in their circuits before learning; using part of an elaborate set of innate mechanisms with sophisticated computational capabilities, insects, fish and pigeons perform complex navigation tasks. B) This is why children develop complex cognitive and perceptual abilities with little training in the first months of life: they recognize complex instruments such as their own hands, follow people with their gaze, and distinguish visually whether the features of certain animals are dangerous or not, while developing an incipient understanding of physical and social interactions through unsupervised learning, thanks to innate cognitive systems shaped by evolution that facilitate the acquisition of meaningful concepts and skills. Recent models of visual learning in infancy suggest that meaningful, complex concepts are neither innate nor taught to the child directly; rather, innate proto-concepts provide internal teaching signals that guide the learning system along paths leading to the progressive acquisition and organization of complex concepts with little or no explicit training (a toy illustration of such an internal teacher appears at the end of this post). Sometimes a particular pattern of image motion provides an internal signal for recognizing the hands that manipulate objects, or for guiding the learning system toward the direction of another person's gaze. Innate structures, implemented in cortical regions with specified connectivity, would initially signal specific input errors. Perhaps in the future such pre-existing structures could be coupled to artificial neural models to simulate human learning. Imagine computational learning methods that start from proto-concepts and structures inserted into humans or robots, which then learn to become familiar with unknown environments quickly, efficiently and flexibly, in a way very different from current learning procedures.

Summing up: I) Following Shimon Ullman, we believe that every future machine or robot meant to possess a human-like brain should have inserted in it, from the start, a basic code -not to obey commands- but to work out what it should be (or do) in the face of each new situation or environment, using analogies, logic or emergent thinking. II) Today's hyper-supercomputers have no future for this purpose because: a') They rely on transmission from artificial neuron to artificial neuron by electrons circulating through metallic media. a'') Human neurons send messages to other neurons using ions, which are better suited to quantum models (entanglement and others), allowing nearly simultaneous transmission in all possible planes, including feedback. Although current computers transmit information in several planes, they lack totalizing organizer artificial neurons, which to be functional should be spherical or pyramidal.
b') Although about 20% of the body's total blood supply circulates through the human brain at any given moment, the brain does not overheat, because the membranes of its neurons are covered by a fat-based resistor that lets them capture electrons from the environment and convert them into ions, allowing the message to be transmitted to millions of other neurons. b'') This resistor model would have to be copied and included in a quantum computer. c') The problem of the reduced space available to an artificial brain would be solved in quantum computers by using artificial neurons of spherical or pyramidal shape, capable (or almost) of dealing with thousands or millions of other artificial neurons.
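
The sketches referred to above follow. First, a minimal illustration of the layered structure described in point b) and the synaptic adjustment mentioned in problem I): successive layers of neuron-like units joined by artificial synapses (weight matrices), whose values are adjusted by training so that given inputs produce the desired output pattern. The task (XOR), the network size and the learning rate are arbitrary choices for illustration, not anything taken from the Science paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR, which a single layer of synapses cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of "artificial synapses" (weights), initialized at random.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: each layer sums its weighted inputs and applies a nonlinearity.
    h = sigmoid(X @ W1)      # hidden-layer activity
    out = sigmoid(h @ W2)    # output-layer activity

    # Error between the produced and the desired output pattern.
    err = out - y

    # Backward pass: adjust the synapses by gradient descent on the squared error.
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(np.round(out, 2))   # approaches [[0], [1], [1], [0]]

The same adjust-the-synapses-until-the-output-matches loop, scaled up enormously, is what the deep networks mentioned above do.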
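
Second, an equally minimal sketch of the reinforcement-learning idea from point III): the agent is never told which action is correct; it tries actions, observes rewards, and gradually maps each situation (state) to the action with the highest expected reward. The toy corridor environment and the parameter values are invented for illustration; this is plain tabular Q-learning, not the algorithms used for Go or chess.

import random

N_STATES = 5                # a small corridor: states 0..4, reward at the right end
ACTIONS = [-1, +1]          # step left or step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # expected reward of each action in each state

alpha, gamma, epsilon = 0.1, 0.9, 0.2       # learning rate, discount, exploration rate

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Trial and error: mostly take the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = Q[s].index(max(Q[s]))
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0   # reward only at the goal

        # Update the estimate of how good action a is in state s.
        Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# After training, the greedy choice in every non-goal state is "step right" (index 1).
print([q.index(max(q)) for q in Q[:-1]])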
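
Finally, a deliberately crude and hypothetical illustration of the "proto-concept as internal teacher" idea from point VI): an innate, hard-wired heuristic (here, "a patch that moves by itself is probably a hand") supplies internal labels, and an ordinary learner trained only on those labels ends up recognizing the concept from appearance alone, without any external supervision. The data, the features and the "mover" heuristic are all invented; the point is only the flow of the internal teaching signal.

import numpy as np

rng = np.random.default_rng(1)

# Fake experience: 300 image patches with 8 appearance features each.
# "Hand-like" patches also happen to move by themselves (the innate cue).
moves_by_itself = rng.random(300) < 0.5
appearance = rng.normal(loc=np.where(moves_by_itself, 1.0, -1.0)[:, None], size=(300, 8))

# Innate proto-concept acting as the internal teacher: label anything that
# moves by itself as "hand"; no external supervision is involved.
pseudo_labels = moves_by_itself.astype(float)

# An ordinary linear learner trained only on that internal signal.
A = np.c_[appearance, np.ones(len(appearance))]          # features plus a bias column
w, *_ = np.linalg.lstsq(A, pseudo_labels * 2 - 1, rcond=None)
predicted = (A @ w) > 0

print("agreement with the internal teacher:", (predicted == moves_by_itself).mean())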
