
Tuesday, May 07, 2019

ALTERNATIVES TO OXYGEN UPTAKE, TRANSPORT AND DELIVERY






TEMPORARY REPLACEMENT OF HEMOGLOBIN, RED BLOOD CELLS AND OXYGEN UPTAKE

There are many patients in the world with pulmonary fibrosis (permanent respiratory failure) or severe aplastic anemia (failure of blood production), whose treatments are based on oxygen administration and blood transfusions. Although lung and bone marrow transplants have been introduced in recent decades, problems persist (rejection of the transplanted tissue due to incompatibility, lack of donors, and so on), and broader medical visions are required to solve these diseases. Perhaps by turning our gaze toward nature we can find successful answers, such as the temporary (or permanent) substitution of human blood and/or the uptake of environmental oxygen (O2) by mechanisms alternative to those of the human lung alveoli. The family Channichthyidae includes vertebrate fish (icefish), native to Antarctica, with black fins, no scales, transparent bones and colorless blood lacking hemoglobin (Hb) and red blood cells; they obtain O2 from the cold Antarctic waters by passive diffusion across the gills into their blood plasma. Blood does not have to be red, nor does oxygen transport always have to be linked to the heme fraction of red blood cells. In most environments the icefish mutation(s) would have been fatal, but not in frigid Antarctic waters, where far more O2 is dissolved than in warm waters. Regarding the emergence of Hb, some scientists believe it goes back to the first cells, which tested a series of pigments before settling on it. According to Nature Ecology & Evolution, icefish genomes developed adaptations: extra genes to produce antifreeze blood proteins, and more enzymes to protect tissues from the side effects of O2 in the blood. Only vertebrates have red blood cells and Hb, given the body's extreme natural affinity for O2; invertebrates use other metalloprotein pigments in their blood. Insects and other arthropods, as well as many mollusks, use hemocyanin, a bluish copper-containing pigment; other invertebrates use greenish chlorocruorin. The first cells were driven to mobilize electrons (oxidation-reduction) inside and outside their boundaries as part of their metabolism, generating ring-shaped molecules (porphyrins) that retained an iron or copper atom with great affinity for O2. Hb is the interconnected product of four globin proteins, each holding a heme. According to Mark Siddall (American Museum of Natural History), the first cells breathed by simple diffusion. By then Hb was ready and, with each O2 molecule trapped by the pigment, the following ones were captured more easily. Hb turned out to be very efficient at capturing O2 from the open air and from the lungs, gradually releasing it to tissues deprived of it. The hemocyanin used by invertebrates was less efficient than Hb at capturing O2 because its subunits do not work as cooperatively as those of Hb. However, human Hb does not behave efficiently when O2 is low, and its effectiveness decreases as temperature falls; conversely, for octopuses that live near the floor of the cold ocean, hemocyanin is better. In insects the equivalent of blood is hemolymph, a clear liquid containing small amounts of hemocyanin that can store O2 for later use. Although another pigment, hemerythrin, has only a quarter of the O2-carrying capacity of Hb, it serves the worms that use it well. And although these invertebrate pigments do not have a high affinity for O2, they do not need red blood cells to carry it.
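The cooperative binding described above can be made concrete with the Hill equation, a standard approximation of hemoglobin's oxygen-saturation curve. The sketch below is a minimal illustration added here, not part of the original post; the parameter values (Hill coefficient around 2.8 and P50 around 26 mmHg for human Hb) are typical textbook figures, and the comparison with a hypothetical non-cooperative pigment (Hill coefficient 1) shows why cooperativity makes both uptake in the lungs and release in the tissues more efficient.

```python
# Minimal sketch (not from the post): Hill-equation saturation curves comparing a
# cooperative pigment like hemoglobin with a hypothetical non-cooperative one.
def hill_saturation(po2_mmhg, p50_mmhg, n_hill):
    """Fraction of O2-binding sites occupied at a given O2 partial pressure."""
    return po2_mmhg**n_hill / (p50_mmhg**n_hill + po2_mmhg**n_hill)

# Typical textbook values: human Hb has P50 ~ 26 mmHg and a Hill coefficient ~ 2.8;
# a pigment with no subunit cooperation behaves like n = 1.
for po2 in (20, 40, 100):          # roughly tissue, venous and alveolar O2 pressures (mmHg)
    coop = hill_saturation(po2, 26, 2.8)
    flat = hill_saturation(po2, 26, 1.0)
    print(f"PO2 {po2:3d} mmHg: cooperative {coop:.2f}, non-cooperative {flat:.2f}")
```

Under these assumed values the cooperative curve is near saturation at lung-like pressures yet unloads much more O2 at tissue-like pressures than the flat curve, which is the efficiency advantage the post attributes to Hb over hemocyanin.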
On the other hand, because hemocyanin, hemerythrin and other pigments are larger molecules, they have to polymerize, keeping the metal atoms that bind O2 away from casual metal interactions. Hb is small and its heme highly reactive and toxic, so the liver produces haptoglobin to clear Hb released from broken human red blood cells. Hb also has a high affinity for nitric oxide, so an excess of free Hb can scavenge blood nitric oxide, potentially causing hypertension and reduced blood flow to body organs. Naked heme molecules attack membrane lipids and damage other structures, and globin proteins separated from heme can clog the filtration system of the kidneys. As seen, the evolution of red blood cells was optimized for better distribution of O2: ejecting the nucleus and other organelles, and degrading heme into less toxic greenish compounds (biliverdin). By packaging Hb inside red blood cells, its toxicity was avoided. Hb is not the ideal molecule to transport O2 in all circumstances, so it may be possible to avoid the use of red blood cells and Hb, replacing them with pigments like those used by present-day invertebrates. With current technology, perhaps it will soon be possible to use artificial gills attached to the most capillarized parts of the human skin, temporarily taking up O2 through it by passive diffusion while simultaneously expelling CO2.


Wednesday, April 10, 2019

AUTISM



AUTISM
By the way, autistic people have another type of intelligence (visual, verbal), intensely studied at present. Einstein (the thought experiment of the elevator), Schrödinger (the thought experiment of the live-dead cat), Tesla (who designed machines mentally and then built them as soon as possible), Bobby Fischer (who moved imaginary chess pieces on an ad hoc board painted on the ceiling of his bedroom) and other prominent men of science are remembered for their mental-visual experiments. Einstein's story is famous: a fictional visual journey riding a ray of light, trying to understand what was happening around him as he progressed. Were some of the geniuses mentioned autistic? Maybe, maybe not; but that they had almost autistic visual minds, there is no doubt. Much progress has been made in the knowledge of autism: from considering it a mental disability to valuing it as another type of mind. Currently there is accelerated progress toward a visual-virtual kind of learning, one capable of reducing teaching and learning to seconds or a few attempts. It is known that autistic people have highly developed functionality in the occipital and prefrontal brain lobes, associated with peculiar processes of reasoning and information processing, and it is expected that some of these (especially the visual ones) will be incorporated into the toolkit of future generations of learners with standard (neurotypical) minds. Teaching and learning will take a big turn of the screw based on autistic visual principles, because their procedures are more accurate and concrete. Not all autistic people exercise their intelligence in the same domains (visual, language), and each autistic person has a different intelligence with different brain bases. Leo Kanner, in the 1940s, described children who, beyond their apparent disinterest in their human environment, presented significant delay in oral language: they began to speak using a peculiar language of seemingly non-communicative repetitions, with great verbal memory (remembering bus routes, reciting historical and musical facts with ease). Hans Asperger, for his part, described children with understandable initial language, normal or above-average intelligence, intense interest in a particular area and original intelligence above the ordinary, albeit with a certain inability to adapt to their environment, to the point that some spoke little, not at all, or atypically, while others were totally dependent on their environment to survive. However, almost all performed cognitive tasks at a high level in a particular area: knowledge of letters and numbers from 2-3 years of age, or solving, from age 3, puzzles usually solved by 5-year-olds. It is known that in some autistic people of the Kanner or Asperger type there are no genetic anomalies other than those observed in the general population; however, some genetic anomalies have been identified in fraternal siblings with autism. Syndromic autism is also known (deletions in multiple parts of the genome, affecting about 1 in 10 people with autism). Autistic people without delay in initial oral language will be excellent in verbal reasoning, vocabulary and general verbal knowledge. Beyond these differences, the activated occipital lobe in autistic people is apt to develop particular expertise in certain fields to which they dedicate considerable time and energy.
When brain activity is recorded in neurotypical volunteers performing tasks, the activity is distributed across a vast neural network, mainly in the parietal and occipital lobes; both autistic and non-autistic people activate the same brain network of reasoning. When autistic people are compared with non-autistic people while they solve reasoning problems, autistic people show a higher level of activity in the occipital lobe and less in the prefrontal cortex. The most active areas in autistic people are visual ones, associated with the development, maintenance and manipulation of mental images; the most active areas in non-autistic people are those associated with working and verbal memory and the generation of hypotheses. It is assumed that the modes of reasoning differ between autistic and non-autistic people, with visual perception being more closely related to reasoning and intelligence in autistic people. On the other hand, we know that complex reasoning and the capacity for abstraction depend on good communication between the brain regions associated with reasoning, and that the complexity of reasoning is associated with greater activity in those areas. In autistic people there seems to be less communication between the different regions of reasoning, and less modulation of this communication by the complexity of the reasoning. However, in autistic people communication within the occipital cortex is more active during reasoning, and communication between the occipital cortex and other regions increases as the complexity of the reasoning increases. These findings confirm the increased role of visual perception in fluid reasoning in people with autism and underline the need for well-adapted tests to measure autistic intelligence. It would not be appropriate, for instance, to present sequences deprived of visual support to evaluate their intellectual performance; people with autism are therefore disadvantaged by the type of equipment and tools used to assess their intelligence. When we present them with open oral questions, without visual aids and with no answer choices to organize, we underestimate the intellectual capacity of people with autism; when we ask complex and abstract written or graphic questions with answer choices to guide thought, much higher reasoning skills come to light. In a test like Raven's matrices, which measures fluid intelligence (reasoning, thinking logically, inferring solutions to new problems), or in similar tasks, autistic people are good or excellent. It is worth adopting this principle to promote learning in autistic people: presenting information in a comprehensive and organized way, allowing them to organize, manipulate and classify it, makes it easier to learn because it corresponds more closely to their spontaneous learning. This agrees with the information collected in many cases of autistic children who learned to read, work on computers or play the piano by themselves, using abundant material and identifying patterns and the underlying structure of arrangements of letters, numbers or notes. Experimentally, children with autism learn to distinguish two groups of stimuli better: a) if they are shown all the stimuli at the same time, observing differences and similarities; b) learning is less significant if they are presented with one stimulus at a time (the previous, classical approach for autistic children). Presenting only one item at a time deprives autistic people of the information they need to learn optimally.



Sunday, March 31, 2019

PLASTICITY OF HUMAN BODY TO EXTREME CONDITIONS



IS THERE A HUMAN BODY PLASTICITY? WHAT ARE ITS LIMITS?
According to two notes published in La Recherche, a little more than 15 French scientists led by Samuel Vergès (Hypoxia Physiopathology Laboratory, INSERM, Université Grenoble Alpes) are now studying, in La Rinconada, Peru (5100-5300 masl: meters above sea level, a city of 50,000 inhabitants dedicated to mining), the effects of hypoxia on sleep, physical exercise, genetics, hematology and cardiovascular adaptation in this population, as well as certain chronic health problems that affect 25% of the permanent residents of La Rinconada. It is known that on Mont Blanc (4807 masl) the oxygen content of the inspired air is scarcely 50% of that existing at sea level, while at the summit of Everest it is barely 28%. It is also known that after a few weeks of adaptation, most newcomers to hypoxic environments increase their production of red blood cells to better transport oxygen, especially to muscle and brain tissue, which are very sensitive to hypoxia. Thanks to these adaptations, more than 250 million people in the world live above 2,500 masl, thousands of permanent residents inhabit cities above 4,000 masl in South America, Tibet and the Himalayas, and hundreds of hikers ascend every year to 4,000-5,000 masl without oxygen cylinders. To adapt to hypoxia, the human body has chemoreceptors (nerve cells) sensitive to changes in blood oxygenation in different body parts, which in case of hypoxia induce the respiratory and cardiovascular systems to increase their respiratory and cardiac rates, partially compensating for the low oxygen pressure, improving arterial oxygenation and increasing blood flow to various organs. 1) So, does the human body as a whole have the capacity to adapt to any extreme environment? There are reports of remarkable bodily changes in environments with prolonged weightlessness. 2) What will happen when we inhabit Mars, Ganymede, the Moon, etc. for several generations? Will we undergo bodily adaptations or genetic adjustments (Bigham A.W. 2018)? 50% of sea-level inhabitants who travel to hypoxic environments above 4,000 masl and ascend quickly suffer from acute mountain sickness: headaches, nausea, fatigue and tinnitus that can be disabling but usually resolve spontaneously or with rest, although in some cases these effects can induce pulmonary or cerebral edema, which can lead to death. There are, however, individual differences: some ascend Everest without oxygen cylinders, while others develop pulmonary edema at only 3,500 masl. It is argued that some Sherpas and Tibetans have developed genetic modifications to adapt to hypoxia, and that 5 to 20% of permanent residents at high altitude (above 3,500 masl) suffer from chronic mountain sickness, or Monge's disease (exacerbated production of red blood cells, which increases blood viscosity and cardiac overload, causing serious cardiovascular events, persistent headaches, neurological disorders and alterations in blood flow). With these new studies, Samuel Vergès hopes to answer: 3) Why do some, and not all, residents of high altitude suffer from chronic mountain sickness? To answer this question, Vergès now has an excellent methodological design, advanced techniques, new ideas and an excellent spirit. 4) How does the human body adapt to high-altitude hypoxia?
According to Vergès, outlining answers to this question will make it possible to develop strategies for better performance in elite athletes, to better understand certain extreme lung diseases and even to prolong life. There will not be a single answer, because it will depend in part on the interindividual peculiarities of each inhabitant of La Rinconada. According to Vergès, although it is considered almost impossible to live permanently above 5,000 masl, the residents of La Rinconada have developed physiological adaptations that allow them to tolerate hypoxia in a more or less acceptable way. For Vergès, high-altitude hypoxia is a challenge for humans, both for residents from sea level who arrive for the first time at great altitude and for permanent residents who have developed adaptations over generations. Preliminary studies by the French team on 800 residents of La Rinconada identified 25% suffering from chronic mountain sickness. 5) The residents under study have been divided into two groups: a) those who suffer the effects of chronic mountain sickness, with a symptom score greater than 10, and b) those with few symptoms, with different safety profiles at altitude. 6) Fifty residents, divided into two groups of 25, have given samples for genetic, epigenetic, biological and hematological studies; their vascular, cardiac, respiratory and cerebral states have been evaluated, including a sleep assessment by polygraphy and a stress test. Most of those with chronic mountain sickness show hematocrit values greater than 80%, their blood being very viscous. 7) With these findings Vergès suspects that the classical hypothesis (which posits a direct link between an excessively high hematocrit and the symptoms of chronic mountain disease) may be outdated. The first ultrasound results in many of these inhabitants show a large dilation of the brain and arm arteries, which maintains adequate blood flow despite the high blood viscosity, although in the long run this adaptation may impair the ability of these vessels to dilate further when some organs need more oxygen and nutrients. For now it is thought that these arterial dilations allow very high hematocrits to be tolerated, and that the great dilation of blood vessels, rather than the hematocrit itself, could be responsible for the symptoms of chronic mountain disease, impairing the ability of affected people to tolerate certain health problems. Sleep apnea induced by hypoxia during sleep could favor the development of pulmonary arterial hypertension. This study could help to improve blood viscosity, reduce the deleterious cardiovascular consequences of chronic hypoxia, support recommendations for temporary descent to a lower altitude, and determine the relationship between genetic, epigenetic and physiological specificities by comparing the bodily parameters of Peruvians living at sea level with those of Peruvians living at different altitudes, including residents of La Rinconada with and without high-altitude intolerance.
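As a rough way to see how quickly inspired oxygen pressure falls with altitude, the sketch below (my own back-of-the-envelope addition, not from the article) uses the standard isothermal barometric approximation with an assumed scale height of about 8 km and the usual 47 mmHg water-vapor correction in the airways; the percentages it produces for Mont Blanc and Everest are broadly consistent with the figures quoted above.

```python
import math

# Minimal sketch (assumed values, not from the article): estimate inspired PO2 vs altitude
# with the isothermal barometric formula. Real atmospheres deviate somewhat from this.
P_SEA_LEVEL_MMHG = 760.0     # standard barometric pressure at sea level
SCALE_HEIGHT_M = 8000.0      # approximate atmospheric scale height
FIO2 = 0.2093                # oxygen fraction of dry air (roughly constant with altitude)
PH2O_MMHG = 47.0             # water-vapor pressure in the warmed, humidified airways

def inspired_po2(altitude_m):
    """Approximate PO2 (mmHg) of humidified inspired air at a given altitude."""
    barometric = P_SEA_LEVEL_MMHG * math.exp(-altitude_m / SCALE_HEIGHT_M)
    return FIO2 * (barometric - PH2O_MMHG)

for place, altitude in [("sea level", 0), ("Mont Blanc", 4807),
                        ("La Rinconada", 5200), ("Everest summit", 8848)]:
    print(f"{place:15s} {altitude:5d} m  ~{inspired_po2(altitude):5.0f} mmHg inspired PO2")
```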


Wednesday, March 20, 2019

CONSCIOUSNESS







TWO THEORIES

I) According to Giulio Tononi (director of the Center for Sleep and Consciousness, University of Wisconsin-Madison), consciousness is an intrinsic property of any cognitive network with specific features in its architecture, a theory Tononi has baptized Integrated Information Theory (IIT), which stipulates that to be conscious is to have an experience (a dream or anything else). And although a human being can have "blank mind" states, usually achieved through meditation, this too is a conscious experience. Tononi and Christof Koch (Allen Institute for Brain Science) have established the essential characteristics of conscious experiences. They are: subjective (they exist only for the conscious entity), structured (their contents are related to each other: the blue book is on the table), specific (the book is blue, not red), unified (there is only one experience at a given time) and definite (the experience has boundaries: some contents belong to it and others do not). From these axioms, Tononi and Koch have deduced the properties that a physical system should have in order to possess some degree of consciousness. IIT does not describe consciousness in terms of information processing, but as the causal power of a system to make a difference to itself. For Koch, consciousness is the ability of a system, acting on itself, to have its past influence its own future: the more power the system has over its own causes and effects, the more consciousness it will generate. For Tononi and Koch, it is not possible to attribute consciousness to systems in which information is merely converted from inputs to outputs ("zombie" digital computers), which, although they can simulate the behavior of conscious beings, lack such a property. According to Koch, in order to be conscious, digital computers would need the correct hardware, adding that any system (organism or artifact) with the required network architecture may have some awareness. John Searle (philosopher of mind and consciousness, University of California, Berkeley) labels IIT a form of panpsychism (the belief that mind and consciousness pervade the entire cosmos). It is not that this theory is false, says Searle, but that it does not even reach the level of being false, adding that consciousness comes in units and panpsychism is incapable of specifying them. Although Koch and Tononi believe that consciousness can be an attribute of many things, a significant amount of it exists only in particular kinds of things, such as specific areas of human brains, with consciousness being an elementary property of living matter. Koch and Tononi have specified criteria to identify what kinds of things are conscious, emphasizing that since consciousness is a special kind of information-processing network, there is a measure of information integration, Φ, the amount of irreducible cause-effect structure, which expresses how much the cognitive network as a whole can influence itself, depending on interconnectivity and feedback. If a larger network can be divided into small networks that exercise no causal power over one another, it will have a low Φ value, no matter how many processing nodes it has. For Koch, any system whose functional connectivity and architecture yields a value of Φ greater than zero has a minimum of conscious experience, including the regulatory biochemical networks of living cells and electronic circuits with the right feedback architecture. Even simple matter has a minimal Φ (atoms can influence other atoms).
However, systems that have enough Φ to "recognize" their own existence as we do are rare. II) The other theory, held by Stanislas Dehaene (Collège de France, Paris), argues that conscious behavior is born when a piece of information enters a global workspace (Global Workspace: GW) within the brain, from where it is broadcast to brain modules associated with specific tasks.
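To make the intuition of "integration" concrete, here is a deliberately simplified sketch of my own. It is not Tononi's actual Φ algorithm, which searches over all partitions and cause-effect repertoires; it only compares the past-future information carried by a tiny two-node system as a whole with the information carried by its nodes taken separately. A system whose parts only make sense together scores high, while one whose nodes ignore each other scores zero.

```python
# Toy "whole minus parts" integration measure for a 2-bit deterministic system.
# Illustrative only: NOT the real IIT Phi computation.
import numpy as np
from itertools import product

def mutual_information(joint):
    """Mutual information (bits) between the two axes of a joint distribution."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px * py)[nz])).sum())

def toy_phi(next_state):
    """next_state: (a, b) -> (a', b'). Whole-system past/future MI minus the parts' MIs."""
    states = list(product([0, 1], repeat=2))
    n = len(states)
    joint = np.zeros((n, n))                  # P(past, future) under a uniform past
    for i, s in enumerate(states):
        joint[i, states.index(next_state(*s))] += 1.0 / n
    whole = mutual_information(joint)
    parts = 0.0
    for node in (0, 1):                       # marginal dynamics of each node alone
        jp = np.zeros((2, 2))
        for i, s in enumerate(states):
            jp[s[node], next_state(*s)[node]] += 1.0 / n
        parts += mutual_information(jp)
    return whole - parts

# Two nodes that copy EACH OTHER: the whole carries information the parts lack.
print(toy_phi(lambda a, b: (b, a)))   # -> 2.0 bits of "integration"
# Two nodes that each copy THEMSELVES: cutting the system apart loses nothing.
print(toy_phi(lambda a, b: (a, b)))   # -> 0.0
```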

[Figure: Dehaene's Global Workspace model]

The GW provides a kind of information bottleneck, characterized by the fact that only when one conscious notion slips away can another take its place. With the help of brain imaging, Dehaene has studied these bottlenecks, networks of neurons in the cerebral cortex. For him, consciousness is created in the prefrontal cerebral cortex by the workspace itself, a feature of any procedural information system capable of broadcasting information widely to other processing centers. Based on the above, Hakwan Lau (psychologist, University of California, Los Angeles) believes that GWT is mostly related to function and cognitive access, while IIT is primarily related to phenomenology. Faced with these two structured theories, the Templeton World Charity Foundation will finance scientific tests of both, with Dawid Potgieter as coordinator. According to Koch, only GWT and IIT are quantitative and predictively verifiable. Potgieter, who plans a structured collaboration between these two adversaries, will make available to them a series of techniques to monitor brain function (fMRI, electrocorticography and magnetoencephalography) for three years, involving 10-12 laboratories. A recent study by Dehaene, analyzing with fMRI the brain activity of volunteers who were conscious or under general anesthesia, showed two different patterns: a) during unconsciousness, brain activity persisted only in regions with direct anatomical connections, while b) during conscious activity, complex long-distance interactions did not seem to be restricted by the neural wiring. According to Koch, we will soon have intelligent machines that model most of the characteristics of the human brain and are conscious. It can be concluded that the two previous theories were born from the principle that the brain functions as a supercomputer with special characteristics (quantum-like?), whose purpose is to maintain constant, instantaneous and permanent control of hundreds of thousands of bodily functions. And, since constant body control can also mean "preventing bodily malfunction", we recall the proposal of Antonio Damasio (neuroscientist, University of Southern California), who described consciousness as an emergent process (in his book Self Comes to Mind, 2010) responsible for maintaining and controlling the normality of all the physiological systems of the human body, creating the evolutionary need to recognize our bodily self and any bodily malfunction that could render our body useless. For this reason, and according to Damasio himself, the out-of-body experience would be promoted by the need for consciousness to leave a body in crisis (imminence of death or similar) in order to continue monitoring the proper functioning of the body from the outside.


Wednesday, February 27, 2019

CANCER EXPLAINED




CANCER: WHAT'S NEW?


[Figure taken from PNAS]

Here are three recent news items related to the positioning of holistic concepts, extracted from observations made in humans and animals, and their respective validation, to prevent or eliminate cancer. I) The observation that: a) elephants and naked mole rats rarely acquire cancer, while ferrets and dogs acquire it at very high frequencies. In 1977, Richard Peto postulated that the cells of large bodies, undergoing more cell divisions, should run a higher risk of acquiring cancer, and that small, short-lived animals should not. But when Peto studied the incidence of cancer in large animals, he did not find what his theory predicted (Peto's paradox). In 2015, Joshua Schiffman (High Risk Pediatric Cancer Clinic at Huntsman Cancer Institute, University of Utah) and collaborators discovered that elephants have 40 copies of the tumor suppressor gene TP53 (also present in humans and other animals), which, upon detecting irreparable DNA damage, promotes the death of the cells involved, avoiding cancer. Now Schiffman plans to introduce TP53 genes into humans via nanoparticles. In 2018, Vincent Lynch showed that elephants also have 11 extra copies of the leukemia inhibitory factor (LIF) gene; one copy, LIF6, is activated by TP53 in response to DNA damage. b) Naked mole rats (Heterocephalus glaber), rodents that live more than 30 years, exhibit an unusually abundant extracellular matrix; by keeping some space between the cells, it reduces the risk of cancer. c) The resistance to cancer of the South American capybara (Hydrochoerus hydrochaeris) is explained because, although the insulin signals of these animals allow them to grow larger than their ancestors, the collateral effect of regulating (counteracting) this growth is a hypervigilant immune system turned against cancer; at the same time, the capybaras' own growth signals can be hijacked by cancer cells to promote their own growth and proliferation. d) Amy Boddy (University of California, Santa Barbara) argues that during mammalian pregnancy the placenta acts as a fetal tissue that invades the mother's womb, promoting the proliferation of blood vessels and suppressing the maternal immune system so that the mother tolerates cells from a genetically different fetus. In the same way, a metastatic tumor suppresses the immune system so that genetically different cells are tolerated. Sometimes, when a gene regulates more than one function, those functions can come into conflict.

II) The theory promoted by Panos Anastasiadis (Department of Cancer Biology, Mayo Clinic, Florida) proposes a close and vital interrelation between the growth of normal cells and its corresponding brake, a brake that does not exist in cancer cells. For Anastasiadis, the mechanisms that keep the growth-brake cell system in a normal or altered state reside in the intercellular junctions. Until recently, he says, it was thought that these adhesion molecules functioned only as a glue holding cells together, when in fact their real function is to generate specific structures that produce intercellular communication signals through microRNAs (regulators of gene expression). When communication through these intercellular molecular structures is interrupted, tumorigenesis would occur; that is to say, badly regulated microRNAs would promote out-of-control cell growth. In this regard, Anastasiadis notes that when normal cells come into contact, a specific subset of microRNAs is able to suppress genes that promote cell growth. Therefore, by normalizing the function of defective microRNAs in cancer cells, it should be possible to reverse aberrant growth, reprogramming cancer cells back into normal cells. The pending task here is to identify the adhesion proteins that interact with the microRNAs, which ultimately orchestrate complete cellular programs of simultaneous regulation of groups of genes. Experimentally, Anastasiadis has restored microRNAs to cancer cells and effectively reprogrammed them into normal cells. Anastasiadis concludes that the anomalous cell forms identified by some histopathologists as cancer are rather the expression of defective intercellular adhesions, and not always evidence of malignancy. III) Finally, there is currently great progress in the rapid identification of traces of tumors in small blood samples containing abnormal DNA. In 2016 the company Grail was created, with a capital of $1.5bn and contributions from Microsoft and Amazon, to detect multiple cancers at once, before the onset of symptoms. The identified fragments, known as circulating tumor DNA (ctDNA), are shed by cancer cells. Although there are still problems of sensitivity and specificity, high costs and the quantity of blood to be drawn, the goal is to detect 10 cancers at a time. There is confidence because it is now possible to scan very small fragments and identify the few with alterations that may indicate cancer. Other companies in this race are Epigenomics, Guardant Health, Breathomics and Owlstone Medical.
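Why sensitivity and specificity matter so much for a screening test aimed at asymptomatic people can be seen with a short Bayes' rule calculation. The sketch below is illustrative only; the prevalence, sensitivity and specificity values are assumptions of mine, not figures reported for any particular ctDNA test.

```python
# Illustrative sketch (assumed numbers, not Grail's published figures): how the
# positive predictive value of a screening test depends on disease prevalence.
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test) by Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Even a test with 99% specificity gives many false alarms when cancer is rare.
for prevalence in (0.005, 0.01, 0.05):   # hypothetical fraction of screened people with cancer
    ppv = positive_predictive_value(prevalence, sensitivity=0.80, specificity=0.99)
    print(f"prevalence {prevalence:.3f} -> PPV {ppv:.2f}")
```

With the assumed 0.5% prevalence, fewer than a third of positive results would correspond to real cancers, which is why specificity is the critical figure for a multi-cancer screening test.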


Saturday, February 16, 2019

DEEP MIND





IT IS THE STRUCTURE and the TYPE OF COMPUTER


In an updated paper in Science about the current possibilities of creating a functional artificial brain similar to the human brain, the weaknesses and strengths of such efforts are laid out. The analysis begins with the question "Can machines think?", self-answered by Alan Turing in 1950, who alluded to: a) a mathematical objection formulated by Kurt Gödel and by Church, Kleene, Rosser and Turing, referring to digital computers, which, despite having in principle unlimited capacity, would fail to answer certain questions, or would answer them wrongly. Penrose suggested (1989, 1994) that certain molecular structures of the human brain could adopt states of quantum superposition and entanglement (all the more so if the charges transiting human neural circuits do so in ionic form), paving the way for the future use of quantum computers to equal human brain performance. b) Returning to Turing: he closed his argument by postulating, back in 1950, that for machines to start thinking like a human brain the important thing was to imitate complex biological neural computation, taking human neural circuits as a guide. That imitation has by now made possible systems modeled on the cerebral cortex (deep networks), built from successive layers of neuron-like elements connected by artificial synapses, producing speech recognition, complex game playing, text translation, computer vision, classification and segmentation of objects, image captioning (producing a short verbal description of an image), visual question answering and communication about the content of an image, as well as non-visual tasks: analyzing humor and sarcasm, comprehension of intuitive and social matters, serving as personal assistants, medical diagnosis, autonomous driving. Despite this, there are problems to be solved. I) Improving learning, that is, the adjustment of synapses so that training inputs produce the desired output patterns. II) Achieving, with deep artificial neural networks, learning that goes beyond simple memorization, producing sensible outputs for cases not covered during training. III) In this perspective, the incorporation into AI (artificial intelligence) of so-called reinforcement learning (RL) stands out: the mapping of situations to actions so as to maximize a reward or reinforcement signal, not by being told which actions to take, but by discovering through trial, error and reward which option works best. RL models combined with deep networks are currently applied to video games, Go and chess, reaching world-champion level in the latter with only about 4 hours of training. IV) However, the most notable differences between biological circuits and artificial neural network systems are structural: biological neurons are complex and diverse in morphology, physiology and neurochemistry; the inputs to excitatory pyramidal neurons are distributed over very complex dendritic trees; cortical inhibitory neurons exhibit different functions; and none of these heterogeneities are included in artificial neural networks. Biological cortical circuits are also more complex than artificial neural network models, including lateral connectivity between neurons as well as local connections, longer-range connections, and top-down and bottom-up connections across the hierarchy of cortical regions.
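As a concrete, deliberately tiny illustration of the trial-error-reward idea described in point III, here is an epsilon-greedy bandit learner in Python. It is my own minimal sketch, not code from the Science paper, and the reward probabilities are invented.

```python
import random

# Minimal sketch of reinforcement-style learning (invented numbers, not from the paper):
# an epsilon-greedy agent discovers by trial, error and reward which of three
# "actions" pays off most often, without ever being told the answer.
REWARD_PROB = [0.2, 0.5, 0.8]     # hidden payoff probability of each action
EPSILON = 0.1                     # fraction of the time the agent explores at random

def run(steps=5000, seed=0):
    rng = random.Random(seed)
    value = [0.0] * len(REWARD_PROB)   # running estimate of each action's payoff
    count = [0] * len(REWARD_PROB)
    for _ in range(steps):
        if rng.random() < EPSILON:                      # explore
            action = rng.randrange(len(REWARD_PROB))
        else:                                           # exploit the best estimate so far
            action = max(range(len(REWARD_PROB)), key=lambda a: value[a])
        reward = 1.0 if rng.random() < REWARD_PROB[action] else 0.0
        count[action] += 1
        value[action] += (reward - value[action]) / count[action]  # incremental mean
    return value

print(run())   # the estimates converge toward the hidden probabilities; the best action emerges
```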
V) It is expected that artificial neural networks will promote a real understanding of human cognition, in order to address broader aspects of cognition and artificial general intelligence (AGI); meanwhile, these techniques continue to be refined under the guidance of neuroscience. VI) There are other functional differences between biological and artificial systems. A) Current AI models lean heavily on the empirical side, using simple and uniform artificial network structures and large training data sets for learning. Biological systems carry out tasks with limited training, building on pre-existing network structures already encoded in their circuits before learning; with these, insects, fish and pigeons perform complex navigation tasks using an elaborate set of innate mechanisms with sophisticated computational capabilities. B) Likewise, children develop complex cognitive and perceptual abilities with little training in the first months of life: they recognize complex instruments such as their hands, follow people with their gaze, and distinguish visually whether certain animals are dangerous or not, while developing an incipient understanding of physical and social interactions through unsupervised learning, thanks to innate cognitive systems generated by evolution, which facilitate the acquisition of meaningful concepts and skills. Recent models of visual learning in childhood show that meaningful and complex concepts are neither innate nor explicitly taught to the child; rather, innate proto-concepts provide internal teaching signals that guide the learning system along paths leading to the progressive acquisition and organization of complex concepts with little or no explicit training. For example, a particular pattern of moving images provides an internal signal for the recognition of the child's own hands, which helps them manipulate objects and guides the learning system in the direction of their gaze. Innate structures implemented in cortical regions with specified connectivity warn early of specific input errors. Perhaps in the future such pre-existing structures could be coupled to artificial neural models to simulate human learning: imagine computational learning methods that start from proto-concepts and structures inserted into humans or robots, which then become familiar with unknown environments quickly, efficiently and flexibly, very differently from current learning procedures. Summing up: I) following Shimon Ullman, we believe that each machine or robot of the future that is to possess a human-like brain should have a basic code inserted from the start, not to obey commands, but to work out what it should be (or do) in each new situation or environment, using analogies, logic or emergent thinking. II) Today's hyper-supercomputers have no future for this purpose because: a') they transmit between artificial neurons using electrons circulating through metallic media; a'') human neurons send messages to other neurons using ions, which are better suited to quantum models (entanglement and others), allowing almost simultaneous transmission across all possible planes, including feedback. Although current computers transmit information across several planes, they lack totalizing organizer artificial neurons, which, to be functional, should be spherical or pyramidal.
b') Although the human brain receives about 20% of the total blood flow of the human body at every moment, it does not get very hot, because the membranes of its neurons are covered by a fatty, resistor-like layer that allows them to capture electrons from the environment and convert them into ions that carry the message to millions of other neurons. b'') This resistor model would have to be copied and included in a quantum computer. c') The problem of the reduced space of an artificial brain would be solved in quantum computers by using artificial neurons with spherical or pyramidal shapes, capable (or almost) of dealing with thousands or millions of other artificial neurons.


Sunday, February 10, 2019

DARK MATTER, DARK ENERGY and BLACK HOLES


CHALLENGES AT THE FRONTIERS OF SCIENCE, AND HOW TO RESOLVE THEM

1) Identify problems, choose the main problem (the one that explains the rest). Develop a hypothesis and test it. 2) Analyze the results, assessing the evidence. 3) New and improved hypothesis. Questions and cross-questions to the maximum. Incorporation of contributions and ideas from other scientific areas. New hypothesis and so on, again and again. Let's see 2 cases:





I) First case. Recognized as an interstellar object ('Oumuamua) by the International Astronomical Union, this entity, discovered in October 2017 and supposedly natural, has been questioned by the astronomer Abraham Loeb (Harvard University, Department of Astronomy), given its rather bizarre features for an asteroid or a comet of our solar system: a) when passing close to the Earth it rotated on itself every 8 hours; b) its brightness changes by a factor of 10; c) it is about 5 times longer than it is wide; d) it does not emit much heat; e) it reflects light well but retains it poorly; f) it does not absorb light well; g) not much is known about its surface because it has not been possible to take a clear close-up image of it; h) it follows a path not governed by the Sun's gravity alone; i) it exhibits an extra force that drives it. An object with such unusual characteristics and such limited data has forced Loeb to propose several alternatives for it, instead of accepting it exclusively as a natural entity, once again dividing an elite of world scientists: a minority argues that, given such unusual features, one possibility among several is that it is an extraterrestrial probe launched by an intelligent civilization; the other side, scientists with less open minds, insists that it is a natural object and nothing more. A dispute at the very frontier of science, whose arguments as always bring us back to Plato and the dialecticians, and whose final resolution will once again be determined by the evidence. A phenomenon with such unusual characteristics can be anything, until proven otherwise.





[Figure from Quanta Magazine]

II) The other case corresponds to the enigmatic subject of dark matter, dark energy and black holes, which make up about 95% of our universe, and which the Yale University theoretical astrophysicist Priyamvada Natarajan has proposed to study coherently, using various methods including mapping them. The central theoretical idea of the Indian astrophysicist is that supermassive black holes are a fundamental part of the structure, energy budget and evolution of our universe, because they sit at the centers of galaxies and determine the shape and other characteristics of those galaxies. It is a very difficult subject, located at the very frontier of science: how do you demonstrate such a theory if the phenomena in question are characterized by their invisibility? To face this challenge, Natarajan says she has the stamina, the mental clarity and, above all, knows what she is doing, aided by the mathematical approaches that are indispensable in this case. 1) She says that one of the first things to do is to understand why all current physics collapses when we get close to a black hole; answering that question, she says, requires a great deal of mathematical support. 2) She also argues that there is a certain relationship, or interweaving, between the mass of the stars in the central part of galaxies and the mass of the central black holes that those galaxies host, and that theoretical-physics approximations are needed here to understand how the early growth of the two is linked, simultaneously. 3) Regarding the mapping, Natarajan notes that the Hubble telescope has provided incredible images for mapping and analyzing dark matter, offering indirect images of this matter by estimating the extent and curvature of the light emitted by distant galaxies. She adds: "But not everything depends on the Hubble images. My maps need mathematical guidance, especially when small clumps of dark matter are identified, which correlate perfectly with cold, non-interacting dark matter." In images taken by the Hubble telescope of the galaxy cluster MACS0416, and in the same image with the inferred dark-matter distribution superimposed in blue (inferred from the distortion of light from distant background galaxies), one can see how supermassive black holes could initially have formed at the centers of galaxies. The scientist hopes that larger telescopes will make it possible to identify bright quasars (luminous galactic centers), powered by black holes of up to billions of solar masses, from when the universe was only 10% of its current age. Part of the above is supported by Natarajan in two articles (2005/2006), in which, discussing "massive seeds", she raised the extraordinary idea that it is possible to bypass the formation of a star and form instead the seed of a massive black hole, of approximately 10,000 to 1 million solar masses, able to explain the emergence of quasars at very early times in our universe. 4) On the spiral forms of galaxies, Natarajan argues: "In the same way that when you pull the plug of the bathtub the water forms a vortex, something similar happens in the early universe with the formation of gas discs, with the gas quickly funnelling toward the center." This direct collapse into "black hole seeds" is part of a larger cosmic evolutionary history, followed by the generation of a population of black holes: how they form, evolve, turn into quasars, switch off, and shine until today.
5) The idea of the direct collapse of supermassive black holes (the early quasars of the universe being fed by supermassive black holes formed this way) is today a leading theory of early-universe formation, supported by successive small pieces of evidence. To back this up, in a computer simulation Natarajan successfully programmed the direct collapse of black holes, populated the early universe with them and propagated the growth of these galaxies until today. 6) The noted Indian scientist hopes that the James Webb Space Telescope, to be launched in 2021, will observe space deeply and backward in time, paying attention to the formation of galaxies in the early universe, observations that will test Natarajan's idea of early direct collapse. The James Webb telescope will also be able to image dark matter more accurately. Presciently, Natarajan affirms that if the new telescope identifies quasars in the first epochs of the universe, these will have to be black holes formed by direct collapse.
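The lensing-based mapping mentioned in point 3 rests on a simple geometric quantity, the Einstein radius of the lensing mass. The sketch below is a rough back-of-the-envelope illustration of my own, with assumed round numbers for a galaxy-scale and a cluster-scale lens and crude angular-diameter distances; the values are not taken from Natarajan's papers, but they show why cluster lenses distort background galaxies on scales large enough to map.

```python
import math

# Back-of-the-envelope sketch (assumed numbers, not from Natarajan's work): the
# Einstein radius of a point-mass lens, theta_E = sqrt(4GM/c^2 * D_ls / (D_l * D_s)),
# which sets the angular scale of the light distortions used to map dark matter.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8              # speed of light, m/s
M_SUN = 1.989e30         # solar mass, kg
MPC = 3.086e22           # meters per megaparsec

def einstein_radius_arcsec(mass_solar, d_lens_mpc, d_source_mpc):
    """Einstein radius (arcseconds) of a point-mass lens for given distances."""
    d_l = d_lens_mpc * MPC
    d_s = d_source_mpc * MPC
    d_ls = d_s - d_l                     # crude stand-in for the lens-source distance
    theta = math.sqrt(4 * G * mass_solar * M_SUN / C**2 * d_ls / (d_l * d_s))  # radians
    return math.degrees(theta) * 3600

# A galaxy-scale lens vs a cluster-scale lens, both halfway to a distant source.
print(einstein_radius_arcsec(1e12, 1000, 2000))   # ~2 arcsec for a large galaxy
print(einstein_radius_arcsec(1e15, 1000, 2000))   # tens of arcsec for a massive cluster
```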

