
Friday, August 31, 2007


From top to bottom: 1) Mouse and supermouse 2) Supermuscles/supermouse (from Der Spiegel) 3) German child with congenital absence of the myostatin gene (from NEJM)

I) The first item concerns the stem cells that generate mouse skeletal muscle cells and a possible blockade in the production of the protein myostatin, which normally inhibits the growth of skeletal muscle. Ten years ago, McPherron AC, Lawler AM and Se-Jin Lee (Johns Hopkins University, Baltimore, Maryland. 1997: Regulation of skeletal muscle mass in mice by a new TGF-β superfamily member. Nature 387: 83–90) created the first mutant mouse (lacking the myostatin gene), with double the muscle mass of normal mice. Now, by raising the levels of another protein, follistatin (which normally inhibits the hormone FSH), in mice with congenital absence of the myostatin gene, S.J. Lee has quadrupled the muscle mass of mice that otherwise look normal. These super-mice exhibit 73% more individual muscle fibers, and the fibers themselves are 117% larger. As Lee indicated, this knowledge will be useful for treating muscular dystrophies and related neuromuscular conditions that damage muscles and nerves, for creating leaner, bulkier meat livestock, for improving the muscles of older adults, and so on. But now that clinical trials are under way with drugs that block myostatin and raise follistatin levels, the topic points toward future methods of sports doping in elite athletes. With such muscles it might be possible to run the 100 m in 7 or 8 seconds, improve records in the marathon, cycling and swimming, and shape super-bodybuilders. Nothing would prevent someone in the future from deliberately gestating mutant children (without myostatin), undetectable by antidoping controls. Two children (one German and one American) with supermuscles and congenital absence of the myostatin gene are already known to exist.

II) The second news item concerns the rapid normalization of blood sugar levels (98% of cases in less than one month) in patients with type II diabetes (including insulin-dependent patients) subjected to Roux-en-Y gastric bypass, RYGB (removal of the duodenum and more than 90% of the stomach). A team of scientists led by Francesco Rubino of the Catholic University of Rome reached these conclusions after operating on 7 type II diabetics, with and without overweight. The finding is not entirely new: a much larger study of 22,094 patients demonstrated complete reversal of type II diabetes in 84% of cases after RYGB. Several hypotheses try to explain the phenomenon: 1) after the bypass there would be a persistent increase in glucagon-like peptide 1 (GLP-1), secreted by intestinal cells and normally in charge of lowering blood sugar; 2) there would be a persistent decrease in ghrelin, a peptide that stimulates appetite; 3) removing the duodenum would eliminate resistance to the action of insulin. Now that some offer similar though less aggressive surgeries to diabetics, we object to the method, insisting on more detailed studies of the new physiological rearrangements (hormonal and otherwise) after surgery, aspiring to manage blood sugar levels rationally, with medication. The bypass reduces the stomach (normally able to expand up to 1000 ml) by more than 90%, leaving barely 16 ml of residual capacity, so that after ingesting small quantities of food the patient feels stomach fullness, growing satiety and/or indifference to food. On the other hand, the small intestine, converted anatomically and physiologically into a neo-stomach, is in the long run unable to support its new functions, developing malabsorption of fats, starches (on which bacteria act, causing bad odor), minerals, fat-soluble vitamins and others.




Thursday, August 30, 2007


Common people make daily decisions (including the most important of their lives) differently from the scientific way (reasoned calculation of variables and probabilities by means of computers). Pressed by limited time and a shortage of knowledge and/or computational capacity, lay people use rules of thumb: quick, unplanned heuristics that prompt people to act on certain environmental cues. The method allows one to predict with success, for example, the size of a city, based on its number of universities or inter-city trains, or the success of its soccer team. Gerd Gigerenzer, a director at the Max Planck Institute for Human Development in Berlin, calls this quality intuition, a system that exploits the synthesis capacities of the human brain, accumulated through time, experience and evolution. Its basic characteristic is that, among many environmental variables, it always chooses just one (the best), discarding the rest as unnecessary. Gigerenzer argues that the rule of thumb frequently beats the wisdom of experts, adding that intuitions (qualitative leaps) are also used in science, whether or not sufficient data exist.

Gigerenzer also says that most people are not equipped to manage uncertainty or take calculated risks, although strategies for doing so exist. Rather than denying the uncertainty in many aspects of our lives, the common citizen should recognize his lack of knowledge of the principles of statistical thinking (numerical illiteracy, in John Allen Paulos's term). Our recent history was a friend of certainty: algebra, geometry and calculus taught us to think in a world of certainty. Some doctors say, "If I tell my patients that the treatment is not 100% sure, they become nervous." In fact, professionals should provide all relevant information (pros and cons). According to some surveys, both in the USA and in Europe, if a 40-year-old woman is informed that her mammography is positive for cancer, neither she nor her doctors are clear whether her chances of having cancer are 99%, 95%, 90% or only 50%. The probability that a woman with (previously diagnosed) cancer has a positive mammography is 90%. If she does not have cancer, the probability that her mammography is positive anyway (a false positive) is 9%. In technical terms, her chance of having cancer given a positive result is about 9%. According to Gigerenzer, when such a woman asks "Do I have cancer?", a third of experienced doctors answer that she has a 90% chance, another third say 50%-80%, and the remaining third say between 1% and 10%. Gigerenzer recommends explaining this with natural frequencies (as in our daily life, which demand less computation): think of a sample of 100 women. One woman (1% of the sample) has proven cancer, and she has a 90% chance that her mammography is positive. Nine of the 99 remaining women (without cancer) will also have positive mammographies (9 false positives). In total, about 10 of 100 women will have positive mammographies, and only 1 of those 10 positive mammographies will belong to a woman with cancer: a rather low probability (roughly 1 in 10).
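The natural-frequency reasoning above can be verified with a short calculation (a sketch using only the figures quoted in the post: 1% prevalence, 90% sensitivity, 9% false positives):

```python
# Checking the mammography numbers with Bayes' theorem.
prevalence = 0.01        # P(cancer): 1 woman in 100
sensitivity = 0.90       # P(positive | cancer)
false_positive = 0.09    # P(positive | no cancer)

p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
p_cancer_given_positive = prevalence * sensitivity / p_positive
print(round(p_cancer_given_positive, 3))  # ≈ 0.092, i.e. roughly 1 in 10

# The same result as natural frequencies, per 100 women:
n = 100
true_pos = n * prevalence * sensitivity            # ≈ 0.9 women
false_pos = n * (1 - prevalence) * false_positive  # ≈ 8.91 women
print(round(true_pos + false_pos, 1))              # ≈ 9.8 positives in all
```

Both routes give the same answer; natural frequencies simply make the 9-to-1 ratio of false to true positives visible without any division.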




Sunday, August 26, 2007


The recent birth of quadruplets (between 1.07 and 1.33 kg, by cesarean section, in good health, to a 35-year-old Canadian mother, without fertility drugs, at Benefis Hospital, Montana/USA, 8 weeks earlier than expected) invites some comments: I) The first concerns the maximum number of embryos and fetuses that can be harbored in a human uterine cavity of average size (7.5 cm long, 2.5 cm thick, area = 18.75 cm²). A normal uterus could house about 1000 embryos of 15-20 days' gestation, but only some 24 to 30 embryos of 1 month (0.6 cm long). From then on, the rapid growth of embryos and fetuses outstrips the limited uterine expansion, and the problems begin. According to historical medical records, the maximum number of fetuses and newborns (NBs) from multiple births carried to term has been 10 and 15, respectively. Nevertheless, beyond quintuplets, most NBs live only a few days; many are born dead, underweight (as little as 320 g), or with mental retardation, hyaline membrane lung disease, cerebral palsy, etc. Thus the limiting factor in healthy multiple births (up to 15 or more at a time) is the insufficient uterine expansion. Considering that a woman releases about 400 mature ova over her lifetime, each woman could potentially have 2 large gestations of 200 children each, in 2 separate deliveries. For this, the embryos would be extracted at 15 days, by cesarean surgery, to be inserted into the uteruses of 200 surrogate mothers.
II) The other possibility is to insert many embryos into gigantic endometria built with stem cells (created beforehand, outside human beings). Alternatively, artificial uteruses similar to human ones would be built, fitted with chips controlling food, oxygen, appropriate temperature, etc. In Brave New World, Aldous Huxley conceived human uteruses of enormous size, suspended on the walls of fertilizing rooms, each harboring 1000 human embryos. III) That 1 or 2 NBs are almost always born healthy suggests that this number was established by natural selection. NBs with a big head (almost 30% of their body), curved on themselves, with an average weight of 3 kg, receive from their maternal uteruses the best feeding, the best possible space and protection. IV) Finally, the fact that all the Canadian quadruplets are girls does not necessarily establish that they are identical monozygotics (identical genetic material, all 4 born of a single egg), as the whole press said. Other combinations are possible for them (mixtures of fraternal, genetically different, and identical, genetically identical). Births of identical quadruplets are rare (1 in 13 million), because in these cases the original fertilized egg divides into 2 daughter cells, then once more to give triplets and finally once more to give quadruplets. Multiple births occur naturally, as a result of better nutrition, extended life spans, infertility treatments (35% when IVF is used, because several embryos are implanted) and the use of fertility drugs, in young women and in women over 35.




Saturday, August 25, 2007


In conversations with healthy men of 80 years or more, when asked whether they remain sexually active (normal coitus, oral sex or other practices, in the last 12 months), we have always received an affirmative answer, although obviously with diminished monthly frequencies. Others, with health limitations, argue that in spite of the handicap, desire never dies. Well known is the story of an illustrious Greek who, on the brink of 90 years and in fragile health, said: "Finally, I have gotten rid of this tyranny." At present some older people remain sexually active with certain aids (Viagra and younger women for older men; for women: younger men, vaginal lubricants and feminine hormones of plant origin) and the exotic products that folk tradition advocates and history records.

Tessler Lindau et al., after studying (by means of questionnaires, biometric evaluations and auxiliary tests) the activity, behavior and sexual problems of 3005 American adults (men and women of 57-85 years), report in NEJM that 26% of men of 85 years remain sexually active, their greatest limiting factors being bad health (urogenital infections, diabetes, cancer) and erectile difficulties. 73% of people between 57 and 64 years declared themselves sexually active, as did 53% of those between 65 and 74. For women it was difficult to keep a steady sexual partner into advanced age. The main problems of women are diminished desire (43%), lack of vaginal lubrication (39%), absence of climax (34%), depression, social isolation, medications, etc. Very few people confided their sexual difficulties to their doctors. Men and women in good health were the most sexually active, and vice versa. Masturbation is practiced into advanced ages.


Friday, August 24, 2007


I) Until the early 1900s, time was conceived as a unidirectional, irreversible and immutable arrow: past-present-future. But in 1905 (Special Theory of Relativity), Einstein proposed that the time interval between 2 events was linked to the speed at which the witnesses moved: 2 observers moving at different speeds will experience different durations of time for the same event, each using his own inertial reference frame. The comparison of spaces and times between inertial observers is made by means of Lorentz transformations. If A (one of the twins A and B) travels in a rocket at nearly the speed of light toward a nearby star while B remains on Earth, then when A returns to Earth (after 10 Earth years) he will be 10 years younger (time dilation; it happens because A, in effect, traveled to the future). According to Einstein, objects traveling near the speed of light experience the passage of time more slowly. Trips to the future have been demonstrated in subatomic particles traveling at nearly the speed of light: muons (with a well-defined mean lifetime, traveling in colliders) decay more slowly than usual. It is possible to travel to the future at great speeds, and also by the action of gravity. II) When, from 1940 on, evidence of the origins of the Universe began to appear, time began to be considered just one more variable within a coordinate system, it being accepted that a few nanoseconds after the Big Bang, the time contained within it was strongly dependent on the other variables.
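The time dilation described above follows from the Lorentz factor γ = 1/√(1 − v²/c²). A minimal sketch of the twin example (the 99.5%-of-c speed is an illustrative assumption, since a massive traveler can approach but never reach c):

```python
import math

def gamma(v_frac_c):
    """Lorentz factor for a speed of v_frac_c * c (0 <= v_frac_c < 1)."""
    return 1.0 / math.sqrt(1.0 - v_frac_c ** 2)

# Twin A travels at 99.5% of c; 10 years pass on Earth for twin B.
earth_years = 10.0
v = 0.995
traveler_years = earth_years / gamma(v)
print(round(gamma(v), 2))        # ≈ 10.01: Earth time runs ~10x faster
print(round(traveler_years, 2))  # ≈ 1.0 year ages for A
```

The same factor explains the muons: at collider speeds γ is large, so their mean lifetime as measured in the laboratory is stretched by that same multiple.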

III) In the General Theory of Relativity (1915), Einstein proposed that space-time could be curved by internal gravitational fields generated by very heavy masses, able to drag nearby space-time along with them. In General Relativity, as in other metric theories of gravitation, space-time is treated as a 4-dimensional Lorentzian manifold that curves in the presence of mass and energy. Based on this, Kip S. Thorne (1980) proposed a model of a time machine using wormholes. A wormhole would permit round-trip journeys through space-time. According to Thorne, traveling through wormholes requires exotic matter (generating antigravity) to fight the natural tendency of a massive system to collapse into a black hole. If one of the mouths of the wormhole is connected to a neutron star, its gravity will slow time by up to 30%. Against the criticism that black holes would permit only one-way journeys, Paul Davies (2001, How to Build a Time Machine) proposed constructing a machine using a wormhole interconnected with 2 black holes, with doors toward the past and the future. Stephen Hsu and Roman Buniy of the University of Oregon/USA have questioned the viability of wormholes, because according to their calculations travelers could appear in any place and time, given the instability of wormholes. The important thing about these contributions is that they opened the door to trips toward the past. Already in 1948, Kurt Gödel of the Institute for Advanced Study in Princeton, N.J., had worked out a solution to Einstein's gravitational field equations describing a rotating universe, in which an astronaut could travel in closed loops in space, reaching his own past, because gravity affects light.
Now Amos Ori (2005, Technion, Israel Institute of Technology, Haifa) has proposed a time machine based on gravitational fields (potentially able to evolve into a working time machine, provided the process is not destroyed by instability, and operating only at times later than the moment of its construction) that requires neither exotic matter nor black holes and uses only the vacuum existing in space to travel through time. According to Ori, it is possible to travel to the past along local gravitational curvatures of space-time, twisted on themselves like a doughnut (torus) of arbitrary size. Some scientists doubt the capacity of Ori's machine to overcome the instability of the compact internal vacuum. Davies argues that a sudden, large increase of energy within the doughnut would destroy it, which is why he prefers to use black holes, these being more stable. IV) Trips to the past face paradoxes such as the traveler who kills his grandfather and prevents his own birth; quantum mechanics answers these by supposing the existence of parallel universes (multiverses, with several alternate possibilities: death/no death, there is a sea/there is no sea, there are 2 suns/there is 1 sun, etc.).




Tuesday, August 21, 2007


I) We now close the cycle on the future of humanity by analyzing the main ideas of Nick Bostrom (Oxford Future of Humanity Institute), who argues that present evidence is insufficient to guarantee a future predetermined toward desirable goals (progress, or advanced organisms with mind, conscience, language and reason). The evolution toward present human forms could have been a simple matter of luck, and nothing guarantees similar future trends. Bostrom uses 2 scenarios and a conclusion. II) If nothing stood in the way, biotechnology could develop post-humans (humans with expanded, optimized capacities thanks to advances in neuropharmacology, artificial intelligence (AI), cybernetics, nanotechnology, etc., guided or restricted by ethical considerations) in the coming years. Antiviral drugs, better diagnostics, vaccines and other tools can be developed to optimize our health. Certain acquired physical characteristics could be transmitted to the next generations by means of genetic engineering. Superintelligent machines could be put in charge of generating inventions on demand, inasmuch as they can be more effective than the human brain in every aspect: thought, analysis and technological creativity. Biotechnological advances would allow Homo sapiens to evolve toward new optimal physical dispositions (fitness), favored by a growing economy (with 9 billion inhabitants around 2050, the world will be 7 times richer than today). Eudaemonic fitness types (the most suitable, by biotechnological means) will thereby be favored, with niches created for them, their goals maximized, and the social structures that develop non-eudaemonic types disrupted. By means of biometrics and modern super-surveillance and supervision, physical reality will be controlled as never before. Nanotechnology will be useful in manufacturing, medicine and computation. Virtual environments will expand our fraction of experience.
Remarkable advances will be made in the prediction of future markets. Our biology, altered by biotechnology, will allow us to control human senescence (lives prolonged up to 1000 years) by slowing the rate at which cellular damage accumulates or by removing the accumulated cellular wastage. Stem cells would be used to replace extinguished ones. Enzymes would be developed to destroy useless and lethal substances. There would be advances in the control of the cerebral circuits of well-being and pleasure, as well as cognitive improvements. Isolated cognitive modules of AI will be connected to other, non-biological modules with uploaded human minds incorporated, communicating among themselves and making the process more economical and productive. It would then be possible to assimilate instantaneously any kind of module (arithmetic, decision making, fast learning, etc.), creating post-human beings. When AI equals human intelligence, it will surpass it just a short time later. Non-biological humans (supercomputers) will be created, into which human brains will be uploaded. Parts of brains will be gradually replaced by chips. 3D maps of the neural networks of human brains will be created and faithfully reconstructed in computers. A human brain uploaded in this way could have a long life, without aging. Copies of these brains would be kept for greater security. Increases in the speed of thought inside a computer could make subjective experience faster (1 hour instead of 1 year). The creation of vast (virtual) states of consciousness supports the supposition that we may be inhabiting worlds or other simulations created in computers by post-humans or extinct humans. The colonization of space will be economical using intelligent machines. In time, the resources of the entire accessible universe will form a usable spherical infrastructure ("Computronium"), with the Earth at its center.

III) The other perspective is that no advances will be made, because of a catastrophic event: the sudden extinction of the human species (random impacts of meteors and asteroids, pandemics, natural disasters, supervolcano eruptions). In this respect, though, the worst risks are anthropogenic, almost always associated with future technological developments (advanced molecular nanotechnology, designer pathogens, future nuclear weapons, high-energy physics experiments and AI with bad intentions). Here the existential risks matter most, conceived as those that could cause the death, or cut off the development potential, of intelligent life on our planet. John Leslie has estimated this probability at 30% (based on the "Doomsday argument") and Bostrom at 20%. Considering that our species has survived volcanic eruptions, meteor impacts and several other natural disasters for thousands of years, the greatest existential risks will be those born of human activity itself: nuclear bombs, pathogenic germs, high-energy physics colliders, or new mutant viruses that are easily transmitted (flu) or lethal (HIV). The actions of some superintelligent machines could decide the future of humanity. Some valuable forms of life could be extinguished because of competitive conduct. IV) The only way to avoid these extinctions (if they represent a predetermined trajectory) is to assume control of evolution, which will require a single power (a singleton), without external competitors, equipped with mechanisms sufficiently integrated to solve problems of internal coordination. Long-term control of evolution will require global coordination. A singleton could take a variety of forms, without necessarily being a monolithic culture or a brilliant mind. A singleton would permit the existence of an ample range of forms of life, including non-eudaemonic goals, and would not favor the creation of agents with values unfavorable to humans.
A singleton could also, however, take the form of a global, permanent oppressive regime, or a dictatorship without competitors.
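The "Doomsday argument" cited above by Leslie and Bostrom can be illustrated with a minimal sketch. The version below uses Gott's birth-rank formulation, which is one common form of the argument (not necessarily the exact calculation Leslie or Bostrom use); the 60 billion figure for humans born so far is a rough, commonly quoted estimate:

```python
# Doomsday-style estimate (Gott's birth-rank form), a sketch.
# Assumption: our birth rank r among all humans ever born is
# uniformly distributed over [1, N], where N is the total number
# of humans who will ever live. Then with confidence c, N <= r / (1 - c).
def doomsday_upper_bound(birth_rank, confidence=0.95):
    """Upper bound on the total number of humans ever to be born."""
    return birth_rank / (1.0 - confidence)

r = 60e9  # ~60 billion humans born so far (rough figure)
print(doomsday_upper_bound(r))  # 95% bound: ~1.2e12 humans in total
```

The point of the exercise is only that a seemingly innocuous uniformity assumption about our place in history yields a finite bound on humanity's future, which is why the argument figures in existential-risk estimates.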




Sunday, August 19, 2007


We will comment today on aspects of national and international aid, and on 2 other items associated with natural phenomena that lack coherent explanations. I) Notable is the daily presence of Alan García at the site of the earthquake, identifying, designing and prioritizing strategies in order to solve the most urgent problems quickly and efficiently. Thanks are due for the solidarity and international aid (water, food, money, removal of rubble, search for survivors), continuously present in Pisco, Cañete, Ica and Chincha. Vital was the presence of the Colombian president Álvaro Uribe and his team in Pisco, Peru, contributing their own experience in the reconstruction of cities affected by similar earthquakes. The most remarkable thing, though, is the boundless solidarity of Peruvian adolescents and young adults (such as has never been seen), tirelessly sorting and channeling donations. Beyond this, they donated a great deal of blood. Praiseworthy is the disposition of the Peruvian private sector, providing material and strategic help. Important, too, is García's readiness to calm spirits and to take measures to counteract the emergence of criminal acts, offering tranquility to the population. A sign of the times: Peru faces the consequences of this terrible natural disaster as a single fist.

II) Between minutes 3:12 and 3:20 of the video we show, one can watch a series of flashes (not one or two, but many), moving from bottom to top, soundless, in the form of triangles with the base upward, as if emerging from the sea or from some sector of the horizon, across the length and breadth of the country (some report having seen them in colors, others in Andean areas), of very short duration (about 1 second). These flashes are vivid emissions of photons (electromagnetic waves generated by interactions with discharges of electrons within clouds, or between clouds and the ground). A curious fact, because such flashes are common in storms, which are of null or rare occurrence in Lima. For Hernán Tavera of the IGP (Geophysical Institute of Peru) and Carlos Zavala, director of CISMID (the Peruvian-Japanese Center for Seismic Research and Disaster Mitigation), the observed flashes are attributable to the arcing of overhead high-voltage power cables, a version with which we strongly disagree, proposing instead an alternative hypothesis: A) According to the descriptions (and what is seen in the video), these flashes show closer resemblance, and similar foundations, to those established for polar auroras, Cherenkov radiation and triboluminescence. In the case of polar auroras, the flashes occur when charged particles (protons and electrons) coming from the Sun (the solar wind) strike atoms and molecules of very low energy level (O, N and N2) present in the Earth's magnetic field, exciting them and forcing them to emit photons (visible light). The colors depend on the type of excited atom. Because of the greater density of atoms, the phenomenon tends to occur at altitudes below 500 km.
B) We estimate that triboluminescence (the generation of currents of protons and electrons, by mechanical separation of surfaces along fault lines, deformation of rocks or movements of underground water, which create potential differences and ionize atoms as they ascend into the atmosphere, capable of generating luminescence and glows), sustained by the American geologist John S. Kerr, is a consistent theory, and we make it our own for the case of the Peruvian earthquake.
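The auroral mechanism just described (excited atoms relaxing by emitting photons whose color depends on the atom) can be checked numerically with the Planck relation λ = hc/E. The 2.22 eV transition energy used below is the textbook value for atomic oxygen's green auroral line; the constants are standard:

```python
# Photon wavelength from a transition energy: lambda = h*c / E.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def wavelength_nm(energy_ev):
    """Wavelength in nanometers of a photon with the given energy in eV."""
    return H * C / (energy_ev * EV) * 1e9

print(wavelength_nm(2.22))  # ~559 nm: the green commonly seen in auroras
```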

III) Finally, there is the problem of detecting premonitory signs. If we go by what is known, earthquakes occur through the abrupt release of kinetic energy stored at the boundaries of tectonic plates (through sudden slips between them), in the form of destructive waves. In consequence: A) The creation of a device that perceives increments in atmospheric ionization would be a predictor worth testing. B) We could also create a submarine detector of massive increments in kinetic energy (located at the boundaries of the tectonic plates), connected by means of steel cables to the surface above the Nazca plate. C) Finally, now in the realm of canceling the seismic waves of low-intensity earthquakes, we could appeal to the phenomenon of wave interference (if the crest of one wave coincides with the trough of the other, the resulting vibration will be null). For this we would have to build an artificial wave emitter.
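The interference idea in point C can be sketched in a few lines: a wave summed with a copy shifted by half a wavelength (crest on trough) cancels exactly, which is the principle behind active cancellation:

```python
# Destructive interference: a sinusoid plus a pi-shifted copy sums to zero.
import math

def wave(t, freq=1.0, phase=0.0):
    """Unit-amplitude sinusoidal displacement at time t."""
    return math.sin(2 * math.pi * freq * t + phase)

t = 0.3
combined = wave(t) + wave(t, phase=math.pi)  # half-wavelength shift
print(abs(combined) < 1e-12)  # True: the resulting vibration is null
```

In practice this only works where phase, amplitude and arrival time are matched precisely, which is why the text limits the proposal to low-intensity waves.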




Friday, August 17, 2007



I) After examining the effects of the recent earthquake (August 15, 2007), magnitude 7.9, severe, on the Richter scale, which struck 2 days ago off the south-Peruvian coast, with deep repercussions in Pisco, Chincha, Cañete and Ica, in that order (510 dead, 1000 wounded, 16,669 affected houses, 200,000 affected people), I imagined the first thing a hypothetical neighboring country would do were a war to break out over maritime border disputes. On this assumption, if the aggressor state decided to destroy our communications systems immediately (for now in the hands of a foreign private company), it would achieve devastating effects and have 50% of the war won, because uninterrupted communications are the vital blood that runs through the entrails of every country. They allow instantaneous and sequential diagnoses of changing or unexpected scenarios (wars, natural disasters). It happened that the earthquake immediately cut several segments of terrestrial optical fiber (ineffectively protected against contingencies), knocking out fixed and cellular communications. Only after 2 hours could communication be partially restored, thanks to an alternate system of submarine optical fiber cables. However, as JC Lujan notes, blogs did not suffer interruptions, for the simple reason that their system of operation is satellite-based (Google). For some years now, Nigeria, Mexico, Argentina, Brazil and Chile have had satellites to watch over their territories (perhaps also with military aims). Being a seismic country, Peru should acquire at least one satellite (they cost about 11 million dollars, plus a similar amount in annual maintenance), obviously at the expense of the Peruvian state. Seconds after the earthquake, a satellite system would have furnished instantaneous diagnoses (video, graphics, audio, urgent needs, numbers of dead and wounded, number of collapsed houses, etc.) to the governmental authorities, who could then have acted with more judgment, speed and effectiveness.
When, 20 hours later, Alan García and some of his ministers arrived at the area of the disaster, their words betrayed an evident lack of advance information about it.

II) The second point concerns the inefficiency of the Peruvian National Institute of Civil Defence, which should be rebuilt and redesigned as soon as possible. Ever since the earthquake of 1970 it has been known that an urgent, detailed national inventory was needed of badly designed houses and of premises that habitually gather crowds (schools, churches, sports stadiums, etc.). In 1970, the collapse of the stands of a stadium in Chimbote, Peru caused the immediate death of 300 people. Were these preventable deaths not precisely one of the primary targets of the Institute of Civil Defence? Would the suggested national inventory not have identified the dangers of the parish church of San Clemente in Pisco (built with mud, canes and enormous structural deficiencies), whose collapsing dome and sidewall caused 300 deaths? The other issue involves the collapse of 16,619 houses, most built of precarious material (adobe), with added deficiencies in their support systems (columns in unsuitable places), belonging to low-income families. Most national housing plans of recent Peruvian governments have built some 30,000 houses in 5 years and sold apartments priced between 15,000 and 25,000 dollars, unattainable for broad layers of the Peruvian population. In previous articles we have proposed, instead of these apartments, the sale by the state of urbanized lots (100 to 120 m², with electricity, water, sewerage and perfectly delineated streets) at prices between 1500 and 2000 dollars, exactly the amount of money that Alan García offered yesterday to each family affected by this earthquake. After that, families would build their houses little by little, with designs provided by the government or by engineering associations at suitable prices or with low interest rates. With this system, a government could easily build 1 million houses of durable material over 5 years.

III) The earthquake also exposed some deficiencies of the Peruvian National Health System. In reality, the way a country solves its health problems in unexpected emergencies is the true measure of its effectiveness. In the asymmetric Iraq/USA war it is visible (through news footage) that severely wounded soldiers are treated at the very site of the events, for obvious reasons (to prevent avoidable deaths from bleeding), being stabilized in situ by means of blood transfusions or other fluids, instantaneously. In Pisco, some Peruvians died from bleeding, yet Pisco is barely a step from Lima (1 hour by airplane). A pair of field hospitals, sent promptly from Lima and set up in tents, would have solved the most urgent problems (bleeding wounds, head trauma, multiple trauma, fractures, etc.). This matters because some 250 severely wounded people were transferred to Lima (24 to 48 hours later: far too late), implicating Lima as the only center able to solve this type of emergency. IV) There are, of course, more subjects to discuss: rapid removal of rubble to rescue survivors; suitable supplies of medicines; provision of frozen food, blankets and tents; chlorination of well water; physicians; rapid militarization of the zone to prevent looting, other criminal acts and speculation (especially the rise in transport fares); deployment of alternative energy sources; etc.


Thursday, August 16, 2007


Artificial intelligence (AI), is a possibility that does not have to be ignored by governments and common people, in as much as it has been established in Moore’s Law, it will emerge in the next 50 years, with immediate revolutionary consequences in political, social, economic, commercial, technological, scientific and environmental scopes. Although doubt and skepticism exists on the matter, the real thing is that machines will exceed us in processing capacity. If you don’t believe that look the enormous speed and capacity of processing of Deep Blue, already almost invincible in matches of elite chess. According to one seminal article of Nick Bostrom (whose main ideas are reproduced here), AI creation needs 3 basic elements: hardware, software and mechanisms of input/output. I) Input/output technology already exists (video cameras, audio,loudspeakers, robotic arms, etc), able to interact and to adapt to environment. With respect to the hardware it is clear that the speed (the limitant factor but), is but important that memory. The estimates of human brain processing capacity oscillate between: 100 million to 100 billions of MIPS (1 MIPS = 1 million instruction per second). Deep Blue, processes at the moment around 10 million MIPS, which implies that we are closely to reach the requirements of hardware necessary to create AI, similar to the human one. Extrapolating the historical growth of artificial hardware (Moore’s Law: exponential growth of computational capacity), we will conclude that the capacity of computacional processing is duplicated incessantly every 18 months, being expected that similar human brain processing capacity will be reached around the 2019. Around the 2050 the machines will have exceeded the human brain processing capacity II) The problem of software is but difficult to analyze in rigorous form, although we know that it can be solved. First to do is to solve how the human brain works and to copy it. 
We already have some notion of how the brain's computational mechanisms work, including early sensory processing. We also have good computational models of the primary visual cortex, and work continues on the more complex stages of visual cognition.
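The hardware extrapolation above can be sketched numerically. This is a rough illustration using only the figures quoted in the text (about 10 million MIPS for Deep Blue in 2007, 100 million to 100 billion MIPS for the human brain, an 18-month doubling period); the function name and the exact years are my own arithmetic, not claims from Bostrom's article.

```python
import math

def years_to_reach(current_mips, target_mips, doubling_years=1.5):
    """Years until capacity grows from current to target,
    doubling every `doubling_years` years (Moore's Law)."""
    doublings = math.log2(target_mips / current_mips)
    return doublings * doubling_years

start_year = 2007
# Low-end human-brain estimate (1e8 MIPS) vs. high-end (1e11 MIPS),
# starting from Deep Blue's ~1e7 MIPS:
low_estimate = years_to_reach(1e7, 1e8)
high_estimate = years_to_reach(1e7, 1e11)
print(round(start_year + low_estimate))   # around 2012
print(round(start_year + high_estimate))  # around 2027
```

Under these quoted figures, the low-end estimate is crossed within about five years and even the high-end estimate within about twenty, which is roughly consistent with the 2019 date the text cites.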

On another front, we already understand the basic learning algorithms (and how experience modifies them) that govern the actions of synapses. The general structure of neural networks is being mapped as more is learned about neuronal interconnectivity and the interrelations between different cortical areas. Although we are far from understanding the highest levels of thought, we understand the workings of some of their individual components. Silicon retinas and cochleas already exist, doing the same things as their biological counterparts. However, simulating a whole brain requires enormous amounts of computational power, hoped to be available within two decades. The desirable AI is one that can be educated and can learn from experience. Finally, molecular nanotechnology is expected to manufacture an ample range of structures with atomic precision, and with it an unprecedented mastery over the structure of matter. With nanotechnology, it is hoped that frozen or vitrified human brains can be analyzed, registering the position of each neuron, synapse and other relevant parameters (a project analogous to the Human Genome Project). The resulting map, equivalent to a 3D scan, would serve to run virtual simulations of human brains on advanced computers. From the previous context, 4 consequences follow: 1) A software-based AI can be copied like any other computer program, at costs near zero, so a great number of copies can be expected to exist quickly, amplifying the initial impact. 2) Human-level AI would quickly lead to the creation of other machines that exceed humans in all intellectual activities. If we are guided by Moore's Law, after 2050 humans would be incapable of competing intellectually with AIs. 3) Technological progress in other fields will accelerate with the arrival of AI: scientific and technological research, and even philosophical thought, will be conducted more effectively by machines. The machines themselves will design the following generation of AIs, until a singularity is reached: rapid technological progress driven by genuine superintelligences (intellects smarter than any human in every field, including scientific creativity, general wisdom and social skills). 4) Unlike other technologies, AIs will not be simple tools; they will be potentially independent agents, able to have their own initiatives and plans.



Wednesday, August 15, 2007


Video explanation: Transhumanist Nick Bostrom (Sweden, 1973-) examines the future of humankind and asks whether we can -- or should -- alter our fundamental nature to solve our intrinsic problems. He asks us to reconsider 3 "inevitable" features of life: 1) death, 2) risk of extinction, and 3) our inability to live consistently full lives. If we could, would we correct these flaws? Can we humans alter our basic nature in ways that will enhance our experience of the world? Do we want to? Whatever your answer, Bostrom's engaging talk will force you to consider it carefully.
According to Nick Bostrom, philosopher at the University of Oxford and Director of the Future of Humanity Institute, we could be living in a virtual universe designed and manipulated by post-humans (very advanced humans). Support for this idea comes from the hypothesis that conscious mental states can be generated in a wide variety of physical substrates (the substrate-independence thesis), as long as suitable structures and computational processes are established. While in "The Matrix" most humans and the inhabited external world are illusions created in their brains (while their bodies lie in tanks full of liquid), the virtual beings conceived by Bostrom lack fleshy bodies altogether; their minds are simply part of a network of computing circuitry. Unlike in "The Matrix," in Bostrom's virtual worlds there are no brains in tanks to unplug in order to see the real physical world; there is only the inexorable math and logic of the simulation. Bostrom assumes that technological advances will produce future computers (quantum, or built from nuclear matter or plasma) with processing capacity superior to all the brains of the real world, within 50 years. Thanks to this, some post-humans would create "ancestor simulations": scenes of their evolutionary history inhabited by virtual worlds and virtual people with fully developed nervous systems. In those worlds, real and virtual people would be indistinguishable.

For these worlds to be viable, post-humans would have to be willing to create them, for research or for entertainment. The hypothesis faces some problems, however: 1) it may never be realized, because humans fail to acquire sufficient technology or never reach the post-human stage (extinction or self-destruction); 2) post-humans may decide not to run such simulations, opting instead for mechanisms of direct stimulation of their pleasure centers; 3) although we do not realize it, we may already be living in a computer simulation (20% probability or more, according to Bostrom). If these worlds were similar to those built in Second Life, SimCity or World of Warcraft, it would be possible to control historical events. Then the question about our present real world -- why does God allow so much evil? -- would be easy to answer: because peace is boring, and because in a virtual world personal behavior does not matter much, since it is not real. David J. Chalmers, philosopher at the Australian National University, says that Bostrom's hypothesis is not just another skeptical position but a different metaphysical explanation of our world. Robin Hanson, an economist at George Mason University, says that the next form of simulation would involve advanced forms of intelligence.
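The "20% or more" figure above comes from the formal version of the argument in Bostrom's 2003 simulation paper. As a hedged sketch (the formula and its symbols are taken from that paper, not from this post, and the numeric inputs below are arbitrary illustrations, not Bostrom's estimates):

```python
def simulated_fraction(f_p, n_sims):
    """Fraction of all human-like minds that are simulated:
    f_sim = (f_p * N) / (f_p * N + 1), where
    f_p:    fraction of civilizations that reach a post-human stage
    n_sims: average number of ancestor-simulations each such
            civilization runs."""
    return (f_p * n_sims) / (f_p * n_sims + 1)

# Even a small chance of reaching the post-human stage, combined with
# many simulations per post-human civilization, makes the simulated
# fraction approach 1:
print(simulated_fraction(0.01, 1000))  # about 0.909
```

The design point of the formula is that f_sim is close to 1 unless f_p is nearly zero or post-humans run almost no simulations, which is exactly the trilemma the paragraph above enumerates.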