61 of 68 people found the following review helpful
Stephen E. Robbins
- Published on Amazon.com
Format: Kindle Edition
This book is a great antidote. It is not the best antidote, for it still lacks vision as to the actual depth of the problems being faced, but there is enough vision and awareness, particularly of the true state of affairs in neuroscience, to pack this antidote-pill with plenty of power nevertheless. The ill for which the pill is the antidote is the wildly optimistic speculation and over-worry of the AI community: the supposedly imminent equivalence of AIs to human intelligence once we have (again, very soon) simulated the human brain, and the anxiety over robot takeovers of the human race and over the robot/AI's lack of "values" as opposed to its soon-to-be massively amped-up intelligence. The list is long: Kaku (The Future of the Mind), Barrat (Our Final Invention), Armstrong (Smarter Than Us), Muehlhauser (Facing the Intelligence Explosion), Kurzweil (How to Create a Mind), and many more. This book makes them look, well, questionable at best.
Marcus and Freeman, the editors/contributors, set the tone in the intro: a brain with over 85 billion neurons, with perhaps 1,000 different neuron types, each with different physical and electrical characteristics, each with functions of which we know nothing. Overarching this already vast scope of discovery: "...we have yet to discover many of the organizing principles that govern all that complexity...we are still shaky on fundamentals like how the brain stores memories..." And worse: "...all agree that the most foundational properties of neural computation have yet to be discovered." On deck are huge initiatives - the Obama BRAIN Initiative, the European Human Brain Project, and more - and new techniques and methods - optical tracing in neurons, genetic techniques, the ability to record thousands of neurons simultaneously, and more. Many of the contributors discuss these new initiatives and technologies, the rate of progress they envision, the obstacles, the limitations. The others are focused on the deep and massive problem that the initiatives and new technologies both engender and face: confronted with an enormous mass of neural data on connections, firings, frequencies, response strengths, etc. - perhaps on the order of zettabytes for even short recordings of brain function - how does one discover within this data the organizing principles governing the brain? How, as Shenoy notes, do we avoid "drowning in the data?"
The problem is enormous. As one contributor illustrates, it is like trying to understand how a laptop computer functions by tracing its connections and modeling these over time - when we are not even aware of the existence of something called software! Shenoy's reliance on "levels of abstraction" for analysis (where, for a computer, software is one such "level") sounds nice, but in the case of the brain the role of "software" falls first to a correct understanding of, or theory of, perception, i.e., an understanding of the origin of the image of the external world (our experience), including its "qualia." This - stated in terms of the origin of our image of the external world - is the more correct statement of the hard problem; Chalmers' version, stated only in terms of the origin of "qualia," has been misleading. It is foundational; it is a current mystery. Yet without an understanding of perception (experience), we cannot begin to have a theory of memory, i.e., of the "storage" of this experience - IF experience is even stored in the brain - and this theoretical chasm is why we are "still shaky" on this fundamental, namely the storing of memories. This understanding is a must-have to guide our analysis of the mass of neural data to come. It in turn cascades into how cognition, thought, and language work (as all are based upon this experience and its retrieval) and beyond. The lack of this appreciation hides in areas of the book. Eliasmith pictures his "Spaun" neural model of the brain as arrow-connected boxes - visual input, information encoding, transform calculations, reward evaluation, action selection, action output. Nowhere is there a clue as to how the goings-on in the boxes become my image of the external world - watching my hand stirring a cup of coffee in the kitchen - yes, my experience.
And, endemic to neuroscience, nowhere is there an acknowledgement that in perceiving such an event, the neural mass is in fact responding to a mass of environmental information involving invariance laws: radial flow fields over the coffee's surface, adiabatic invariants (a ratio of the energy of an oscillation to its frequency) in the periodic motion of the spoon existing over haptic flows, texture gradients supporting size constancy, inertial tensors defining the wielding of the spoon, flow fields defining even the cup's form, and on. All of this comprises a prior level of theoretical effort (still vastly incomplete) essential to making any sense of the neural data, and all of it remains, as far as I have ever been able to discover in the literature, irrelevant to the neuroscientists.
Describing the depth of this theoretical problem (the real stand-in for the "software" problem) is a weakness of the book, but enough hints are there. Freeman notes that the function of V2 (a visual area), despite massive data analysis, has resisted understanding for years (along with our limited grasp of V3, V4, V5, etc.). Only by making a sharp theoretical guess - note, theoretical - has some partial progress on V2 recently been made. The more complex the system and the more massive the data, the more theory must drive the data analysis. And we are in a crisis of theory. Hints of the symptoms surface in the discussion above, where it is noted that great supposed progress was made with Hubel and Wiesel's 1959 discovery of cells in V1 sensitive to the orientation and direction of lines, seeming to provide the basis for the parsing of a visual scene - stirring the coffee - but the expectation of finding the logical extensions of such processing in higher visual areas has never come to fruition. In truth, perception theory itself has recognized that elements such as Hubel and Wiesel's cannot be the basis for scene recognition (see "On Time, Memory and Dynamic Form," Consciousness and Cognition, 2004); i.e., one of the hitherto very basic neural assumptions about our perception of the external world is itself, well, shaky. But this only brings us back to the magnitude of the theoretical problem that precedes the analysis of a mass of neural data.
In all, however, this is a book of great interest: thought-provoking and very informative on neuroscience today, on discoveries made, and on action to come. Marcus' unbridled trashing of the current cognitive-science neuro-favorite, namely the connectionist network (on which the mythological hopes of current AI are based), is a refreshing change. Something else, some other form of computation, as he argues, is needed. This is the great question, at least broached in the book to some degree: what form of computation is the brain actually employing? It is not what we think today. (Turing himself allowed for a "broad computation" which is not the form of computation embodied in computers, connectionist networks (the same thing), or Turing Machines.) Just to glimpse how different things might be: what if, as Bergson presciently envisioned (Matter and Memory, 1896), the brain's dynamics actually supports a form of modulated reconstructive wave passing through a universal, holographic field, specifying a subset of the field as an image of the external environment? Brain-computation would indeed be quite different. The final chapter, which takes us on a look ahead to 2064, is interesting in its assessment of the hurdles. The breakthrough waits until 2064; at least we are beyond Kurzweil's "Singularity" of 2045. But no, the hard problem will not simply "resolve" like "what is life," as they propose. It is, again, foundational. Even in the candor of this final vision, then, I think the work that remains is deeply underestimated.