The structure of this essay will be as follows. The history and multiple meanings of the terms "Singularity" and "Intelligence Explosion" will be charted from their origins in the work of Hungarian polymath John von Neumann (1950), the mathematician I.J. Good (1965), and computer scientist Vernor Vinge (1983) to their recent eruption into popular culture: "2045: The Year Man Becomes Immortal" (Time magazine, Feb. 2011). Moore's Law and Kurzweil's "Law of Accelerating Returns" are reviewed. An ecologically naturalistic account of intelligence will be constructed. Will the pleasure-pain axis of carbon-based life be superseded by the formal "utility functions" of non-biological machines? What is the nature of recursive self-improvement in the absence of a self to be improved? A distinction will be drawn between "mind-blind", autistic intelligence, as measured by IQ tests, and the perspective-taking, mind-reading, "Machiavellian" intelligence that enabled one species of social primate to dominate life on Earth. Is "g" real or a statistical artefact?

Next, the contrasting empirical claims and background assumptions of two pivotal figures in the modern Singularity movement, Ray Kurzweil ("The Singularity Is Near") and Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence (SIAI), will be critically examined. Kurzweil foresees an ecosystem of enhanced biological humans, human-machine hybrids, digital "uploads" and super-AGIs. By contrast, Yudkowsky draws on the analogy of a chain of nuclear fissions gone critical to prophesy an uncontrolled positive-feedback cycle that goes "FOOM", leading to a singleton super-AGI. Most likely, this singleton super-AGI will be non-friendly to humans. The Singularity Institute exists to avert this outcome and promote "friendly AI". After reviewing the chain of inferences supporting this scenario, it will be concluded that the risks of primitive human malevolence trump machine indifference. The greatest underlying source of global catastrophic and existential risk this century lies not in super-AGI, but rather in the competitive dominance behaviour of human male primates using "narrow AI" to pursue what evolution biologically "designed" male hunter-warriors to do.

Fortunately, futurology based on extrapolation has pitfalls. One metric of progress in AI remains stubbornly unchanged. Despite the exponential growth of transistors on a microchip, the soaring clock speed of microprocessors, the growth in computing power measured in MIPS, the dramatically falling costs of manufacturing transistors, and the plunging price of dynamic RAM, any chart plotting the growth rate in digital sentience shows neither exponential growth nor linear growth, but no progress at all. On some fairly modest philosophical assumptions, digital computers were not subjects of experience in 1946 (cf. ENIAC), nor are they conscious subjects in 2011 (cf. "Watson"), nor do researchers know how any kind of sentience might be "programmed" in future. On theoretical grounds, it will be argued that no robot or digital computer with a von Neumann architecture, or using massively parallel "neurally inspired" classical computation, will be non-trivially conscious, nor run software that is non-trivially conscious, nor emulate the computational power of the human mind/brain. Some AI researchers dismiss our teeming diversity of qualia - the philosopher's term of art for the myriad "raw feels" of consciousness - as epiphenomena and hence functionally irrelevant to intelligence. Yet since epiphenomena are by definition causally impotent, presumably they lack the causal efficacy to inspire treatises on their existence. What can they do? It will be argued that if phenomenology is the hallmark of the mental, then talk of "digital minds" is anthropomorphic. An explanation will be offered of Moravec's Paradox, i.e. the observation that high-level reasoning humans find hard requires comparatively little computation, whereas the sensorimotor skills humans find effortless demand enormous computational resources.
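For concreteness, the exponential hardware trend the essay contrasts with flat "digital sentience" amounts to simple doubling arithmetic. The sketch below is purely illustrative: the starting count (Intel's 4004, roughly 2,300 transistors in 1971) and the two-year doubling period are standard textbook figures, not data from this essay.

```python
# Illustrative Moore's-Law-style extrapolation: transistor counts
# doubling roughly every two years from an assumed 1971 baseline.
def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Project a transistor count assuming a fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Each 20-year span multiplies the count by 2**10 = 1024.
print(round(transistors(1991)))  # 2355200  (~2.4 million)
print(round(transistors(2011)))  # 2411724800  (~2.4 billion)

# The essay's point: there is no analogous curve for "digital sentience";
# plotted over the same decades, that metric stays flat at zero.
```

The contrast the essay draws is between this kind of smooth exponential and a sentience metric that, on its argument, shows no growth of any order at all.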