Thursday, 2 May 2013

South Park and Superintelligent Machines

This is a link to a potentially interesting, but not even wrong (because it is ignorant of capital re-switching problems), paper (pdf) on 'the microeconomics of cognitive returns' to self-improving machines which thus become super-intelligent (FOOM).
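For anyone unfamiliar with re-switching- a minimal sketch in Python, using the numbers from Samuelson's classic 1966 'A Summing Up' example (my illustration; nothing here comes from the linked paper). Technique A uses 7 units of labour two periods before output; Technique B uses 2 units three periods before and 6 units one period before. Which technique is cheaper does not vary monotonically with the interest rate:

```python
# Samuelson's 1966 re-switching example (standard textbook numbers,
# not anything from the linked paper). Both techniques yield 1 unit
# of output; cost is labour compounded forward at interest rate r.

def cost_A(r):
    # 7 units of labour, 2 periods before output
    return 7 * (1 + r) ** 2

def cost_B(r):
    # 2 units 3 periods before, plus 6 units 1 period before
    return 2 * (1 + r) ** 3 + 6 * (1 + r)

for r in (0.25, 0.75, 1.25):
    cheaper = "A" if cost_A(r) < cost_B(r) else "B"
    print(f"r = {r:.0%}: technique {cheaper} is cheaper")
```

A is cheaper below r = 50%, B between 50% and 100%, then A is cheaper again above 100%- it 'switches back'. That non-monotonicity is exactly what wrecks naive stories about smooth returns to (cognitive) capital.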

What philosophical problems does such speculation give rise to?

Suppose there is a single A.I. with a 'Devote x% of resources to Smartening myself' directive. Suppose further that the A.I. is already operating with David Lewis's 'elite eligible' ways of carving up the World at its joints- i.e. it is climbing the right hill or, to put it another way, is tackling a problem with Bellman optimal substructure. Presumably, the Self-Smartening module faces a race-hazard type problem in deciding whether it is smarter to devote resources to evaluating returns to smartness or to just release resources back (re-switching) to existing operations. I suppose, as part of its evolved glitch avoidance, it already internally breeds its own heuristics for Karnaugh-map type pattern recognition, and this would extend to spotting and side-stepping NP-complete decision problems. However, if NP-hard problems are like predators, there has to be a heuristic to stop the A.I. avoiding them to the extent of roaming uninteresting spaces and breeding only 'Spiegelman monster'-type trivial or degenerate results. In other words, the A.I.'s 'smarten yourself' module is now doing just enough dynamic programming to justify its upkeep but not so much as to endanger its own survival. At this point, it is enough for there to be some exogenous shock or random discontinuity in the morphology of the fitness landscape for (as a corollary of dynamical insufficiency under Price's equation) some sort of sexual dimorphism and sexual selection to start taking place within the A.I., with speciation events and so on. However, this opens an exploit for systematic manipulation by lazy, good-for-nothing parasites- i.e. humans- so FOOM cashes out as... oh fuck, it's the episode of South Park with the cat saying 'Oh Long Johnson'.
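To make the trade-off between 'evaluate returns to smartness' and 'release resources back to operations' concrete- a toy Bellman recursion (every functional form and number here is my own assumption; none of it comes from the linked paper):

```python
# Toy model: each period the agent picks a fraction x of resources to
# devote to smartening; the rest is released back to operations as
# output. All parameters below are invented for illustration only.
from functools import lru_cache

HORIZON = 10    # assumed planning horizon
GROWTH = 0.15   # assumed per-period return on resources devoted to smartening

@lru_cache(maxsize=None)
def value(t, smartness):
    """Bellman recursion: optimal substructure means the best policy from
    period t onward depends only on the state (t, smartness)."""
    if t == HORIZON:
        return 0.0
    best = float("-inf")
    for x in (0.0, 0.5, 1.0):  # candidate 'Devote x% to Smartening' choices
        operations_output = (1 - x) * smartness           # re-switched to operations
        smarter = round(smartness * (1 + GROWTH * x), 6)  # returns to smartening
        best = max(best, operations_output + value(t + 1, smarter))
    return best

print(f"optimal total output over {HORIZON} periods: {value(0, 1.0):.3f}")
```

The point of the optimal-substructure assumption is that the best policy from any state onward is independent of how the state was reached- which is what lets the memoised recursion work at all, and what FOOM arguments quietly presuppose.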
So Beenakker's solution to Hempel's dilemma (http://en.wikipedia.org/wiki/Hempel's_dilemma) was wrong: the boundary between physics and metaphysics is NOT 'the boundary between what can and what cannot be computed in the age of the universe', because South Park resolves every possible philosophical puzzle in the space of- what?- well, the current upper limit is three episodes.

3 comments:

Rajiv said...

good article here- http://dspace.mit.edu/bitstream/handle/1721.1/41178/AI_WP_293.pdf?sequence=4

Anonymous said...

'Presumably, the Self-Smartening module faces a race-hazard type problem in deciding whether it is smarter to devote resources to evaluating returns to smartness or to just release resources back (re-switching) to existing operations'
This is incoherent. 'Evaluating returns to smartness' contains its own consensus term, and in digital logic a consensus term can eliminate a race hazard.
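To unpack the digital-logic point- a minimal sketch of the standard textbook case (my example, not the commenter's): F = A·B + A'·C has a static-1 hazard when B = C = 1 and A falls, because the inverter's delay briefly zeroes both product terms; adding the consensus term B·C covers the transition:

```python
# Static-1 hazard in F = A*B + A'*C, and its removal by the consensus
# term B*C (standard textbook digital-logic example).

def f_hazardous(a, a_inv_delayed, b, c):
    # a_inv_delayed models the inverter's stale output during the glitch window
    return (a and b) or (a_inv_delayed and c)

def f_with_consensus(a, a_inv_delayed, b, c):
    return (a and b) or (a_inv_delayed and c) or (b and c)

b = c = True
print(f_hazardous(True, False, b, c))        # True: before A falls (A=1, A'=0)
print(f_hazardous(False, False, b, c))       # False: the glitch (A fell, A' stale)
print(f_with_consensus(False, False, b, c))  # True: consensus term holds output high
print(f_hazardous(False, True, b, c))        # True: after the inverter settles
```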

windwheel said...

Can doesn't mean will.