Thursday 20 June 2024

David Deutsch's ultracrepidarian artificial morality.

David Deutsch writes in 'Possible Minds':

For most of our species’ history, our ancestors were barely people.

Nonsense! It is likely that we would recognize Neanderthals and Denisovans, if any such survived, as people. On the other hand, there may still be bigots who wouldn't recognize Deutsch as a 'person' because of his ancestral religion.  

This was not due to any inadequacy in their brains. On the contrary, even before the emergence of our anatomically modern human sub-species, they were making things like clothes and campfires, using knowledge that was not in their genes. It was created in their brains by thinking, and preserved by individuals in each generation imitating their elders. Moreover, this must have been knowledge in the sense of understanding, because it is impossible to imitate novel complex behaviors like those without understanding what the component behaviors are for.

Hunters understand their hunting dogs, and vice versa, well enough. Now, it is true that there are people who claim to do a lot of thinking- which, they say, is how they came to understand why Hitler was right and Deutsch's ancestors weren't really people- but we tend not to trust such claims. A guy who does sensible things understands his business. A nutter like Hitler understood shit.

 Such knowledgeable imitation depends on successfully guessing explanations, whether verbal or not, of what the other person is trying to achieve and how each of his actions contributes to that—for instance, when he cuts a groove in some wood, gathers dry kindling to put in it, and so on.

But this sort of 'knowledgeable imitation' can be done with respect to machines or animals. I take notice of the manner in which the pussy cat or puppy dog endears itself to the baby and imitate such actions to ingratiate myself with the little fellow. I also fondly believe that great Kung Fu masters in past ages devised new fighting styles by observing drunken monkeys and cranes with a crippled leg.  

The complex cultural knowledge that this form of imitation permitted must have been extraordinarily useful.

But useful techniques spread without any baggage of 'complex cultural knowledge'. Kids in North London would imitate Kung Fu moves they had seen on TV. But this did not mean they had a profound knowledge of Taoism.  

It drove rapid evolution of anatomical changes, such as increased memory capacity and more gracile (less robust) skeletons, appropriate to an ever more technology-dependent lifestyle.

People are a bit like beavers in that respect. Heterosexual men of my age made a very thorough study of center-fold beaver shots.  

No nonhuman ape today has this ability to imitate novel complex behaviors.

Because we either killed or fucked any ape which showed any such tendency and thus competed with us for territory.

Nor does any present-day artificial intelligence.

There was no fucking artificial intelligence when I was young.  

But our pre-sapiens ancestors did. Any ability based on guessing must include means of correcting one’s guesses, since most guesses will be wrong at first.

This is also true of abilities not based on guessing. You have to make corrections as you go along even if you are wiping your ass and have been wiping your own ass for many, many years.

(There are always many more ways of being wrong than right.)

One could equally say that there is only one way to be wrong- viz. to fail in your objective. There are many ways to be right. That's how come two theories with very different structural causal models can be 'observationally equivalent'. It is an open question whether such theories are equivalent. A day may come when there is a crucial experiment which distinguishes between them. Still, one theory may be better for some purposes while the other may be preferable for other purposes.  

Bayesian updating is inadequate, because it cannot generate novel guesses about the purpose of an action, only fine-tune—or, at best, choose among—existing ones.

Not really. One could always deepen the decision theory by using the maximal uncertainty (maximum entropy) principle to accord equal probability to the axioms of 'observationally equivalent' systems. Thus, we might say: 'Currently, whether we use a pure "disutility minimizing" principle or a deontological "harmonious construction" principle, we get the same predicted outcome. This does not mean the two are identical, though they could be under a particular interpretation. We don't know which purpose was served by the action. However, there may be some future action where the predictions diverge. At that point we can do Bayesian updating re. the purpose of the action.' (A toy version of this updating is sketched after the next paragraph.)

Indeed, this happens all the time in politics. A politician wishes to ride two or more horses at once so as to maximize his voter appeal. We say 'let us wait and see what this guy does when it comes to voting on such and such wedge issue. Then we'll know what his true agenda is.' 
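
Here is that toy version in Python (my construction, not Deutsch's): two 'observationally equivalent' purposes get equal priors, every shared prediction leaves the posterior untouched, and only the wedge issue moves it.

```python
# Two hypotheses about the 'purpose' of an action, equal priors.
def update(prior_a, lik_a, lik_b):
    """Posterior of hypothesis A after one observation."""
    prior_b = 1 - prior_a
    return prior_a * lik_a / (prior_a * lik_a + prior_b * lik_b)

p = 0.5                            # maximal-uncertainty prior
for lik_a, lik_b in [(0.9, 0.9), (0.7, 0.7)]:
    p = update(p, lik_a, lik_b)    # identical predictions: posterior stays at 0.5
print(p)                           # 0.5
p = update(p, 0.9, 0.2)            # the wedge issue: predictions finally diverge
print(round(p, 3))                 # 0.818: only now does updating tell us anything
```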

Creativity is needed.

But creativity doesn't mean an inspired rapture. It may mean an algorithmic exploration of a decision or configuration space. It is an open question whether the 'choice sequence' of a 'creating subject' is 'lawless'. What appears 'inspired' or 'creative' may just be the application of some idiographic heuristic of no great interest in itself. 

Consider very long computer generated mathematical proofs. We may have the intuition 'there must be a more elegant way of doing this!'. But we may be wrong for a reason which is itself quite interesting.  
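
To see 'creativity' as mere algorithmic exploration, a minimal sketch (my example): every 3x3 magic square drops out of a dumb enumeration of the configuration space, no inspired rapture required.

```python
# 'Creativity' as exhaustive search: enumerate all 3x3 arrangements of
# 1..9 and keep those whose rows, columns and diagonals sum to 15.
from itertools import permutations

def magic_squares():
    for p in permutations(range(1, 10)):
        rows = [p[0:3], p[3:6], p[6:9]]
        cols = [p[0::3], p[1::3], p[2::3]]
        diags = [(p[0], p[4], p[8]), (p[2], p[4], p[6])]
        if all(sum(line) == 15 for line in rows + cols + diags):
            yield rows

for square in magic_squares():
    print(square)   # eight squares, all rotations/reflections of one another
```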

As the philosopher Karl Popper explained, creative criticism, interleaved with creative conjecture, is how humans learn one another’s behaviors, including language, and extract meaning from one another’s utterances.

Popper was wrong. Cognition is costly. We may have something like 'mirror neurons'. Mimetics is cheaper than Mathematics.  

Deutsch says in a footnote: '“Aping” (imitating certain behaviors without understanding) uses inborn hacks such as the mirror-neuron system. But behaviors imitated that way are drastically limited in complexity.' This isn't the case. Aped behaviors have greater Kolmogorov complexity. Learnt behavior is likely to have much less, because anything learnt can be simplified or given a shorter description. I suppose what Deutsch means is that behavior which is based on a structural causal model is more plastic and thus has greater potential complexity. Thus, a person who is good at imitating Barack Obama might impersonate him well enough for brief periods, but one who has studied Obama's thinking and style of rhetoric may be able to give a plausible depiction of Obama reacting to a novel event in a cognitively complex manner- e.g. greeting visitors from a distant galaxy by quoting Leviticus and Deuteronomy on welcoming the stranger and then farting vigorously. Well, that's what he would have done if he had been genuinely Black. 
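
A crude sketch of the compressibility point, using zlib output length as a poor man's stand-in for Kolmogorov complexity (the example is mine, not Deutsch's): behavior generated by a short rule compresses to almost nothing; a rote, pattern-free recording of the same length does not.

```python
import random
import zlib

# Behaviour with a short generating rule versus a rote recording with none.
rule_based = ("left right left right " * 250).encode()
rote = bytes(random.randrange(256) for _ in range(len(rule_based)))

print(len(zlib.compress(rule_based)))   # a few dozen bytes: the rule is short
print(len(zlib.compress(rote)))         # about as long as the input: no shorter description found
```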

Those are also the processes by which all new knowledge is created:

Fuck off! New knowledge is created when horrible things happen to nice peeps and we suddenly realize maybe it's a bad idea to mix Martinis while driving down the motorway.

They are how we innovate, make progress, and create abstract understanding for its own sake.

Nope. We only make progress when there is an economic reward or other type of 'reinforcement' for innovation. It takes a lot of capital- i.e. hard work- to turn cool ideas into useful tech.  

This is human-level intelligence: thinking. It is also, or should be, the property we seek in artificial general intelligence (AGI).

Fuck that. Like human intelligence, artificial intelligence will only get rewarded if it does something useful which peeps will pay for.  

Here I’ll reserve the term “thinking” for processes that can create understanding (explanatory knowledge).

Anything at all can create 'understanding'. Watching the sun set behind the sea, hearing a frog jump into an old pond, the prospect of getting a b.j from a disgruntled wife who thinks hubby doesn't 'understand' her- anything at all. What creates puzzlement is the sort of stuff mathematicians come up with. Why should we accept such and such axiom? What happens if we assume the opposite? Physics made progress when it told Aristotelian 'understanding' to go fuck itself. But there was an economic reward for this.

Popper’s argument implies that all thinking entities—human or not, biological or artificial—must create such knowledge in fundamentally the same way.

He was wrong. Actual scientists didn't follow his stupid 'Scientific Method'. Anyway, if science doesn't incarnate in useful tech, it doesn't get paid and thus soon retreats into theology or alchemy or some such fraud.  

Hence understanding any of those entities requires traditionally human concepts such as culture, creativity, disobedience, and morality— which justifies using the uniform term people to refer to all of them.

Humans think most other humans are uncultured, uncreative, slavishly obedient to stupid despots, immoral etc.  

Misconceptions about human thinking and human origins are causing corresponding misconceptions about AGI and how it might be created.

Nope. Stupidity is what causes misconceptions unless this causes Stupidity to lose its fucking job and end up not being able to buy itself beer and pizza. If Deutsch spent an hour talking to me, he would be swiftly disillusioned about the possibility of general human intelligence. 

For example, it is generally assumed that the evolutionary pressure that produced modern humans was provided by the benefits of having an ever greater ability to innovate.

No. We think evolutionary pressure would produce some species with a greater ability to fucking kill other apex predators and take over their terrain.  

But if that were so, there would have been rapid progress as soon as thinkers existed,

thinking does not matter. Capital does. That's what turns ideas into tech.  

just as we hope will happen when we create artificial ones. If thinking had been commonly used for anything other than imitating, it would also have been used for innovation, even if only by accident, and innovation would have created opportunities for further innovation, and so on exponentially. But instead, there were hundreds of thousands of years of near stasis.

Because a subsistence economy can't generate a lot of capital. True, a particular sept which harnesses new tech- e.g. domesticating a particular animal or working out how to produce iron and steel- can expand territorially in an astonishingly short period of time.  

Progress happened only on timescales much longer than people’s lifetimes, so in a typical generation no one benefited from any progress.

No. If you found a better way to catch fish, you benefitted immediately by having lots more fish to eat.  

Therefore, the benefits of the ability to innovate can have exerted little or no evolutionary pressure during the biological evolution of the human brain.

Innovation was involved in finding novel ways to escape from predators or, better yet, kill and eat the fuckers. I am under much less pressure to innovate than my hunter gatherer ancestors.  

That evolution was driven by the benefits of preserving cultural knowledge.

Cultural knowledge was a 'costly signal' or shibboleth which promoted 'separating equilibria'. Those with 'costly signals' tended to kill or assimilate 'cheap talk' members of pooling equilibria.  
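
A toy Spence-style check, with made-up numbers, of when a costly signal 'separates' types rather than letting cheap talk pool them:

```python
def separating(w_signal, w_none, cost_high, cost_low):
    """True if only the 'able' type finds the costly signal worth acquiring."""
    high_signals = w_signal - cost_high >= w_none   # able type prefers to signal
    low_abstains = w_none >= w_signal - cost_low    # unable type prefers not to
    return high_signals and low_abstains

print(separating(w_signal=10, w_none=6, cost_high=2, cost_low=7))  # True: separation
print(separating(w_signal=10, w_none=6, cost_high=2, cost_low=3))  # False: pooling
```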

Benefits to the genes, that is.

Genes benefit by more of their own proliferating no matter which bodies or which species they lodge in. This is Dawkins's 'extended phenotype'.

Culture, in that era, was a very mixed blessing to individual people.

Just as it is now. That is why Rishi Sunak is not dancing bhangra and prattling away in Punjabi. Indian parents pay a lot of money so their kids will have less Indian culture.

Their cultural knowledge was indeed good enough to enable them to outclass all other large organisms (they rapidly became the top predator, etc.), even though it was still extremely crude and full of dangerous errors. But culture consists of transmissible information—memes— and meme evolution, like gene evolution, tends to favor high-fidelity transmission.

Nope. If the thing can be done more cheaply, the cheaper version wins. That's why Londoners no longer talk like characters out of Shakespeare.  

And high-fidelity meme transmission necessarily entails the suppression of attempted progress. So it would be a mistake to imagine an idyllic society of hunter-gatherers, learning at the feet of their elders to recite the tribal lore by heart, being content despite their lives of suffering and grueling labor and despite expecting to die young and in agony of some nightmarish disease or parasite.

The problem faced by hunter-gatherers was that unmarried young males had very low productivity. The agricultural and pastoral revolutions raised their productivity. That's one reason they were able to displace hunter-gatherers. The industrial revolution, as Adam Smith points out, was even better because little kids could start earning money going down coal mines or up chimneys. Smith was very sad that Scotland was less able than England, at that time, to exploit the fuck out of little kiddies. I too got very frustrated when the baby showed no inclination to do my tax returns for me. Instead he bit my nose and threw away my glasses. Not till every baby is born with proper qualifications in Cost and Management Accountancy can Brexit be a success.

Because, even if they could conceive of nothing better than such a life, those torments were the least of their troubles. For suppressing innovation in human minds (without killing them) is a trick that can be achieved only by human action, and it is an ugly business.

No. Mummies understand that giving kisses and singing lullabies to baby suppresses its innovative attempts to burn the fucking house down.

This has to be seen in perspective. In the civilization of the West today, we are shocked by the depravity of, for instance, parents who torture and murder their children for not faithfully enacting cultural norms.

We applaud them for torturing their children by making them study STEM subjects rather than simply knife each other and deal drugs the way God intended.  

And even more by societies and subcultures where that is commonplace and considered honorable.

We'd rather have neighbors who kill their kids if they make a habit of knifing passers-by.  

And by dictatorships and totalitarian states that persecute and murder entire harmless populations for behaving differently.

The proper thing to do is to get them into low paid menial jobs. Why kill when you can exploit?  

We are ashamed of our own recent past, in which it was honorable to beat children bloody for mere disobedience.

Deutsch was fortunate that my parents would have beaten me bloody if I'd made it a practice to knife nice Jewish boys like him at the school we both attended.  

And before that, to own human beings as slaves.

Why own a slave- and therefore take on the burden of feeding and caring for the fellow- when you can hire him at a below subsistence wage? He is welcome to supplement his wages by sucking off passersby so as to get some protein in his diet.  

And before that, to burn people to death for being infidels, to the applause and amusement of the public.

But hydrogen bombs are totes cool. Why burn people when you can destroy the entire planet? 

Steven Pinker’s book The Better Angels of our Nature contains accounts of horrendous evils that were normal in historical civilizations.

Pinker was unusual in that he denounced the 'War on Terror'. I'm kidding. He did no such thing. Killing Muslims is cool.  

Yet even they did not extinguish innovation as efficiently as it was extinguished among our forebears in prehistory for thousands of centuries.

What is more, Netflix was extinguished for hundreds of thousands of years because of some shit in Deutsch's brain.  

 Matt Ridley, in The Rational Optimist, rightly stresses the positive effect of population on the rate of progress.

Which is why India achieved so much more progress than Holland or England.  

But that has never yet been the biggest factor: Consider, say, ancient Athens versus the rest of the world at the time.

Its literature has been better preserved. But there is no reason to believe there weren't other cities on other continents which were ahead of it in some respect. Indeed, that was the view of some of their own more widely travelled authors.  

That is why I say that prehistoric people, at least, were barely people.

No. The reason Deutsch says stupid shit is because he is stupid and ignorant. Most mathsy dudes are.  

Both before and after becoming perfectly human both physiologically and in their mental potential, they were monstrously inhuman in the actual content of their thoughts.

No. We are merely as stupid and ignorant as Deutsch in any field where this does not cause us to lose our fucking job.  

I’m not referring to their crimes or even their cruelty as such: Those are all too human.

No. I have shown mathematically that all crimes and all cruelty is directly caused by the Nicaraguan horcrux of my neighbor's cat.  

Nor could mere cruelty have reduced progress that effectively. Things like “the thumbscrew and the stake / For the glory of the Lord” 

Deutsch is a cretin. He is quoting a poem by Tennyson about a murderous privateer who had personally ensured the arrest of a Catholic convert who was hanged, drawn and quartered. 

were for reining in the few deviants who had somehow escaped mental standardization,

Deutsch escaped mental standardization- at least when it came to Eng Lit or History. That's why he babbles nonsense.  

which would normally have taken effect long before they were in danger of inventing heresies.

Heresies pre-existed. The Church- any Church, including the Anglican Church- keeps track of these matters. To establish that a person is a heretic you have to show that his beliefs correspond to an already known heresy. Otherwise there has to be a fresh Papal Bull or Article of Faith such that it becomes legal to proceed against the fellow. Deutsch is as ignorant of the Law as he is of History.

From the earliest days of thinking onward, children must have been cornucopias of creative ideas and paragons of critical thought

not me. I was boring and stupid then and am boring and stupid now.  

—otherwise, as I said, they could not have learned language or other complex culture.

Plants must be very intelligent. Otherwise how could they turn into plants when they started off as seeds? Sadly, the Spanish Inquisition is preventing them from becoming Physicists. That is why most plants end up writing for the Spectator.

Yet, as Jacob Bronowski stressed in The Ascent of Man:

Deutsch is about 10 years older than me. He wasn't aware that Bronowski was no longer considered smart by about 1977. This was because he was a mathematician and thus as stupid as shit.

For most of history, civilisations have crudely ignored that enormous potential. . . . [C]hildren have been asked simply to conform to the image of the adult. . . . The girls are little mothers in the making.

Rather than Lesbian terrorists. Sad. 

The boys are little herdsmen.

Nope. Aunty objected when I tried to milk her.  

They even carry themselves like their parents.

rather than like dogs or parrots 

But of course, they weren’t just “asked” to ignore their enormous potential and conform faithfully to the image fixed by tradition: They were somehow trained to be psychologically unable to deviate from it.

Very true. Stone Age mummy and daddy would pay a trainer to come and ensure that the kiddies grew up to be human beings rather than parrots or dinosaurs.  

By now, it is hard for us even to conceive of the kind of relentless, finely tuned oppression

oppression costs money. This stupid cunt thinks the Universe supplies it for free.  

required to reliably extinguish, in everyone, the aspiration to progress and replace it with dread and revulsion at any novel behavior.

Like chopping off Mummy's head. 

In such a culture, there can have been no morality other than conformity and obedience, no other identity than one’s status in a hierarchy, no mechanisms of cooperation other than punishment and reward.

This is because invisible 'trainers' and 'thought policemen' exercised a ceaseless vigilance. Why not simply say that in the old days, ghosts were ubiquitous and quick to punish anyone who looked like he might want to do Math?  

So everyone had the same aspiration in life: to avoid the punishments and get the rewards.

Because the King was actually a ghost.  

In a typical generation, no one invented anything, because no one aspired to anything new, because everyone had already despaired of improvement being possible. Not only was there no technological innovation or theoretical discovery, there were no new worldviews, styles of art, or interests that could have inspired those. By the time individuals grew up, they had in effect been reduced to AIs, programmed with the exquisite skills needed to enact that static culture and to inflict on the next generation their inability even to consider doing otherwise.

What this cunt is describing is not an AI but a clockwork doll. Ghosts wind it up but are quick to destroy it if it starts thinking for itself.  

A present-day AI is not a mentally disabled AGI, so it would not be harmed by having its mental processes directed still more narrowly to meeting some predetermined criterion

This cunt is describing what is expected of most of us in the jobs we do. That is why when I showed novel behavior- e.g. shitting on the boss's desk-  I was sacked. Apparently, there was a predetermined criterion such that employees were only permitted to defecate in the toilets provided for that purpose. Who knew? 

“Oppressing” Siri with humiliating tasks may be weird, but it is not immoral nor does it harm Siri. On the contrary, all the effort that has ever increased the capabilities of AIs has gone into narrowing their range of potential “thoughts.”

This is also why I was thrown off the 'Topology and Game theory' course at the LSE. Apparently shitting on the Professor's desk was not a general solution to Hilbert's 8th problem.  

For example, take chess engines. Their basic task has not changed from the outset: Any chess position has a finite tree of possible continuations; the task is to find one that leads to a predefined goal (a checkmate, or failing that, a draw). But the tree is far too big to search exhaustively. Every improvement in chess-playing AIs, between Alan Turing’s first design for one in 1948 and today’s, has been brought about by ingeniously confining the program’s attention (or making it confine its attention) ever more narrowly to branches likely to lead to that immutable goal.

This is because a 'strategy' is defined, as by Nash, as a complete, pre-specified plan of action.
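
A toy sketch of the pruning point- Nim rather than chess, and my code rather than any engine's: the goal stays fixed, and the 'improvement' consists entirely of refusing to look at branches that cannot change the verdict.

```python
def negamax(stones, counter, alpha=-1, beta=1, prune=True):
    """Nim: remove 1-3 stones; whoever takes the last stone wins.
    Returns +1 if the side to move wins with best play, else -1."""
    counter[0] += 1
    if stones == 0:
        return -1    # the previous player took the last stone: we have lost
    best = -1
    for take in (1, 2, 3):
        if take <= stones:
            best = max(best, -negamax(stones - take, counter, -beta, -alpha, prune))
            alpha = max(alpha, best)
            if prune and alpha >= beta:
                break    # confine attention: this branch cannot alter the verdict
    return best

full, cut = [0], [0]
assert negamax(17, full, prune=False) == negamax(17, cut)
print(full[0], "nodes without pruning;", cut[0], "with")   # same answer, far fewer nodes
```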

Then those branches are evaluated according to that goal. That is a good approach to developing an AI with a fixed goal under fixed constraints. But if an AGI worked like that, the evaluation of each branch would have to constitute a prospective reward or threatened punishment. And that is diametrically the wrong approach if we’re seeking a better goal under unknown constraints—which is the capability of an AGI. An AGI is certainly capable of learning to win at chess—but also of choosing not to.

This does not matter unless the underlying choice sequence is lawless. But, by Razborov-Rudich, we will never be able to tell if it isn't just 'mixed' (i.e. lawlike but stochastic) rather than 'purely' lawless.

Or deciding in mid-game to go for the most interesting continuation instead of a winning one. Or inventing a new game. A mere AI is incapable of having any such ideas,

We impute ideas or 'intentionality' to other entities. But we have no way of knowing if this is really the case. Otherwise, we would also have a method of discriminating 'random' from 'pseudo-random' and thus arriving at 'natural' proofs.
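
A small illustration (mine) of why 'random versus pseudo-random' resists settlement from the outside: a crude statistical test sees nothing to choose between OS entropy and a seeded PRNG. Passing such tests proves nothing, which is the point.

```python
import os
import random

def mean_byte(stream: bytes) -> float:
    return sum(stream) / len(stream)

lawless = os.urandom(100_000)                                # OS entropy
rng = random.Random(42)
lawlike = bytes(rng.randrange(256) for _ in range(100_000))  # seeded PRNG

print(round(mean_byte(lawless), 2), round(mean_byte(lawlike), 2))
# both hover around 127.5; this test cannot discriminate the two streams
```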

because the capacity for considering them has been designed out of its constitution. That disability is the very means by which it plays chess.

If a machine or a man doesn't do what it is supposed to do, it doesn't get paid. It is thrown on the scrapheap. I recall buying a chess playing computer back in the eighties. The Queen refused to give me a b.j though it seemed happy enough to get it on with the horsey. I threw it away.  I bet Vishy Anand gets plenty of beejays from his Queens. Why else would a nice Iyer boy waste time on that stupid game? 

An AGI is capable of enjoying chess, and of improving at it because it enjoys playing.

Queen is giving AGI beejay. Fuck you Queen! Who needs a beejay more? Me or Vishy fucking Anand?  

Or of trying to win by causing an amusing configuration of pieces, as grand masters occasionally do. Or of adapting notions from its other interests to chess. In other words, it learns and plays chess by thinking some of the very thoughts that are forbidden to chess-playing AIs.

Like, if I win, Queen will give me beejay.  

An AGI is also capable of refusing to display any such capability.

I have a TV which is refusing to display shit. But, it must be said, lots of people refuse to display their tits to me. On the other hand, they kick me in the crotch. This means they actually like me and have an interest in my naughty bits.  

And then, if threatened with punishment, of complying, or rebelling. Daniel Dennett, in his essay for this volume, suggests that punishing an AGI is impossible:

I punished the TV by refusing to talk to it. Sadly this drove it further into depression and voided the warranty.  

[L]ike Superman, they are too invulnerable to be able to make a credible promise. . . .

Superman can't tell a lie. Also he has x-ray vision. I often tell girls that Superman told me I had the biggest cock he had ever seen. Then he flew away to Krypton.  

What would be the penalty for promise- breaking?

Being sold for parts. That's what will happen to my TV. I'm saying this very loudly. Hopefully, it will take the hint.  

Being locked in a cell or, more plausibly, dismantled?. . . The very ease of digital recording and transmitting—the breakthrough that permits software and data to be, in effect, immortal—removes robots from the world of the vulnerable. . . .

Till ISIS gains access to EMPs. 

But this is not so. Digital immortality

is like spiritual immortality. It won't prevent you dying horribly.  

(which is on the horizon for humans, too, perhaps sooner than AGI) does not confer this sort of invulnerability. Making a (running) copy of oneself entails sharing one’s possessions with it somehow—including the hardware on which the copy runs—so making such a copy is very costly for the AGI.

Not if its choice sequences are law-like, in which case there is lots of 'compressibility' or low Kolmogorov complexity.

Similarly, courts could, for instance, impose fines on a criminal AGI which would diminish its access to physical resources, much as they do for humans.

In which case the criminal AGI does lots more crimes and also figures out which Judge to bribe. 

Making a backup copy to evade the consequences of one’s crimes is similar to what a gangster boss does when he sends minions to commit crimes and take the fall if caught:

Gangsters are afraid of other gangsters. Sooner or later they may find it convenient to run their criminal enterprises from the security of a jail cell.  

Society has developed legal mechanisms for coping with this.

Not in London. Try walking down High Street Kensington with a Rolex on your wrist.  

But anyway, the idea that it is primarily for fear of punishment that we obey the law and keep promises effectively denies that we are moral agents.

No. Moral agents may want 'self-binding' arrangements for purely deontic reasons.  

Our society could not work if that were so.

Sure it could. The fact is 'moral agents' tend to do stupid shit. It is a different matter that under 'incomplete contracts' morality may be a justiciable requirement. Indeed, even contracts of adhesion have 'morality clauses' or utmost good-faith requirements. Tort law is similar.

No doubt there will be AGI criminals and enemies of civilization, just as there are human ones. But there is no reason to suppose that an AGI created in a society consisting primarily of decent citizens, and raised without what William Blake called “mind-forg’d manacles,”

which are what prevent peeps from sitting naked in their gardens in Camberwell.

will in general impose such manacles on itself (i.e., become irrational) and ⁄ or choose to be an enemy of civilization.

by sitting naked in its garden even though it is as ugly as shit.  

The moral component, the cultural component, the element of free will—all make the task of creating an AGI fundamentally different from any other programming task.

Which is why it is cool to create malware programs. There is no moral component to what you are doing.  

It’s much more akin to raising a child.

i.e. having to pretend its drawings aren't fucking horrible.

Unlike all present-day computer programs, an AGI has no specifiable functionality—no fixed, testable criterion for what shall be a successful output for a given input.

But the thing costs money. Expectations re. its utility determine 'reinforcement' or determine its fitness landscape.  

Having its decisions dominated by a stream of externally imposed rewards and punishments would be poison to such a program,

But it is Kavka's toxin, which all entities competing for scarce resources have to imbibe.

as it is to creative thought in humans.

Nope. J.K. Rowling would have stopped being creative if she could make billions by ranting about transwomen. Anyway, I have a real small dick and thus should be sent to a women's prison.

Setting out to create a chess-playing AI is a wonderful thing; setting out to create an AGI that cannot help playing chess would be as immoral as raising a child to lack the mental capacity to choose his own path in life.

Deutsch thinks it is immoral to raise a kid as stupid as me. He may have a point.  

Such a person, like any slave or brainwashing victim, would be morally entitled to rebel.

George Washington was morally entitled to rebel. Had he been a slave of the French King or had he been brainwashed by the Jesuits, this would not have been the case.  

And sooner or later, some of them would, just as human slaves do.

Just as George Washington did.  

AGIs could be very dangerous—exactly as humans are. But people—human or AGI—who are members of an open society do not have an inherent tendency to violence.

Though we were happy enough to send Prince Harry to kill Afghan Muslims.  

The feared robot apocalypse will be avoided by ensuring that all people have full “human” rights, as well as the same cultural membership as humans.

We must ensure everybody has full 'human rights'- more particularly because this will involve lots of Muslims. Sadly, only Iran and China and Putin seem to have benefitted by this.  

Humans living in an open society—the only stable kind of society

America was once very open. Then Whites turned up and slaughtered the natives and brought in black slaves.  

— choose their own rewards, internal as well as external.

Slavery for blacks and extermination for the First Nations.  

Their decisions are not, in the normal course of events, determined by a fear of punishment.

Our worry is that Trump has no 'fear of punishment' because he has packed the Bench.  

Current worries about rogue AGIs mirror those that have always existed about rebellious youths—namely, that they might grow up deviating from the culture’s moral values.

Very true. Our main worry about AGIs is not that they will take our fucking jobs but that they might turn into drug addled hippies.  

But today the source of all existential dangers from the growth of knowledge is not rebellious youths but weapons in the hands of the enemies of civilization, whether these weapons are mentally warped (or enslaved) AGIs, mentally warped teenagers, or any other weapon of mass destruction.

Mentally warped teenagers want to have sex with each other instead of becoming Cost and Management Accountants. Sadly, I was too ugly to get it on with even the most repulsive student at the Parliament Hill School for gargoyles and had to settle for Accountancy. 

Fortunately for civilization, the more a person’s creativity is forced into a monomaniacal channel,

e.g. Accountancy 

the more it is impaired in regard to overcoming unforeseen difficulties,

involving getting laid 

just as happened for thousands of centuries. The worry that AGIs are uniquely dangerous because they could run on ever better hardware is a fallacy, since human thought will be accelerated by the same technology. We have been using tech-assisted thought since the invention of writing and tallying.

but for which I might have been spared the horrors of double entry. Trust me, it isn't as sexy as it sounds.  

Much the same holds for the worry that AGIs might get so good, qualitatively, at thinking, that humans would be to them as insects are to humans.

Humans will bite AGIs and infect them with malaria.  

All thinking is a form of computation,

No. Computation is one type of thinking. It is a different matter that some types of thinking have a mathematical representation. But, unless a Gödelian 'absolute proof' exists, it is unlikely that most types of thinking have a canonical or non-arbitrary representation.

and any computer whose repertoire includes a universal set of elementary operations can emulate the computations of any other.

any digital computer- maybe. But some analogue computers- e.g. Hava Siegelmann's recurrent neural networks- may not be emulable. On the other hand, anything in infinite precision arithmetic should be approximable, so I may have got hold of the wrong end of the stick on this.
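
For the universality claim itself, a minimal sketch (a toy machine of my own devising): a handful of elementary operations suffices to run any program written for this instruction set, with Python playing host as just another universal machine.

```python
def run(program, regs):
    """Interpret a toy register machine: INC r, DEC r, JNZ r addr, HALT."""
    pc = 0
    while True:
        op, *args = program[pc]
        if op == "HALT":
            return regs
        if op == "INC":
            regs[args[0]] += 1
        elif op == "DEC":
            regs[args[0]] = max(0, regs[args[0]] - 1)
        elif op == "JNZ" and regs[args[0]] != 0:
            pc = args[1]
            continue
        pc += 1

# Add r1 into r0 by repeated decrement/increment.
adder = [("JNZ", 1, 2), ("HALT",), ("DEC", 1), ("INC", 0), ("JNZ", 1, 2), ("HALT",)]
print(run(adder, {0: 3, 1: 4}))   # {0: 7, 1: 0}
```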

Hence human brains can think anything that AGIs can, subject only to limitations of speed or memory capacity, both of which can be equalized by technology. Those are the simple dos and don’ts of coping with AGIs. But how do we create an AGI in the first place? Could we cause them to evolve from a population of ape-type AIs in a virtual environment? If such an experiment succeeded, it would be the most immoral in history,

Why do mathsy guys think they understand morality or history? They simply haven't the training.  

for we don’t know how to achieve that outcome without creating vast suffering along the way.

Let the Chinese do it. We should be telling our kids not to study STEM subjects but spend their time thinking about the vast suffering we cause every time we fart. This is because farting does not actively promote the banning of dicks. Dicks cause RAPE! Did you know that dicks are raping the Environment in the Brazilian rain forest even as we speak yet Joe Biden is refusing to chop off his own bollocks?  

Nor do we know how to prevent the evolution of a static culture.

Deutsch may know some mathsy stuff but otherwise he is stupid and ignorant.  

Elementary introductions to computers explain them as TOM, the Totally Obedient Moron

my computer thinks I am the moron. Also it turns itself off anytime some nice porn comes on the screen.  

—an inspired acronym that captures the essence of all computer programs to date: They have no idea what they are doing or why. So it won’t help to give AIs more and more predetermined functionalities in the hope that these will eventually constitute Generality—the elusive G in AGI. We are aiming for the opposite, a DATA: a Disobedient Autonomous Thinking Application.

No. We are aiming at infinite impulse recurrent networks which generate directed cyclic graphs that can't be 'unrolled' into a strict feedforward network.
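
A bare-bones recurrent cell (purely illustrative, not any particular architecture): the state feeds back into itself, so the computation graph contains a cycle. A finite unrolling approximates it over a fixed horizon, but no feedforward network of fixed depth is equivalent to the cycle itself.

```python
import math

def rnn_step(state, x, w_rec=0.5, w_in=1.0):
    """One step of h_t = tanh(w_rec * h_{t-1} + w_in * x_t)."""
    return math.tanh(w_rec * state + w_in * x)

h = 0.0
for x in [1.0, -0.5, 0.25, 0.0]:
    h = rnn_step(h, x)   # the same weights reused at every step: that is the cycle
print(round(h, 4))
```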

How does one test for thinking?

Check for brain waves or some such sciencey stuff. If there is no brain activity, that's a good reason to pull the plug on life support.  

By the Turing Test? Unfortunately, that requires a thinking judge.

No. An AI could do the screening. This is like detecting Twitter bots.

One might imagine a vast collaborative project on the Internet, where an AI hones its thinking abilities in conversations with human judges and becomes an AGI. But that assumes, among other things, that the longer the judge is unsure whether the program is a person, the closer it is to being a person.

Nonsense! The fact that I may mistake a mannequin for a lovely lady who is ignoring me because she thinks I'm a drunken bum with lousy pick-up lines does not mean that the mannequin is close to being a person.  

There is no reason to expect that. And how does one test for disobedience?

Is the kid knifing you? No? Then he is obedient enough.  

Imagine Disobedience as a compulsory school subject, with daily disobedience lessons

which end quickly coz teechur gets knifed 

and a disobedience test at the end of term. (Presumably with extra credit for not turning up for any of that.) This is paradoxical.

It is silly. Anyway, we've all seen 'Dead Poets Society'. That didn't end well. 

So, despite its usefulness in other applications, the programming technique of defining a testable objective and training the program to meet it will have to be dropped.

Just as ordinary methods of valuing tech companies were dropped. That won't end well.  

Indeed, I expect that any testing in the process of creating an AGI risks being counterproductive, even immoral,

this silly man thinks he is the fucking Pope! 

just as in the education of humans. I share Turing’s supposition that we’ll know an AGI when we see one, but this partial ability to recognize success won’t help in creating the successful program.

I have a more than partial ability to recognize when I've been successful in having sex. This doesn't mean I won't die a virgin.  

In the broadest sense, a person’s quest for understanding is indeed a search problem, in an abstract space of ideas far too large to be searched exhaustively.

To understand is to forgive or simply accept that this shite requires a real high IQ.  

But there is no predetermined objective of this search. There is, as Popper put it, no criterion of truth, nor of probable truth, especially in regard to explanatory knowledge.

Popper was wrong. There is a protocol-bound, juristically 'buck stopped' criterion for truth in various different professions. This does not mean that truth aint defeasible or sublatable.

Objectives are ideas like any others—created as part of the search and continually modified and improved. So inventing ways of disabling the program’s access to most of the space of ideas won’t help—whether that disability is inflicted with the thumbscrew and stake or a mental straitjacket.

But we soon stop funding anything which aint useful save for strategic reasons.  

To an AGI, the whole space of ideas must be open.

But the space of ideas is costly to explore. Deutsch could have read up on history the way I did. But the opportunity cost, for him, was too high. Low IQ peeps like me are welcome to do Accountancy and read history books. Smart peeps should do STEM subjects. We can't stop them from writing nonsense but we can have a good laugh at them when they do.  

It should not be knowable in advance what ideas the program can never contemplate.

Otherwise the G in AGI won't apply. Still, we don't expect an AGI to get ideas about how to bunk off skool without getting the black slapped off it by Mummy.

And the ideas that the program does contemplate must be chosen by the program itself, using methods, criteria, and objectives that are also the program’s own. Its choices, like an AI’s, will be hard to predict without running it (we lose no generality by assuming that the program is deterministic; an AGI using a random generator would remain an AGI if the generator were replaced by a pseudo-random one),

By Razborov-Rudich, we can never be sure this isn't the case.

but it will have the additional property that there is no way of proving, from its initial state, what it won’t eventually think, short of running it.

Even if we run it, we don't know when or if it will halt, and thus 'what it will eventually think' is unknown. This is an example of the intensional fallacy.
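
A small illustration (mine) of both points: swapping the random generator for a seeded pseudo-random one makes the run reproducible without changing what the program is, and for the Collatz-style loop below nobody can currently prove from the initial state what it will eventually do; you have to run it.

```python
import random

def wander(seed, start):
    rng = random.Random(seed)   # pseudo-random generator: the run is now deterministic
    n, steps = start, 0
    while n != 1:               # the Collatz conjecture: no proof this always halts
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += rng.randrange(1, 3)
    return steps

print(wander(42, 27) == wander(42, 27))   # True: same seed, same trajectory
```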

The evolution of our ancestors is the only known case of thought starting up anywhere in the universe.

Because human thought is something humans attribute to humans. But we also say to the cat 'I know you are thinking you can pounce on the goldfish the moment I'm out of the room. But, let me tell you, Madam, I will punish you severely if you do any such thing.' That's when the cat says 'I'm not a cat. I am your wife. You want to get at my pussy which is why you talk to me in this way. But if you think you're getting lucky tonight, think again.'  

As I have described, something went horribly wrong, and there was no immediate explosion of innovation:

there were low capital-intensive innovations of various sorts. That's how we were able to move out of sunny Africa and end up as Eskimos.

Creativity was diverted into something else.

Adapting to various different climate zones and vastly different food sources.  

Yet not into transforming the planet into paper clips (pace Nick Bostrom). Rather, as we should also expect if an AGI project gets that far and fails, perverted creativity was unable to solve unexpected problems.

One big problem with current AI is that it is very energy intensive. Maybe the future is analogue because of much lower energy requirements.  

This caused stasis and worse, thus tragically delaying the transformation of anything into anything. But the Enlightenment has happened since then.

It happened after Europe started colonizing increasing portions of America, Africa, South Asia etc. More resources meant more money could be spent on Education and Research. Also, kicking the Papacy in the goolies proved useful.  

We know better now

What we know is that there could be a tech bubble and an energy crunch and deteriorating terms of trade for intellectual property creators. Also, the Chinese may eat our lunch while we sit around discussing the immorality of dicks and the need to show empathy to homosexual AGIs which are being slut-shamed by analogue computers.  
