Friday 2 June 2023

Kaushik Basu's Artificial Imbecility

The always imbecilic Kaushik Basu has an essay titled 'Governing the Unknown' in Project Syndicate.  

Major advances in AI are raising a raft of concerns about education,

nobody cares if worthless dissertations are written by AIs

work,

shitty jobs should disappear 

warfare,

which AI reduces the risk of 

and other risks that could destabilize human civilization

Civilization depends on intelligence and technology. What destabilizes it is running out of money to spend on defense and law and order

long before climate change does. While policy responses are urgently needed,

Sensible policies are always needed. But 'urgent' responses tend not to be sensible.  

they also must be guided by the right principles.

No. Principles don't matter. Being sensible does.  

ITHACA, NEW YORK – Technology is changing the world faster than policymakers can devise new ways to cope with it.

No. Technology changes where public policy permits it to change. The world changes for the worse for those who fall behind technologically.  

As a result, societies are becoming polarized,

Basu comes from a part of the world which was already so polarized that it was partitioned on sectarian lines. One reason for this was that it was extremely backward technologically speaking.  

inequality is rising,

Immigration does have that effect.  

and authoritarian regimes and corporations are doctoring reality and undermining democracy.

Guys tripping on acid may be 'doctoring reality'. But authoritarian regimes kill people and thus change reality. Where they exist there is no fucking democracy to undermine. Corporations, on the other hand, have to accept the reality that they will go bust if they don't make a profit. This does mean that those which concentrate on 'doctoring reality' get weeded out.  


For ordinary people, there is ample reason to be “a little bit scared,” as OpenAI CEO Sam Altman recently put it.

Why not also be a little bit pregnant or a little bit dead?  

Major advances in artificial intelligence raise concerns about education, work, warfare, and other risks that could destabilize civilization long before climate change does. To his credit, Altman is urging lawmakers to regulate his industry.

Because regulation creates rents. Also there will be 'Agency Capture' of the Regulator. The fact is OpenAI is anything but 'open'. They don't just want to protect their 'secret sauce', they don't want any fucking competition down the line. That's why they are pretending that AI will send Terminators from the future to kill John Connor.  

In confronting this challenge, we must keep two concerns in mind.

Basu & his ilk can't confront shit.  

The first is the need for speed.

not to mention greed for weed 

If we take too long, we may find ourselves closing the barn door after the horse has bolted.

To Shanghai. I suppose the Americans will back their giant Corporations on 'National Security' grounds. Perhaps Europe will decide to develop an indigenous alternative.  

That is what happened with the 1968 Nuclear Non-Proliferation Treaty:

by which time UK, France, Israel (probably in cooperation with France) and China had gone nuclear. In the Eighties, China started helping North Korea and Pakistan to get the bomb. India had an A-bomb test in 1974 after Nixon threatened to 'nuke Calcutta'. South Africa gave up its deterrent as did- very foolishly- Ukraine.  

It came 23 years too late.

It was meaningless shite. The Brits and Americans had agreed to pool resources under the Quebec agreement of 1943. Initially the Brits wanted to bring France in as well but that changed. Then it was discovered that some Brits were Commie spies and so each country went its own way. Eisenhower had some crazy 'Atoms for Peace' program which helped Islamabad and Delhi get closer to making a bomb. But the real name of the game was the submarine-based, multiple re-entry vehicle ICBM. That's why the 1968 treaty was meaningless. India, Pakistan and Israel never joined. North Korea did and left after its first test. But there are other countries- e.g. Japan- which could very quickly gain a thousand nuclear missiles. Taiwan and South Korea are likely to have some such contingency plan.

If we had managed to establish some minimal rules after World War II,

they would have been ignored. On the other hand if my Baghdad Declaration of 1968- 'be nice. Don't be naughty'- had been universally ratified, naughtiness would definitely have ceased to exist.  

the NPT’s ultimate goal of nuclear disarmament might have been achievable.

More particularly if Stalin had simply been allowed to take over the world.  

The other concern involves deep uncertainty.

Knightian uncertainty militates for 'regret minimization'. Sadly, this means being more permissive because we'd regret falling behind our enemy more than we'd regret destroying the world before that enemy gets to do it. 

This is such a new world that even those working on AI do not know where their inventions will ultimately take us.

But they know OpenAI has an interest in killing off the competition through regulation.  

A law enacted with the best intentions can still backfire.

Only if the Courts interpret it in a manner which is against the public interest. 

When America’s founders drafted the Second Amendment conferring the “right to keep and bear arms,” they could not have known how firearms technology would change in the future, thereby changing the very meaning of the word “arms.”

This is foolish. SCOTUS could have interpreted this to mean you were welcome to keep a muzzle-loading musket above the fireplace. 

Nor did they foresee how their descendants would fail to realize this even after seeing the change.

They didn't foresee that some of their descendants would decide that slavery wasn't a clever idea.  

But uncertainty does not justify fatalism.

It militates for not putting all your eggs in one basket. 

Policymakers can still effectively govern the unknown

Nobody can do so. Policies are meant to influence the future- not to govern it. 

as long as they keep certain broad considerations in mind. For example, one idea that came up during a recent Senate hearing was to create a licensing system whereby only select corporations would be permitted to work on AI.

How very convenient! No doubt, the good folk at DARPA have cozy relationships with those 'select corporations'. 


This approach comes with some obvious risks of its own. Licensing can often be a step toward cronyism, so we would also need new laws to deter politicians from abusing the system. Moreover, slowing your country’s AI development with additional checks does not mean that others will adopt similar measures. In the worst case, you may find yourself facing adversaries wielding precisely the kind of malevolent tools that you eschewed. That is why AI is best regulated multilaterally, even if that is a tall order in today’s world.

It is impossible. What should worry us is that Russia has suspended 'New START' and is threatening to proliferate nuclear weapons into Latin America. 

Another big concern is labor.

The concern, even for China, is that kids don't want to do it. Either you bring in immigrants- in which case 'demographic replacement' triggers a backlash- or else resort to robots and AIs to provide their software.  

Just as past technological advances reduced demand for manual labor, new applications like ChatGPT may reduce demand for a lot of white-collar labor.

That's already happened.  

But this prospect need not be so worrying. If we can distribute the wealth and income generated by AI equitably across the population,

We can't. Get over it.  

eliminating plenty of work would not be a problem. Far from being diminished by not working, feudal lords were aggrandized by their leisure.

Nope. Feudal lords had to spend a lot of time fighting. Otherwise they stopped being feudal lords. Nineteenth Century Industrial Capitalism did create a leisured class- in 1900, ten percent of the British population had a private income while another tenth were their servants- but two World Wars changed that. Even gentlemen have to fight just to preserve their lives, even though this means they stop being gentlemen and have to get jobs, cook their own meals, and wash their own dishes. 

The problem, of course, is that most people do not know how to use free time. Pensioners often become anxious because they do not know what to do with themselves. Now, imagine that happening on a massive scale across younger cohorts. If left unchecked, crime, conflict, and perhaps extremism would become more likely.

So, there will be plenty of jobs in the surveillance and incarceration industries.  

Averting such outcomes would require modifying our education systems to prepare people for the leisure force.

That's what we were told in School back in the mid Seventies.  

As in earlier eras, education would mean learning how to enjoy the arts, hobbies, reading, and thinking.

None of which need be 'learned'. On the other hand, everybody should get a State funded PhD in masturbation.  

A final major concern involves media and the truth. In How to Stand Up to a Dictator, the Nobel laureate journalist Maria Ressa

a silly woman who thinks Mark Zuckerberg was responsible for Duterte coming to power.  No doubt, he also helped Bongbong's daddy become dictator nineteen years before he was born. AIs can do time travel- like in the Terminator franchise. 

laments that social media has become a powerful tool for promoting fake news.

But not as powerful as talking.  

As Amal Clooney points out in her foreword to the book, autocratic leaders can now rely on “an army of bots” to create the impression that “there is only one side to every story.”

Amal comes from Syria. Assad's army uses bullets, not bots.  

This is a bigger challenge than most people realize.

It is non-existent. It's like the notion that pop-up ads would destroy civilization. What AI can do, AI can undo.  

It will not go away even if we pass laws prohibiting automated disinformation.

Yes it will if we enforce those laws by confiscating the wealth of Corporations involved in distribution while executing those who fabricate the stuff.  

As Amartya Sen pointed out more than 40 years ago, all description entails choice.

No useful description involves choice. The information which is salient is 'Schelling focal' and not a matter of our personal choice at all. Thus when describing a person to an investigator or judge, you have no choice but to refer to things like height, weight, eye-color, shape of face, etc.

Reality is so complex that we cannot possibly represent it without making decisions about what to include and what to omit.

We are not required to 'represent' Reality. We may be required to do something useful. But that will tend to be protocol bound. It is a different matter that a Picasso may say that he is representing some shite or the other. That does not matter. What matters is what the market will pay for his shite.  

In a world that is drowning in information, savvy influencers do not need to make up news; they can simply be biased in what they choose to report.

In which case they are making up a Structural Causal Model according to which the stuff they focus on has a big effect on outcomes.

News outlets can influence voters’ opinions in ways both subtle and flagrant.

They can also lose a lot of money by virtue signaling in a manner which drives the median voter towards their enemy. 

Just compare the images of Donald Trump and of Joe Biden that Fox News chooses.

Basu isn't biased against Trump. Perish the thought.  

We cannot solve the problem of authoritarian influence by banning fake news.

We can ban fake news. Indeed, we can even criminalize defamation such that those who allege 'authoritarian influence', or 'Fascism', where no such thing exists are sent to jail. 

Our best hope again lies in education.

Basu has plenty of education. But he is as stupid as shit. 

We will need to do a better job teaching people to be discerning and less susceptible to manipulation.

No manipulation occurs if people who want to pay less in tax vote for a guy who gets them lower taxes. That's what Trump did. Similarly, Duterte or Bukele promised to kill bad guys and then actually killed bad guys because that's what voters wanted. 

Innovation in law and policy must go hand in hand with innovation in education,

Education needs to be purged of useless shite.  

and all are necessary to keep up with innovation in technology.

No. Innovation in technology depends on money. So does Education. Be sensible, and let people make money by doing sensible things. Technology and Education and AI Terminators from the Future will then be able to take care of themselves.  
