Friday, 21 April 2023

Michael Roberts, AI and Catastrophism - Part 1 of 6

In an article in the Weekly Worker, Michael Roberts looks at the role of AI and whether it offers solutions for capitalism. The discussion of AI is interesting, though mostly wrong, but it is, of course, also a vehicle for Roberts to discuss his pet subject: the law of the tendency for the rate of profit to fall as the cause of crises of overproduction of capital, and consequently his catastrophism, which is at odds with the Marxist perspective of revolutionary optimism, as discussed by Lenin in his critique of Sismondi and the Narodniks. I've set out many times before why Roberts' arguments in relation to The Law of the Tendency for the Rate of Profit to Fall, and crises of overproduction, are wrong, so I'll deal with that later, and start, briefly, with his views on AI.

Even 40 years ago, magazines provided do-it-yourself programmes for AI that you could put on your computer for entertainment. I remember one for an Amstrad PCW, using the Dr. Logo language that came with it. The programme allowed you to set up an increasing number of variables to store the data used to ask questions, and the responses. It would ask you to think of an animal, and then try to ask questions that allowed it to determine which animal you were thinking of. As it went, it acquired a larger library of animals, and of questions to ask to determine each type of animal.
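The learning mechanism of that old magazine programme can be sketched in a few lines of modern code. This is not the original Dr. Logo listing, just a minimal Python reconstruction of the idea: a binary decision tree that grows a new question node every time it guesses wrong, which is how its "library" of animals expands.

```python
# A sketch of the classic "guess the animal" game: each wrong guess
# teaches the program a new question and a new animal.

class Node:
    def __init__(self, text, yes=None, no=None):
        self.text = text      # a question, or an animal name at a leaf
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None

def play(node, answers):
    """Walk the tree using a list of 'y'/'n' answers; return the leaf reached."""
    for a in answers:
        if node.is_leaf():
            break
        node = node.yes if a == 'y' else node.no
    return node

def learn(leaf, new_animal, question, answer_for_new):
    """On a wrong guess, split the leaf: the old animal and the new one
    are now distinguished by a fresh question -- the program 'learns'."""
    old = Node(leaf.text)
    new = Node(new_animal)
    leaf.text = question
    if answer_for_new == 'y':
        leaf.yes, leaf.no = new, old
    else:
        leaf.yes, leaf.no = old, new

# Start knowing only one animal; a wrong guess teaches it a second.
root = Node("cat")
wrong_guess = play(root, [])              # it can only guess "cat"
learn(wrong_guess, "dog", "Does it bark?", 'y')
print(play(root, ['y']).text)             # dog
print(play(root, ['n']).text)             # cat
```

Each session leaves the tree larger than it found it, which is the "learns as it goes" behaviour discussed below.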

Roberts seems not to have fully grasped this fundamental aspect of what AI is: something that learns as it goes, and does so at an increasing, if not exponential, rate. So, for example, he cites chats he had with ChatGPT, having asked it questions about who he is, what his book is about, and what Marx's Law of the Tendency for the Rate of Profit to Fall says. He admits that it gets most of this right, but fails to recognise that getting things wrong is fundamental not only to the way humans learn but also to the way AI learns. If ChatGPT already mostly got right who Michael Roberts is – a pseudonym, in any case, of a fairly unknown individual – including the fact that, in his real-life persona, he has also been employed in the banking and finance industry, then that is quite impressive, and could only become more impressive as it is quickly corrected in those bits it got wrong.

Roberts also asked the AI about Marx's Law of the Tendency for the Rate of Profit to Fall, and complains that, again, although it broadly gets the description right, it does not do so fully. But, of course, whilst I would agree its description is not fully accurate, Roberts' own conception of that law, and so of what is lacking, is different to mine – and also different to what Marx actually says about it. Again, widespread interaction will quickly enhance that, although the danger with all AI of this type is that its description will then be determined by the inputs it receives from those with whom it interacts, and will not necessarily reflect what the Law actually says, unless it is able to process the law as set out by Marx himself. This is where AI based purely on machine learning, and interaction with the real world, is superior to AI that learns mostly via verbal interaction with humans. The former, however, is where most concern about an existential threat to humans from AI has been expressed.

For example, AI systems that are set a given task, such as making a stick figure walk from one spot to another, learn by attempting to achieve that task over and over again, the programme each time adapting itself, creating new programmes and algorithms, in much the same way that primates learned to walk upright. In fact, the best systems for achieving this use several initial models and then apply “evolution”, so that the ones that perform worst die off, and the more successful ones “breed”, to evolve more quickly.
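The "evolution" loop described above can be sketched briefly. This is a toy illustration, not any particular walking-simulator's code: the target vector stands in for "walk to the spot", but the structure – score a population, let the worst die off, let the best breed with mutation – is the same.

```python
# A toy evolutionary loop: the worst candidates die off each
# generation, the best "breed" (copy with mutation) to replace them.

import random

TARGET = [0.3, -0.7, 1.2, 0.05]   # stand-in for the task to be achieved

def fitness(genome):
    # Higher is better: negative squared error from the target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # "Breeding": a copy of a survivor with small random variations.
    return [g + random.gauss(0, rate) for g in genome]

random.seed(0)
population = [[random.uniform(-2, 2) for _ in TARGET] for _ in range(20)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]            # the worst 15 "die off"
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]

best = max(population, key=fitness)
```

After a few hundred generations the best genome sits close to the target, without anyone having programmed the solution directly, which is the point being made about such systems creating their own algorithms.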

Roberts also says that AI is incapable of dialectical thinking. From what I've said above, it's quite clear that that is not true. Roberts asked ChatGPT, “Can A be equal to A and at the same time be different from A?”. It came back and set out why, according to formal logic, it cannot. Of course, it's not alone in such an answer, because most bourgeois philosophers, mathematicians etc. would give the same answer. Indeed, Michael Roberts himself, in practice, gives the same answer when considering the use of historic pricing rather than current reproduction costs, because he rejects the concept of “simultaneity”, insisting upon a temporal view in which inputs are not simultaneously outputs, and vice versa.

Others who support the concept of historic pricing, have more explicitly rejected the idea that A can simultaneously be equal to not A, as I have set out, in the past, in discussing this question with Nick Rogers. And, of course, those that propose the TSSI, and reject the concept of simultaneity implicit in the dialectic, as against the syllogism, can trace that divergence back to the one-time “Marxist” James Burnham, who put forward those ideas in his “Science and Style”, refuted by Trotsky. Indeed, its no coincidence that many of those that promote this view are adherents of the petty-bourgeois, Third Camp tradition of Burnham and Shachtman.

But, in fact, this concept that A is equal to A, and simultaneously not equal to A, is fundamental to computer programming itself, contrary to Roberts' assertion. Take a simple task such as counting, which computers need to do in order to know how many times, for example, they have performed a specific function (subroutine). The means to achieve that is to set a variable A equal to 0. Then, a subroutine is established which takes this initial value of A and says A = A + 1. So, now, A is both equal to 0 and to 1 (0 + 1) simultaneously, i.e. to itself and to some supplement to itself. If we want to run this subroutine 10 times, we insert a line of code at the start of it that says “Do this whilst A is < 10”. Each time the subroutine runs, A is incremented by 1, so that A = A + 1 takes it from 0 to 1, from 1 to 2, and so on. After the subroutine has run ten times, the value of A will have reached 10, so that, when it comes to run again, it will see A = 10, and stop.
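The counting pattern just described looks like this as a minimal Python sketch. In formal logic the statement A = A + 1 is a contradiction; in a programme it is simply an instruction describing A in the process of becoming:

```python
# The counting loop described above: A starts at 0, is repeatedly
# set equal to itself plus a supplement to itself, and the loop
# stops once A reaches 10.

a = 0                 # initial value of A
while a < 10:         # "Do this whilst A is < 10"
    # ... the subroutine's actual work would happen here ...
    a = a + 1         # A = A + 1

print(a)              # 10 -- the subroutine ran exactly ten times
```

The assignment `a = a + 1` is read right-to-left: the old A on the right produces the new A on the left, the old and new values coexisting within the single statement.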

This basic aspect of computer programming clearly uses dialectics rather than formal logic, and is inherent in any process of change, be it accretive or otherwise. What is more, at the heart of every piece of computer hardware, and all electronic equipment, is quantum mechanics, whose whole basis is dialectical, resting upon things in the real world being in two different states simultaneously, as most famously described in the form of Schrödinger's Cat. But we are now also moving into the era of quantum computing itself.

In terms of the application of AI, Roberts seems to want to have it both ways. His petty-bourgeois pessimism leads him to deny that it can have any significant role in raising productivity, whilst his catastrophism leads him to think that its function will only be to enhance The Law of the Tendency for the Rate of Profit to Fall, which requires that it does significantly raise productivity. In fact, there are many spheres in which AI is already raising productivity, though not yet for the economy as a whole. For the reasons I have set out elsewhere, that is to be expected at this phase of the long wave cycle. Capital only engages in large-scale technological revolutions, and the roll-out of new technologies on a wide scale, when labour shortages have become such as to cause wages to rise and profits to be squeezed, resulting in a crisis of overproduction of capital. We are not at that stage.

As in the 1950s and 60s, the relative surplus population is being reduced, and wages are rising, but not yet enough to significantly squeeze profits, which remain at high levels. It is easier for capital simply to roll out more of the existing technology, and employ additional workers. But, for example, an AI has just been introduced that can detect and 3-D map a brain tumour in about half an hour, where a human would have taken many hours. This not only means that tumours can be modelled more quickly, and so lives saved, but also that labour and capital are freed to do more of the operations themselves. This is another aspect of Marx's law that Roberts fails to grasp.
