Charles Barbour, Western Sydney University
Back in 2005 – before the rise of social media or smartphones, let alone blockchain, metadata and OpenAI – computer scientist and entrepreneur Ray Kurzweil published a breathlessly prophetic account of what he called “the singularity”.
Kurzweil meant a moment in the not-too-distant future when super-intelligent technology would suddenly exceed all imaginable human capacities, absorb humanity into its operations, and spread its mastery across nothing less than the universe itself. The Singularity Is Near, his title ominously declared. And he was confident enough in his calculations to offer a precise date: 2045.
This year, almost exactly halfway between 2005 and 2045, Kurzweil released an update on his prophecy. It was essentially the same prognosis, but with a somewhat less ominous-sounding title: The Singularity Is Nearer.
To understand Kurzweil, and the techno-prophets who have followed his lead, it is worth thinking a little about the nature of prophecy itself. For even in its ancient and religious forms, the purpose of prophecy has never really been to predict the future. It has always been to influence the present – to convince people to live their lives differently today, in preparation for a tomorrow that can only ever be hypothetical.
In this context, it would be interesting to ask why so much of the discourse around emerging technologies has become so apocalyptic in tone. What exactly is such discourse likely to accomplish? Does predicting the impending eclipse of humanity give anyone a reason to act now or change any aspect of their lives? Or is the projected inevitability more likely to convince people that nothing they do could possibly have any consequence?
No doubt, there is something darkly appealing about declarations of the end of times. Their ubiquity throughout human history suggests as much. But there are more productive, more balanced – if less sensational – ways of thinking and speaking.
Without going all the way over to “the singularity”, can we construct a genuine account of what is singular about our contemporary experience and the way it is being shaped by the machines we build?
Review: Techno: Humans and Technology – Marcus Smith (University of Queensland Press)
Marcus Smith’s new book Techno: Humans and Technology is among the more level-headed approaches to the topic.
Of course, like everyone else working with this genre, Smith is quick to propose that the present moment is exceptional and unique. The very first sentence of his book reads: “We are living in the midst of a technological revolution.” References to the concept of “revolution” are scattered liberally throughout.
But the central argument of Techno is that we must regulate technology. More importantly, Smith argues that we can. An associate professor of law at Charles Sturt University, he suggests the law has more than enough resources at its disposal to place machines firmly under human control.
In fact, on Smith’s account, Australia is uniquely situated to lead the world in technological regulation, precisely because it is not home to the large tech corporations that dominate American and European society. That explains why Australia is, in Smith’s words, “punching above its weight” in the field.
The threat to democracy
Smith breaks his book down into three tightly structured sections that examine technology’s relation to government, the individual, and society.
In part one, he engages with large-scale political questions, such as human-induced climate change, the application of AI to every aspect of public life, and the systems of social credit made possible by digital surveillance and big data.
Perhaps Smith’s most interesting argument here concerns the similarity between the notorious social credit system employed by the Chinese government and systems of social credit developed by commercial forces.
It is easy to criticise a government that uses a battery of technological methods to observe, evaluate and regulate the behaviour of its citizens. But don’t banks collect data and pass judgement on potential customers all the time – often with deeply discriminatory results? And don’t platforms like eBay, Uber and Airbnb employ reputational credit scores as part of their business model?
For Smith, the question is not whether social credit systems should exist. It is almost inevitable that they will. He calls on us to think long and hard about how we will regulate such systems and ensure they are not allowed to override what he deems the “core values” of liberal democracy. Among these, Smith includes “freedom of speech, movement and assembly”, and “the rule of law, the separation of powers, the freedom of the press and the free market”.
Part two of Techno turns its attention to the individual and the threat emerging technologies represent to privacy rights. The main concern here is the enormous amount of data collected on each and every one of us every time we engage with the internet – which means, for most of us, more or less all the time.
As Smith points out, while this is clearly a global phenomenon, Australia has the dubious honour of leading the world’s liberal democracies in legislating governmental access to that data. Private technology companies in Australia are legally required to insert a back door to the encrypted activities of their clients. Law enforcement agencies have the power to take over accounts and disrupt those activities.
“The fact is that liberal-democratic governments act the same way as the authoritarian regimes they criticise,” Smith writes:
They may argue they only do it in specified and justified cases under warrant, but once a technology becomes available, it is likely that some government agency will push the envelope, believing its actions are justified by the benefits of their work for the community.
The emergence of big data thus inevitably “shifts liberal democracies towards a more authoritarian posture.” But, for Smith, the solution remains ready to hand:
If rights such as privacy and autonomy are to be maintained, then new regulations are essential to manage these new privacy, security and political concerns.
Practical difficulties
The final part of Techno focuses on the relationship between technology and society, by which Smith largely means economics, and markets in particular.
He provides a helpful overview of the blockchain technology used by cryptocurrencies, which has promised to mitigate inequality and create growth by decentralising exchange. Here again Smith avoids taking either a triumphalist or a catastrophising approach. He asks sensible questions about how governments might mediate such activity and keep it within the bounds of the rule of law.
He points to the examples of China and the European Union as two possible models. The first emphasises the role of the state; the second is attempting to create the legislative conditions for digital markets. And while both have serious limitations, some combination of the two is probably the most likely to succeed.
But it is really at the very end of the book that Smith’s central concern – regulation – comes to the fore. He has no difficulty stating what he takes to be the significance of his work. “Technology regulation,” he writes, “is probably the most important public policy issue facing humanity today.”
Declaring that we need to regulate technology, however, is far simpler than explaining how we might do so.
Techno provides a very broad sketch of the latter. Smith suggests that it would require “involving the key actors” (including technicians, corporations and ethicists), “regulating with technology” (that is, using technological means to impose laws on technological systems), and establishing “a dedicated international agency” for coordinating regulatory processes.
But Smith does not really reflect on the complexity of implementing any of these recommendations in practice. Moreover, it is possible that, despite his considerable ambition, his approach stops short of capturing the true scale of the problem. As another Australian academic, Kate Crawford, has recently argued, we cannot understand intelligent technologies simply as objects or tools – a computer, a platform, a program. This is because they do not exist independently of fraught networks of relationships between humans and the world.
These networks extend to the lithium mines that extract the minerals that allow the technology to operate, the Amazon warehouses that ship components around the globe, and the digital piecework factories in which humans are paid a subsistence wage to produce the illusion of mechanical intelligence. All of this is wreaking havoc on the environment, reinforcing inequalities, and facilitating the demolition of democratic governance.
If the project of regulation were to touch phenomena of this sort, it would have to be much more expansive and comprehensive than even Smith proposes. It might mean rethinking, rather than simply attempting to secure, some of what Smith calls our “core values”. It might require asking, for instance, whether our democracies have ever really been democratic, whether our societies have ever really pursued equality, and whether we can continue to place our faith in the so-called “free market”.
Asking these kinds of questions certainly wouldn’t amount to an apocalypse, but it could amount to a revolution.
Charles Barbour, Associate Professor, Philosophy, Western Sydney University
This article is republished from The Conversation under a Creative Commons license. Read the original article.