Issue 6, Semester 1, 2019
MICHAEL FRANZ

At some point during the next thousand words or so I’m going to start sounding like a crazy person, so I may as well rip that band-aid off right now. Sometime in the near future a computer will be doing your job. To be clear, what I mean by this is not just that increasing automation of services in the legal profession will attenuate demand in the graduate job market. I mean quite literally that computers will drive the human lawyer to extinction. I can feel the wave of eye-rolling that claim just prompted from everybody who has done Commercial Cyberspace. I get that response; so often these arguments boil down to the kind of unsophisticated futurism effused by Kurzweilian fanboys. But if you can momentarily suppress that cynicism, you need only grant two very plausible premises to recognise what is coming down the pipeline. Firstly, that we will continue to make better computers. Secondly, that there is nothing magical about the fuse box inside human skulls.
Now, in the past thirty years we’ve seen automation set fire to dozens of industries. In law firms, simple Boolean artificial intelligence (AI) programs are already performing a whole swathe of previously human tasks, from legal triage and research to case management and billing. The conventional wisdom is that automation will merely absorb unnecessary cognitive busywork, freeing up our mental overhead for the ‘higher-level’ lawyering that only humans can do. Whilst this might drive down the manpower demands of legal work on a per-unit basis, it will also decrease price barriers, opening up the market to new customers and applications. As long as the demand for legal services remains somewhat price-elastic, and the market doesn’t bottom out before we hit an efficiency plateau, the end result will be a more accessible legal system with more lawyers, who now get to focus on more interesting work. Hooray. Unfortunately, this vision assumes that organic nervous systems will retain a perpetual advantage over microprocessors in at least some specific areas. What warrants this? I had the opportunity recently to pose this question to the head of innovation at a mid-sized law firm, who responded with a handwave – that lawyers are in the business of selling human brains, and there will always be a need for human brains to sell. This kind of commonplace refrain is blindly optimistic at best, and blatantly obtuse at worst. We aren’t selling brains, we’re selling thoughts and decisions. Grey matter is merely the current iteration of the product, and one which is approaching a senescent stage in its life cycle. The real threat to human workers isn’t simple ‘narrow’ AI, which can perform specific tasks, but the advent of ‘general’ AI – systems that can independently identify and solve problems without specific rules on how to do so.
There are dozens of examples of these systems already, such as GPT-2, a language modelling system which can perform reading comprehension, world-building, and write passably human prose without task-specific training. After being ‘taught’ with 40GB of source text curated from Reddit links, this system can, in response to one-sentence prompts, produce wholly original prose in a variety of novel and unfamiliar formats, including SAT-level essays about the evils of recycling, news reports about unicorns, re-election acceptance speeches of a cybernetically resurrected JFK, and Lord of the Rings fan-fiction (readers of the print edition, please check the online article for the hyperlinks – I promise they are worth it). Although these systems are still crude, the proof-of-concept is on the table – there is nothing special about the information processing performed by wetware. Once computers can reliably replicate general rather than specific task proficiency, coupled with the speed advantages of circuitry over neurons, what possible role does a human have in any part of the legal process? Imagine a future where a computer can take a contract dispute, read all the documents and correspondence, research every relevant judgment, and write a brief, all in a matter of seconds. Even judges may not be safe. Second and third years may recall from Legal Theory Ronald Dworkin’s hypothetical super-judge Hercules, who can, with preternatural mental stamina and bandwidth, read every previous judgment on a legal issue and apply the most exquisitely fine-tuned reasoning to determine the appropriate application of principles to new cases. Well, imagine instead AI systems that can read and analyse the entirety of recorded law, sort it according to precedential and argumentative weight, aggregate its conclusions alongside some additional variables such as social utility, and test every possible permutation of legal arguments against a set of facts.
Wouldn’t continuing to use fallible and error-prone human minds instead of this mechanical Hercules be grossly irresponsible? And if you think the process I describe sounds too far-fetched, you are simply guilty of a failure of imagination, given the trajectory of history. I understand if you still find these concerns reflexively absurd. I admit, it takes so little time spent away from these thoughts before, upon returning to them, they seem like the ravings of a conspiracy nut – and I’m only trying to extend the horizon of your concern to your future employment, to say nothing of a world where humans are hopelessly outclassed by machines in every intellectual domain. But the axioms that underpin this argument are just so hard to deny. You effectively have to say something akin to ‘computers can do x, but will never be able to do y, which is what being a lawyer is really all about.’ Now, whatever y-variable you’re tempted to plug into that platitude, ask yourself how resistant to replication it really is. Consider its equivalence to something that a few years ago we would have thought was paradigmatically human, like the writing of beautiful poetry. Yet the aforementioned GPT-2 system, when prompted with a few stanzas of Alexander Pope, taught itself to write the following:

Methinks I see her in her blissful dreams:
—Or, fancy-like, in some majestic cell,
Where lordly seraphs strew their balmy streams,
On the still night, or in their golden shell.

Admit it: neither of us could write that. We’re all talking about the efficiency gains of automation, whilst ignoring the creeping extinction of the white-collar worker. We’re standing on the deck of the Titanic, looking at the tip of the approaching iceberg, and remarking about how nice it’ll be to have some more ice to put in our drinks.

Michael is a Second Year JD Student and the Managing Editor of De Minimis 2019.
Comments

Ray Kurz
9/4/2019 04:30:06 pm
Fantastic, important article.

AI is so so so overrated
9/4/2019 05:57:36 pm
Anyone who thinks AI systems are going to take our jobs: check out the company OpenAI and take a look at some of the ‘miracle’ breakthroughs in AI that have recently taken place.

Failure of Imagination
9/4/2019 06:05:20 pm
But isn’t that just denying the author’s point? AI looks clunky at the moment, and you might be unimpressed with so-called ‘breakthroughs’, but you can easily see the pathway from where it is now to where it could be, given more time and research in computer science – which is pretty obviously going to keep happening.

Hugh Mann
9/4/2019 07:12:39 pm
Machines will never be able to understand the vibe.