
As anyone with an internet connection knows, Artificial Intelligence (AI) has arrived and it’s here to stay. With the rapid advancement of computing technologies, many employers, insurers, and attorneys are exploring AI’s potential to streamline workflows and reduce costs. There is no doubt that AI offers significant benefits in the legal world—including in workers’ compensation. Workers’ compensation practice is known for its massive datasets, frequent use of standardized forms, constant communication, and (presumably) a relatively predictable legal trajectory. These characteristics make the field a natural candidate for automation, and AI appears to be a welcome gift to the overworked adjuster or time-crunched attorney.
However, it is critical to assess the limitations and risks of AI that are often overlooked in the optimistic rush to implement the shiniest new tools. In this article, we briefly discuss the common, and potentially disastrous, downsides of AI in the workers’ compensation realm. This is not to say AI should be avoided entirely; it can provide considerable efficiency and time savings to its users. Rather, we hope to offer a reminder that even the most useful tools have limitations and risks, and that in the face of such a powerful technology we should tread cautiously. All that glitters is not gold.
First, while the potential of AI is vast, the technology is still in its infancy. There is much work to be done before AI is capable of handling the legally nuanced, detail-oriented tasks of an attorney. Workers’ compensation law undoubtedly relies on interpreting vast data sets and complex case law, and AI is admittedly impressive at ingesting data and outputting predictions and recommendations. But while helpful for spotting trends or forecasting case outcomes, AI lacks the ability to grasp the case-specific and interpersonal nuances that might sway a judge. Workers’ compensation cases often hinge on unique medical facts, jurisdictional interpretations, or credibility assessments that AI cannot evaluate accurately.
As noted above, AI ingests large amounts of data and produces predictive text and analysis. However, even the creators of AI programs do not know exactly how a model arrives at its results: an AI’s ‘thought process’ is hidden and cannot be reproduced for scrutiny. And because AI is, at bottom, a prediction machine, it will fabricate information in an effort to respond to a prompt. AI always produces an answer, but not necessarily the answer.
Thus, most frighteningly, AI frequently provides correct-sounding but ultimately false or misleading responses, a phenomenon known as “hallucination.” Without the ability to see how or why the AI arrived at an answer, a human reviewing the response may assume the program is drawing on accurate information when, in reality, it may be making something up entirely. Examples of such hallucinations abound (famously, Google’s AI search feature told users to eat rocks to improve their health and suggested glue as a delightful pizza topping). It is essential to remember that responsibility for legal decisions remains with the human lawyer, not the AI system. Overreliance on AI could produce decisions based on faulty or biased data, with potentially unethical or illegal results. Defense attorneys should treat AI as a tool rather than a decision-maker, ensuring thorough oversight and professional judgment are applied.
Next, it bears repeating that workers’ compensation law deals in immensely personal medical and employment data, so data privacy and confidentiality concerns are paramount. AI tools generally require access to large amounts of data to “learn” and produce accurate results. Yet incorporating employee health records, employment records, or other sensitive information into an AI system creates significant risk if not carefully managed. Not only could a data breach expose confidential information, but there is also the possibility of violating privacy laws, which vary by jurisdiction and often carry heavy penalties.
Last, but not least, AI algorithms are not human. While this point is obvious, it is worth keeping at the front of one’s mind. In the race toward industrial efficiency, the first thing lost is humanity. Clients want legal advice, of course, but they also want a person to hear their story. We must not forget the human elements of the attorney-client relationship: compassion, understanding, and empathy. AI cannot sense the subtle twinge of dishonesty during a deposition or lend a sympathetic ear to a claimant deciding whether to accept a settlement offer. Without these, we risk losing an essential aspect of workers’ compensation claims: the fact that there is a human on the other side of the claim.
While AI can be a valuable resource for employers and insurers, it must be used judiciously. For those defending workers’ compensation claims, an awareness of AI’s limitations and a commitment to ethical, informed oversight are critical to its successful and compliant use.