From Textualism to AI, iLegi§ Conference Examines Emerging Issues in Legislative Drafting
November 07, 2022
Legal experts from around the world gathered at the D.C. Bar on November 3 and 4 for iLegi§ 2022, the seventh International Conference on Legislation and Law Reform, presented by the Federal Bar Association. The event featured panel discussions on a wide array of issues facing lawyers and legislators in their efforts to construct legal systems that are both fair and durable, as well as sessions on emerging opportunities and challenges posed by widespread changes in language, culture, and technology.
Joseph Kimble, distinguished professor emeritus at Western Michigan University Cooley Law School, appeared virtually to lead the session “The Courts’ Overuse and Misuse of Dictionaries.” Kimble has spent much of his career teaching legal writing and has published several texts on the importance of using plain language in law, business, and government. During the program, Kimble cautioned against the increasing use of laypersons’ desk references in drafting and interpreting legislation.
Kimble said the use of lay dictionaries has become a problem due to the growing dominance of textualism, the legal model that centers the interpretation of laws on the meaning of the words used in the legislation at the time of drafting. This philosophy, popularized by Justice Antonin Scalia, has resulted in the overuse and misuse of dictionaries, according to Kimble.
“I’m not a fan of textualism as it is practiced,” said Kimble, founding director of the Center for Plain Language. “I think that, as practiced, it has produced, almost invariably, ideologically conservative results in high-profile cases.”
He went on to characterize the use of dictionaries in legal decisions as formalist structures deployed to create the cover of objectivity for subjective and politically motivated outcomes. “Textualism is the brand name for ideologically conservative ideology,” he said.
Kimble’s study of Michigan Supreme Court decisions issued since 1845 found a dramatic increase in citations of lay dictionaries beginning in the 1980s and peaking in the period 2005–2014, when 40 percent of decisions included a dictionary definition, compared with 1 percent or fewer in pre-1980 decisions.
Kimble also offered three examples of Michigan court decisions where the outcome was determined by which dictionary was consulted. In each instance the majority cited one dictionary definition, while the dissenting opinion reached the opposite outcome by relying on another.
Kimble said lexicographers and scholars are critical of the use of lay dictionaries in court decisions because dictionaries vary significantly and serve different purposes. According to Kimble, drafters of dictionaries fall into one of two categories: “lumpers” (those who seek to include as many variant meanings as possible for a single word) and “splitters” (those who angle for precision).
These differing philosophies mean that definitions may vary significantly, and even within a single dictionary multiple definitions may exist for the same or similar words. The result, according to Kimble, is arbitrary and unsystematic. Definitions are cherry-picked to reach the conclusion desired by the authority. “Judges can find any definition they want in the dictionary superstore,” he said.
Much newer technology was the focus of a session conducted by Cathy Pagano, board member of the Women’s Bar Association, and Kelsey Kober, senior manager, public sector, of the Information Technology Industry Council. In “How Can We Legislate Algorithms? Lessons Learned at the State and Local Levels,” the pair provided an overview of recent efforts to legislate the use of artificial intelligence (AI), automated decision systems, and other advanced technologies.
The area is one of growing interest for legislators. “Nineteen bills have been introduced in the U.S. Congress this session that mention algorithms in some fashion,” Pagano said, citing the Algorithmic Accountability Act of 2022 as one significant example. This bill would direct the Federal Trade Commission to require impact assessments of automated decision systems and augmented critical decision processes.
However, no comprehensive legislative or regulatory framework has been implemented on a national level. In its absence, state and local authorities have begun to propose solutions. “A lot of time, in the policy world, states will seek to make legislative progress in areas where they perceive the federal government to have stalled,” Kober said.
Alabama, Colorado, Illinois, and Vermont have all passed laws establishing some sort of oversight board or regulatory mechanism to evaluate the use of AI and automated decision making. All but Vermont have also passed legislation limiting the use of AI. Several states and localities, including California, Washington state, Baltimore, the District of Columbia, and New York City, have proposed legislation that would regulate the use of automated decision making, said Kober, whose organization worked with New York to establish a task force addressing the issue.
Kober proposed three categories for prioritizing the oversight of AI: (1) situations in which the use of algorithms is presumptively appropriate, (2) scenarios that require regulatory involvement and oversight, and (3) algorithms and automated decision systems that are presumptively inappropriate. “I think many people would agree that an algorithm alone should not decide whether someone gets a mortgage,” Kober said, pointing to the inherent risk of bias in AI systems.
Standards organizations will inevitably play a role in regulating the use of algorithms. The rapidly changing nature of technology and the continuing emergence of new use cases will require ongoing monitoring and adjustment, Kober said.
Other matters go beyond the legislative purview. Diversity, equity, and inclusion issues require the involvement of more diverse programming teams, Kober said. “The workforce is overwhelmingly white and male right now,” she said.
Scholarships, mentorships, and other programs that increase the diversity of tech sector workforces and STEM industries will provide a longer-term solution for bias issues. “I don’t think you can separate workforce diversity issues from the broader tech policy discussion,” Kober said.