Survey: Formatting Rules

Among benchmarking concerns, librarian smackdowns, and fines for hallucinations, I fear I sometimes fail to take advantage of ‘good enough’ solutions. I share this internal tool to demonstrate why perfect doesn’t have to be the enemy of good.

Need:

We use our client’s templates as a model for desired formatting, but we also like to be sure our automated templates conform to local rules. So, we threw together this quick 50-state survey. It’s not perfect, not at all. But it’s a great starting point and a tool we can refine as we continue to use it. It’s ‘good enough.’

The Build Process:

Step One Prompt: After testing a couple of prompts in CoCounsel, I found I had better results working directly in Westlaw’s AI Jurisdictional Surveys tool. I honestly couldn’t say why. Was it fresh prompts? Or is different tech running each tool? All I can say is that next time, I’ll likely start with the survey tool, not CoCounsel, which performs the exact same function.

Step Two Download: I wanted a table, but unfortunately Westlaw’s spreadsheet download only includes the rules and KeyCite information, not the AI text output. Too bad, but not a huge obstacle. I downloaded the Word file and pasted the text into a Google Doc. I’m not a Copilot user, though that might’ve worked. At any rate, I was headed toward Google’s Looker, so for this I used Google Workspace.

Step Three Convert: While in the Google Doc, I asked Gemini to convert the text to a table, with each state as a row and the subheadings as column headers. This did not work: the states mysteriously appeared as column headers. So, instead, I uploaded the document to Gemini Pro and got much better results with the same prompt. Again, why? And should I care? The fact is, I didn’t care and moved on to the next step.
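As an aside: if the pasted text is regular enough, this row-per-state conversion doesn’t need AI at all. Here’s a minimal, hypothetical sketch of the shape I was after; the field names ("Font," "Margins") are invented stand-ins for the survey’s actual subheadings.

```python
# Hypothetical sketch: parse text where each state appears as a heading
# followed by "Subheading: value" lines, and emit one row per state with
# the subheadings as columns (CSV, ready for Sheets/Looker).
import csv
import io

raw = """Alabama
Font: 14-point
Margins: 1 inch

Alaska
Font: 13-point
Margins: 1.5 inches
"""

rows = []
for block in raw.strip().split("\n\n"):   # blank line separates states
    lines = block.splitlines()
    row = {"State": lines[0]}             # first line is the state name
    for line in lines[1:]:
        key, _, value = line.partition(": ")
        row[key] = value
    rows.append(row)

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["State", "Font", "Margins"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

The deterministic route only works when the source text is perfectly consistent, which pasted Word output rarely is. That inconsistency is exactly why handing the job to Gemini was the ‘good enough’ move here.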

Step Four Visualize: Next, I added the sheet as a data source to a Looker project and began to format the report as desired. Two things occurred to me at this point. First, who cares about making it pretty? This is essentially an internal tool; let’s just find a formatting template that makes the report easy to read. Second, where are my citations? Westlaw’s table download deleted the text I wanted, and Gemini’s table conversion omitted the citations. Curses. I had to backtrack and ask for a column for the citations. After re-adding the data and applying a formatting template, I finished the project.

Time to Completion:

~60 minutes

Initial Conclusions:

Optimal Pathways: If I had known the optimal path ahead of time (i.e., the AI Survey tool rather than CoCounsel), this survey would’ve been a 30-minute project, not a 60-minute one. Plus, I could have shaved off more time by making sure I had the data I needed at each step along the way. I’m not complaining; sixty minutes was well worth the effort. But I have to wonder about the value of benchmarking in such a diverse and fluid marketplace.

Professional Bias: I seem to have some work to do to reframe my thinking around inputs and outputs. I was trained in Boolean, Key Numbers, keywords, and the like. A simple Boolean search in a case law database begins with a rules-based structure (dog within three words of bite, i.e., “dog /3 bite” in Westlaw syntax), and I’m comfortable reviewing those results for their substantive content only. With AI, my rules-based assumptions have to be challenged at each project stage. Am I getting what I need? The output is not strictly deterministic, even at the lowest temperatures. Seems silly to have to say out loud, but … there you have it.