Making AI Work for You: A Guide to Practical & Ethical Use


By Erich Dylus

Some lawyers may still be reluctant to use artificial intelligence at work, but ignoring it could undermine their ability to competently represent clients.

Using AI in your practice may feel daunting, complicated, or unethical, but it doesn’t have to be. With a basic understanding of the technology’s strengths and weaknesses, you can use generative AI large language models like ChatGPT or Claude as tireless, genius “AInterns” that augment your work.

This article seeks to (1) clarify practical, actionable ways to put AI to work; (2) explain straightforward prompting strategies for better results; and (3) discuss how and why to consistently prioritize privacy. For an adaptable and impactful AI strategy, start with high-upside, low-downside subjective use cases, invest time in effective prompts and source review, and control what data leaves your device. 

Practical Purposes

Overcomplicated AI integrations can cause professionals to lose sight of what makes generative AI special — the ability to produce new content by combining vast bases of knowledge with reasoning-like processing. First, one must recognize that this groundbreaking technology is based on contextual, data-driven predictions about what word should come next in a humanlike response. 

Generative AI is probabilistic (providing different outcomes for the same input), not deterministic (giving predictable and consistent results). It is more like a restaurant than a vending machine, or more like a search engine than a calculator. Understanding this is imperative. Professionals cannot rely on an AI model to be fully predictable and accurate. So why are some so effusive about AI if it’s imperfect? 

Think of generative AI as a boundless source of brilliant ideas and suggestions rather than omniscient truth — a collaborator offering enhanced capabilities and services, not your replacement. Think of AI as a genius intern.

It’s better to start an AI intern on low-stakes tasks that benefit from shrewd creativity (issue-spotting, analysis, strategy critique, early drafting) rather than mission-critical matters. Give AI models precise instructions, instruct them to ask any clarifying questions before proceeding, and require them to provide verifiable citations for their sources. AI does not have personalized expertise and may be overly eager to please, sometimes to the point of stretching interpretations or hallucinating sources. Professional supervision and guidance are key.

Testing this technology with personal or nonbillable work like background research, article writing, and marketing or branding can help overcome initial reluctance. Pitching AI purely as a route to “efficient workflows” can dissuade newcomers who are (justifiably) skeptical of replacing traditional processes with blindly trusted software. By design, AI does not guarantee perfectly correct answers, and legal ethics require human review to combat hallucination, misinterpretation, and gullibility (just as you would review the work of an intern or junior associate).

For this reason, specialized legal AI services should be met with skepticism if they promise to replace legal workflows entirely. Many such services simply leverage the leading general-purpose models on the backend and differentiate themselves through bespoke work (in-house data training, custom deterministic software automations, or proprietary IP, as with LexisNexis and Westlaw). Legal AI products often use structured prompt frameworks hidden from the user (specific instructions for the AI model that attach to each user prompt, known as “system prompts”) to automate multistep tasks and produce more refined results, but they still require output review and a close eye on the data the system can access.
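To illustrate, here is a minimal sketch of how a system prompt silently attaches to each user prompt. It assumes the OpenAI Python client; the model name and instruction wording are placeholders, not any vendor’s actual configuration.

    # Minimal sketch of a "system prompt" wrapper, assuming the OpenAI
    # Python client (pip install openai). Model name and instructions
    # are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "You are a legal research assistant. Cite a verifiable source for "
        "every factual claim and ask clarifying questions before answering "
        "ambiguous requests."
    )

    def ask(user_prompt: str) -> str:
        # The system message is prepended to every user prompt without the
        # user seeing it, the same pattern many legal AI products use to
        # refine general-purpose models.
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; substitute your provider's model
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_prompt},
            ],
        )
        return response.choices[0].message.content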

Common hiccups in legal AI implementation include frustrating initial setup and imprecise early answers. You may also encounter overly complex services that are simply more confusing than general-purpose models with a familiar chat interface (which employees seeking a simpler experience might quietly use instead). These problems can stem from misunderstandings of AI’s strengths and weaknesses, overpromising vendors, and poor prompting or workflows. General-purpose models can bring massive improvements to your work at low expense. While precise outputs cannot be promised, proper prompting improves performance.

Performant Prompting

Humans cannot match the response time or stamina of AI models, which draw on a massive dataset to provide highly sophisticated, diversely cited second opinions, critiques, and improvements nearly instantaneously. AI models do not mind responding to the same question repeatedly, debating other models, or researching 24/7. To effectively communicate your intentions and maximize output quality, your prompt needs structure and content-rich directives. A simple formula for effective prompts is as follows:

Role + Task + Instructions + Context

Role provides a quick setting and positioning for the model (“You’re an expert ___ at ___.”). Specify style and audience to dictate formality and motivation. If you want to reflect a style or voice in your prompt (such as by attaching a sample document), tell the AI model!

Task lays out what the output should accomplish (“Your task is to review the attached ___ and provide ___. Ensure any research you perform is cited.”). Reviewing output and verifying sources are duties that will not disappear; the model can help you (“provide a link to the statute/document”), but you remain ultimately responsible for checking the sources.

Instructions more precisely configure the task (“Your response should ___ and must not ___. Ask any necessary clarifying questions before you proceed.”). Asking the model to pose clarifying questions up front reduces misinterpretations (and therefore divergent responses) and further hones the instructions.

Context provides priority source material and further parameterizes the output (“Attached and pasted below are ___ and ___, which you must ___.”). There is no need to stress over input formatting; models strip text into tokens (units of text that may be a few characters or a partial word) and use an agnostic “language of thought” in processing.
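To make the formula concrete, here is a minimal sketch of a reusable prompt template in Python; the field wording is illustrative, not a prescribed format.

    # Minimal sketch: assembling a prompt from the Role + Task +
    # Instructions + Context formula. All wording is illustrative.
    def build_prompt(role: str, task: str, instructions: str, context: str) -> str:
        return (
            f"You are {role}.\n\n"
            f"Your task is to {task}. Ensure any research you perform is cited.\n\n"
            f"{instructions} Ask any necessary clarifying questions "
            "before you proceed.\n\n"
            f"Context:\n{context}"
        )

    prompt = build_prompt(
        role="an expert commercial contracts attorney at a boutique firm",
        task="review the clause below and flag any one-sided terms",
        instructions="Your response should use a numbered list and must not offer business advice.",
        context="[paste the clause text here]",
    )

Saved as a template, the same function can anchor a library of standardized prompts for recurring use cases.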

Here are some additional prompting tips:

  • Start strong. If you receive a clearly wrong or unexpected output, start over with a better initial prompt rather than trying to fix it and creating layered, latent misunderstandings that will cause increasingly divergent responses.
     
  • Save useful prompt templates. Use a naming convention to build an organized library of standardized prompts tailored to specific recurring use cases. Try having a model write or optimize prompts for you.
     
  • Require cited sources and check them manually. Combatting hallucinations and misinterpretations as a responsible attorney is a duty that will not disappear; use the technology to help you (“point me to your source”) rather than replace you. For those worried about training data and response quality deteriorating, the marketplace suggests otherwise: curated, professionally vetted datasets are garnering massive investment, with AI providers as the largest consumers.
     
  • Future-proof the basics in your workflow. AI services and models will change, but basic principles will persevere. Local AI models (stored entirely on your computer or your firm’s servers, rather than hosted by a third party) will become more practical over time, but prompting skills will remain important; a minimal sketch of a local model call follows this list.
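As a rough illustration of the local option, the sketch below runs a model entirely on-device using the llama-cpp-python package. The model file path is a placeholder, and this is just one of several ways to run a local model.

    # Rough sketch of a fully local model call, assuming the
    # llama-cpp-python package (pip install llama-cpp-python) and a GGUF
    # model file already downloaded. The path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/example-model.gguf")

    # The prompt never leaves this machine: no provider logs, no
    # third-party retention, no training on your data.
    output = llm(
        "Summarize the key risks of an unlimited indemnification clause.",
        max_tokens=512,
    )
    print(output["choices"][0]["text"])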

Prioritize Privacy

Assume the worst: AI providers are big tech companies with unprecedented power to extrapolate, infer, and preserve details (true or not) from users and their prompts. Whether or not AI providers route user data to train their models, logging of prompts, user information, and metadata is common. Further, even if hosts truly do not retain any data, there is always the risk of exploits in which a third party gains unauthorized access and resets user privacy settings, as well as the risk that ancillary services or tools collect data.

Constantly monitoring the privacy policies of an AI system and all its dependencies is impractical, and privacy services hosted by third parties can be vulnerable to many of the same exploit vectors. So, what’s the solution? Users can direct their efforts toward three primary concerns when it comes to privacy: prompt material, logged data, and model-absorbed data.

  • Prompt material. Personally identifiable information (PII) and other private, privileged, or otherwise sensitive text should never be provided to AI systems, which should be treated as adverse counterparties. Anonymizing or redacting files and removing metadata first can substantially mitigate these risks, and can be accomplished locally through software on each user’s computer or the firm’s internal server (see the sketch after this list).
     
  • Logs. Prompts, metadata, IP addresses, user information, and other details are extensively logged by AI vendors and provider websites, servers, and models. Logs are retained for “optimization” and “safety” purposes for periods ranging from 30 days to two years. Some even require periodic manual deletion by the user rather than auto-removal. In addition to prompt cleansing, VPNs and other security software can assist with minimizing logs.
     
  • Model absorption. User-provided data usually bestows some benefit to the provider, especially for free services. No-cost AI services often leverage user data to train their models, whether for quality improvement, collection of niche data, or aggregation and extrapolation of profiles and data on users or clients. Opt in to private modes and data deletion whenever possible and try local (on-device, offline) AI models if practical. 
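As a rough illustration of local prompt cleansing, the sketch below strips a few common PII patterns using only Python’s standard library before any text leaves the device. Dedicated redaction tools catch far more (names, addresses, document metadata); these patterns are illustrative only.

    # Illustrative only: a minimal local "prompt cleansing" pass using
    # Python's standard library. Real anonymization tools catch far more
    # than these simple patterns.
    import re

    PII_PATTERNS = {
        r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}": "[EMAIL]",      # email addresses
        r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b": "[PHONE]",  # US phone numbers
        r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",                # Social Security numbers
    }

    def cleanse(text: str) -> str:
        """Replace common PII patterns before text leaves the device."""
        for pattern, placeholder in PII_PATTERNS.items():
            text = re.sub(pattern, placeholder, text)
        return text

    safe = cleanse("Reach the client at jroe@example.com or 555-867-5309.")
    # -> "Reach the client at [EMAIL] or [PHONE]."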

However, even a fully on-premises model that accesses zero external data might still need prompt sanitization. Cleansing prompts helps prevent inappropriate organization-wide access where security authorizations, conflicts, or ethical walls apply, and keeps internally hosted AI logs and records from becoming honeypots for malicious actors. After all, selectively deleting data from any AI model, such as information from a terminated client relationship, is extremely complex if not practically impossible.

Of course, if local models have internet access, all of the log concerns apply to externally accessed websites and search engines (many of which are now automatically AI-integrated). 

For third-party-hosted AI services, the level of privacy (by mode default or user preferences) can change with a server-side update or company policy change, often without users’ awareness or consent. Providers can also be compelled by legislation, regulation, or judicial order to preserve data and logs despite their policies. (This has already happened; see the U.S. District Court for the Southern District of New York’s May 13, 2025, preservation order in In re: OpenAI, Inc.)

When evaluating legal AI vendors, consider asking:

  • What models are used? Are they off-the-shelf or trained/fine-tuned by your company (if the latter, what is the training set)? How often are the systems updated or replaced? Where are they located?
     
  • What is the vendor’s privacy configuration, and what is controlled by the user? What is used for training, what is logged, and what is the vendor’s retention and deletion mechanism? Is prior notice and consent required for changes to the privacy policy or configurations? 
     
  • Is user data end-to-end encrypted? What third-party dependencies does the vendor have, and what are their privacy configurations? 

Conclusion

Consistently prioritizing privacy protections and using structured, detailed prompts can quickly unlock quality improvements in your practice with general-purpose AI models at little out-of-pocket expense.

Start with tasks that benefit from creativity, invest time in prompting, don’t be afraid to start over or ask multiple models, and save performant prompt templates. Review AI responses closely and check citations. Most importantly, if PII and privileged information never leave your device, then your workflow will be less susceptible to downstream data privacy issues. 

The lawyerly abilities to ask good questions, be skeptical of responses, use the facts and resources at one’s disposal, and check the law are more important now than ever. Despite the risks, if you ignore the capability of generative AI, you might get outpaced by competitors using AInterns. 

You’re not getting replaced by AI — you’re getting superpowered. 

Erich Dylus is an attorney and programmer. He is the founder of Varia Law, a boutique law and consulting firm, and CamoText, a redaction and anonymization software company.
