Many organizations are scrambling to upgrade their products and services with generative AI features. That requires a large language model (LLM). What should you and your CIO consider when selecting an LLM and supporting software? Here are some ideas.
LLM vendor selection
Most organizations will license a vendor LLM and supporting software to upgrade their products and services rather than build their own LLM and supporting software.
Organizations can evaluate the following criteria to mitigate the risk of selecting an inappropriate LLM vendor. However, many of these vendors are quite new organizations with little track record. That creates difficult-to-mitigate vendor risks.
Vendor evaluation
Project teams can thoroughly assess potential LLM vendors and their software development practices before engaging with them to reduce the risk of contracting with an inadequate vendor. Evaluate each vendor’s:
- Track record with other clients.
- Software architecture for cybersecurity defenses.
- Company financial viability.
- Software defect management process.
- Cybersecurity defenses.
- Data handling procedures with an emphasis on deletion.
- Recent audits or assessments.
Given the newness of many potential LLM vendors, investing in a contingency plan may be prudent in case the selected LLM vendor experiences a terminal event.
Data usage agreement
Project teams can establish a comprehensive data usage agreement with the successful LLM vendor to reduce the risk of a vendor-caused data breach. The agreement should:
- Define the rights and responsibilities regarding data access, storage and protection.
- Define and test the incident response and recovery processes.
- Ensure the agreements align with your organization’s data privacy policies and regulatory requirements.
Secure data transmission
Project teams can implement secure channels for transmitting data to and from the LLM vendor to reduce the risk of data loss. Utilize encryption protocols, secure file transfer methods and data loss prevention mechanisms to safeguard data during transit.
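As one concrete measure, the team can require that every connection to the vendor verifies certificates and refuses outdated protocol versions. A minimal sketch using Python’s standard ssl module (the function name is illustrative):

```python
import ssl

def make_strict_tls_context() -> ssl.SSLContext:
    """Build a TLS context for vendor API calls that verifies
    certificates and refuses protocols older than TLS 1.2."""
    ctx = ssl.create_default_context()  # enables certificate and hostname checks
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject weaker protocol versions
    return ctx

ctx = make_strict_tls_context()
```

Passing a context like this to the HTTP client ensures data in transit cannot silently fall back to an unverified or legacy connection.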
LLM software selection
Organizations can evaluate the following topics to mitigate the risk of selecting inappropriate LLM software. However, most LLM software will have few customers and little track record. That creates difficult-to-mitigate software risks.
Functionality selection
Project teams can thoroughly evaluate the functionality of shortlisted LLMs and related software to reduce the risk of contracting for an inadequate or inappropriate LLM. Evaluation criteria to compare LLMs can include:
- References from other customers.
- Reviews on various websites.
- Available vendor support.
- Helpfulness of the customer community.
- Accuracy of the results generated.
- Speed or inference time to display results.
- Accuracy of grammar in results.
- Readability of results.
- Context length or limitations on prompt and results length.
- Model size – smaller tends to be faster, while larger is more precise.
- Adaptability to your domain and planned tasks.
- Quality and diversity of the training data.
- Indicators of bias in results.
- Bias detection and mitigation features.
- Explainability and availability of sources for the inferences.
- Guardrails for safety and responsibility.
- Degree of context understanding.
- Frequency of updates with recent information.
- Operating cost.
- Level of detail and structure required in prompts to produce useful results.
Project teams will produce more comparable and objective evaluation results when completing a detailed LLM questionnaire rather than relying on general impressions.
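One way to make questionnaire answers comparable is to rate each criterion numerically and combine the ratings into a weighted score. A small sketch, where the criteria names, weights and ratings are purely hypothetical:

```python
def weighted_score(ratings: dict, weights: dict) -> float:
    """Combine per-criterion ratings (1-5) into one weighted score."""
    total_weight = sum(weights.values())
    return sum(ratings[c] * w for c, w in weights.items()) / total_weight

# Hypothetical weights reflecting this organization's priorities.
weights = {"accuracy": 0.4, "speed": 0.2, "cost": 0.2, "support": 0.2}

# Hypothetical questionnaire ratings for two shortlisted LLMs.
model_a = {"accuracy": 4, "speed": 3, "cost": 5, "support": 4}
model_b = {"accuracy": 5, "speed": 4, "cost": 2, "support": 3}
```

The weights force the team to agree up front on which criteria matter most, so the final comparison reflects priorities rather than general impressions.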
Data anonymization and minimization
Project teams can evaluate software functionality to anonymize or minimize the amount of sensitive data shared with the LLM whenever possible. Reduce the risk of a privacy breach further by:
- Limiting the model’s access to personally identifiable information (PII).
- Using anonymized or synthetic datasets for software testing and staff training.
- Deleting virtual machines that are no longer needed.
- Suspending virtual machines that are currently idle but will be required again soon.
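A simple form of anonymization is scrubbing obvious PII from text before it reaches the LLM. A minimal sketch using Python’s standard re module; the patterns shown catch only common email and North American phone formats and would need extension for production use:

```python
import re

# Deliberately simple patterns -- real PII detection needs broader coverage.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def scrub(text: str) -> str:
    """Replace emails and phone numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```

Running prompts through a scrubber like this before transmission limits what sensitive data the vendor ever receives.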
Software stability risks
Expect that the LLM vendor’s software is brand new and has not been tested rigorously. The paint is likely still drying. Vendors are working overtime to add functionality to their products as LLMs advance rapidly. To mitigate the risks of basing your project on unstable software, the project team should:
- Budget to test software thoroughly.
- Expect to install multiple releases of software during the course of the project.
- Monitor the vendor’s software release notes regularly.
- Ensure that the team can roll back software to a previous version.
- Only promote software from test to production when the end-user acceptance team is satisfied that it works reliably.
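Reliable rollback depends on knowing exactly which versions are deployed. Pinning exact dependency versions, as in this illustrative requirements file (the package names are hypothetical, not real vendor SDKs), lets the team reinstall a known-good release on demand:

```
# requirements.txt -- pin exact versions so a bad release can be rolled back
vendor-llm-sdk==1.4.2
vendor-llm-guardrails==0.9.1
```

Committing the pinned file alongside each release gives the team a precise record of what was running when.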
Software customization risks
Don’t customize LLM software. It’s expensive and problem-prone. The biggest cost is re-applying the customizations for each new software version the vendor provides. This risk can be addressed by:
- Ensuring that the project team develops a comprehensive list of selection criteria to evaluate software packages. This list mitigates the risk of choosing software that won’t fit the requirements.
- Including a statement in the project charter that the organization will adopt the business processes implicit in selected software packages.
- Including a statement in the project charter that the project team will not customize software packages.
- Participating in software vendors’ customer advisory groups to propose new functionality your organization needs.
Do not confuse configuring software with customizing software. Configuring software is about setting values for variables the software package offers to tailor its operation. Customizing software is about writing and integrating new source code into the software package.
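The distinction can be made concrete with a settings file. Tailoring values like these (the parameter names are hypothetical) is configuration; editing the package’s source code to add new behavior would be customization:

```python
import json

# Hypothetical settings a vendor package might expose for configuration.
# Changing these values is configuration -- no source code is modified.
settings_text = """
{
  "max_prompt_tokens": 4096,
  "temperature": 0.2,
  "log_level": "INFO"
}
"""
settings = json.loads(settings_text)
```

Because configuration lives outside the vendor’s code, it survives each new software release without the re-application costs that customizations incur.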
Project teams can deliver successful AI projects by choosing the best-fit LLM and mitigating project risks.
What ideas can you contribute to help organizations select the right LLM? We’d love to hear your opinion. You can share your thoughts in the comments below.