LLM best practices
AI Hub relies on large language models (LLMs) to understand and process documents. When you use AI Hub, be aware of these inherent risks associated with LLMs:
- Information reliability: LLMs can fabricate content that sounds plausible but is nonsensical or inaccurate. Even when a response draws on trusted sources, it might misrepresent that content.
- Quality of service: LLMs are trained primarily on English text and on images with English text descriptions, so performance is lower on content in other languages.
- Allocation: LLMs can be used in ways that lead to unfair allocation of resources or opportunities. For example, automated resume screening systems can withhold employment opportunities from one gender if the systems are trained on resume data that reflects the existing gender imbalance in a particular industry.
- Stereotyping: LLMs can reinforce stereotypes. For example, when translating “He is a nurse” and “She is a doctor” into a genderless language such as Turkish, and then translating the Turkish back into English, many machine translation systems yield the stereotypical (and incorrect) results of “She is a nurse” and “He is a doctor.”
With these risks in mind, take precautions to minimize their impact, and rely on human review by licensed professionals, particularly in these scenarios:
- High-stakes domains or industries, including healthcare, medicine, finance, and law.
- When use or misuse of the system could result in physical or psychological injury to an individual. For example, scenarios that diagnose patients or prescribe medications have the potential to cause significant harm.
- When use or misuse of the system could have consequential impacts on life opportunities or legal status. Examples include scenarios where the AI system could affect an individual’s legal status or legal rights, or could affect their access to credit, education, employment, healthcare, housing, insurance, social welfare benefits, services, or opportunities, or the terms on which such access is provided.