Learn how to use Quotient Detections to automatically capture hallucinations and evaluate document relevancy.
### Parameters

- `log_id` (string): The log ID of the log you want to poll for detections.
- `timeout` (int): The maximum time to wait for a response in seconds. Defaults to 300.
- `poll_interval` (float): The interval between checks in seconds. Defaults to 2.0.

### Returns

`Log` (object): A `Log` object containing the following fields:
- `id` (string): Unique identifier for the log entry.
- `app_name` (string): Name of the application that generated the log.
- `environment` (string): Environment where the log was generated (e.g., “dev”, “prod”).
- `detections` (array): List of detection types that were configured for this log.
- `detection_sample_rate` (float): Sample rate used for detections on this log.
- `user_query` (string): The original user query or prompt that was logged.
- `model_output` (string): The model’s response that was logged.
- `documents` (array): List of documents used as context for the model. Can be strings or `LogDocument` objects.
- `message_history` (array): Previous messages in the conversation, following the OpenAI message format.
- `instructions` (array): List of instructions provided to the model.
- `tags` (object): Dictionary of tags associated with the log entry.
- `created_at` (datetime): Timestamp when the log was created.
- `status` (string): Current status of the log entry.
- `has_hallucination` (boolean): Whether the model output was detected to contain hallucinations.
- `doc_relevancy_average` (float): Average relevancy score for the documents provided.
- `updated_at` (datetime): Timestamp when the log was last updated.
- `evaluations` (array): List of evaluation results for the log entry.
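Below is a minimal polling sketch. The parameters match the list above; the import path, client class, and initialization shown here are assumptions, so adjust them to your SDK setup.

```python
from quotientai import QuotientAI  # assumed import path and client class

# Assumes credentials are already configured for the client (e.g., via environment).
quotient = QuotientAI()

# Block until detections for an existing log are ready, checking every
# 2 seconds for up to 5 minutes (the documented defaults).
log = quotient.poll_for_detections(
    log_id="your-log-id",
    timeout=300,
    poll_interval=2.0,
)

# Fields documented on the returned Log object.
print(log.status)
print(log.has_hallucination)
print(log.doc_relevancy_average)
```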
## What is an Extrinsic Hallucination?

An extrinsic hallucination is a claim in the model output that is not supported by the context the model was given: the `user_query` (what the user asked), the `documents` (retrieved evidence), and the `message_history` (prior turns in the conversation). A log is marked with `has_hallucination` when the `model_output` contains statements that cannot be verified against the `documents`, `message_history`, or `user_query`.
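Because the detection compares the `model_output` against exactly these context fields, they need to be attached when the log is created. The sketch below shows one way that context might be assembled; the `quotient.log` call and its keyword names are assumptions inferred from the `Log` fields documented above, not a confirmed signature.

```python
# Reuses the `quotient` client initialized in the previous sketch.

# Context the extrinsic-hallucination detection checks the model output against.
user_query = "What is the refund window for annual plans?"

# documents: plain strings here; LogDocument objects are also accepted
# per the field description above.
documents = [
    "Annual plans can be refunded within 30 days of purchase.",
    "Monthly plans are non-refundable after the first 7 days.",
]

# message_history: prior turns, in the OpenAI message format.
message_history = [
    {"role": "user", "content": "Hi, I have a question about refunds."},
    {"role": "assistant", "content": "Sure, what would you like to know?"},
]

# Assumed logging call: keyword names mirror the Log fields listed above.
log_id = quotient.log(
    user_query=user_query,
    model_output="Annual plans can be refunded within 30 days.",
    documents=documents,
    message_history=message_history,
)
```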
## What is Document Relevancy?

Each document attached to a log is evaluated against the query and labeled as either relevant or irrelevant. The `doc_relevancy_average` is then computed as `relevant_documents / total_documents`.
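To make the ratio concrete, here is a small sketch of the same arithmetic; the per-document labels are invented purely for illustration.

```python
# Hypothetical relevancy labels for the documents attached to one log.
labels = ["relevant", "irrelevant", "relevant", "relevant"]

relevant_documents = sum(1 for label in labels if label == "relevant")
total_documents = len(labels)

# doc_relevancy_average = relevant_documents / total_documents
doc_relevancy_average = relevant_documents / total_documents
print(doc_relevancy_average)  # 0.75
```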