Mask Prompt
Mask sensitive information in your prompts before sending them to a third-party LLM.
When to use
Use this method when your prompt is at risk of containing PII or other sensitive data. Our SDK will mask any sensitive information and return an updated, masked prompt. After receiving the response back from your LLM, use our provided unmasking function to restore your original data.
You can read about how it works here.
Usage
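A minimal end-to-end sketch of the flow described above. The package name, client setup, and model call are illustrative assumptions; only `maskPrompt` and the unmask function it returns are part of this SDK's documented surface.

```typescript
// Hypothetical import path; substitute your actual SDK package.
import { maskPrompt } from "masking-sdk";
import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const messages = [
    {
      role: "user" as const,
      content: "Email john.doe@example.com about invoice #4521.",
    },
  ];

  // maskPrompt resolves to [maskedMessages, unmaskResponse].
  const [maskedMessages, unmaskResponse] = await maskPrompt(messages);

  // Only the masked prompt ever reaches the third-party LLM.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: maskedMessages,
  });

  // Restore the original PII in the model's answer.
  const restored = unmaskResponse(completion);
  console.log(restored);
}

main();
```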
Function Parameters
`messages`: An array of objects, each representing a message in the LLM conversation chain.
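The exact message shape isn't pinned down here; below is a minimal sketch assuming an OpenAI-style `role`/`content` pair (check the SDK's exported types for the real definition):

```typescript
// Assumed message shape; the actual SDK type may differ.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

const messages: ChatMessage[] = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Draft a reply to jane@acme.com about order #9913." },
];
```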
Response
The `maskPrompt` method will return a `Promise` that resolves to an array with exactly two values:
1. The masked messages: an exact clone of the `messages` array, but with all PII and sensitive data replaced with templated variable names.
2. The unmask function (`unmaskResponse`): it takes the LLM response and returns the same response with the original PII and sensitive data restored.
The LLM response can be formatted as either:
- A raw OpenAI chat completion object, or
- A string with the raw response content
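Continuing the Usage sketch above, either accepted format works:

```typescript
// Format 1: the raw OpenAI chat completion object.
const restoredCompletion = unmaskResponse(completion);

// Format 2: just the response content as a string.
const restoredText = unmaskResponse(completion.choices[0].message.content ?? "");
```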
Providing data in any other format to the `unmaskResponse` function will result in an error being thrown.
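If the payload might not be one of the two accepted formats, the call can be guarded with a try/catch, as sketched here:

```typescript
try {
  // Not a completion object or a string, so this throws.
  unmaskResponse({ unexpected: true } as any);
} catch (err) {
  console.error("unmaskResponse rejected the input:", err);
}
```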
Note: The unmask function is only valid for the specific masked prompt it was returned alongside. It cannot be used to unmask responses to other prompts.