FAQs

Answers to some of the most frequently asked questions

We've grouped questions by topic for easier discoverability.

General FAQ

General questions about our software

What is Promptial and how does it work?

Promptial is a comprehensive observability platform designed specifically for AI prompts and LLM interactions. It works by capturing, logging, and analyzing every interaction between your application and language models. You integrate our SDK into your codebase, and we automatically track prompt versions, response quality, token usage, latency, and costs across all your AI operations.

How can I use Promptial — API, SDK, CLI, or UI?

Any functionality that's available in the UI is also exposed through the API, SDK, and CLI applications so you can use Promptial however it fits your workflow:

  • 🔌 SDK – Integrate directly into your app using our lightweight SDK. Log prompts, responses, costs, and metadata automatically.

  • 💻 CLI – Run tests, version prompts, and diff outputs from the terminal. Great for local dev workflows and CI pipelines.

  • 🌐 UI – Use the visual dashboard to explore traces, compare prompt versions, monitor drift, and view cost/performance analytics.

  • 📡 API – Our REST API lets you push traces, query usage, or trigger version rollbacks — perfect for deeper automation or platform integration.

You can mix and match — for example, use the SDK in production and the CLI in CI while your team monitors everything through the dashboard.
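To make the SDK mode concrete, here is a minimal sketch of what trace logging looks like. This is an illustration only: the `log_trace` function and the trace schema are hypothetical stand-ins, not the actual Promptial SDK API.

```python
import time

def log_trace(traces, prompt, completion, model, tokens):
    """Append a Promptial-style trace record (hypothetical schema)."""
    traces.append({
        "prompt": prompt,          # the input sent to the model
        "completion": completion,  # the model's output
        "model": model,
        "tokens": tokens,
        "logged_at": time.time(),  # when the trace was captured
        "tags": [],                # custom tags you can attach later
    })
    return traces[-1]

traces = []
trace = log_trace(traces, "Summarize this doc", "Here is a summary...", "gpt-4o", 128)
```

In the real SDK the same capture happens automatically inside the wrapper; the point of the sketch is just what fields a trace carries.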

What kind of analytics and insights does Promptial provide?

Promptial provides comprehensive analytics including cost breakdowns by model and project, latency trends, token usage patterns, prompt performance comparisons, model drift detection, and custom dashboards. You can track metrics like success rates, error patterns, and user satisfaction scores to optimize your AI applications.

Do I need to change my code?

No. You can use Promptial as a wrapper for OpenAI, Anthropic, LangChain, or other tools — or integrate it via SDK with minimal changes.

Does Promptial support self-hosted models or custom base URLs?

Yes. Promptial supports self-hosted models and custom base URLs, as well as providers such as Hugging Face, Azure OpenAI, and more.

Pricing questions

Pricing-related questions our customers ask the most

What counts as a "prompt run" for billing purposes?

A prompt run represents a single LLM interaction consisting of a prompt and its corresponding completion. Each trace includes the input prompt, output completion, metadata (tokens, latency, cost), and any custom tags you've added. Traces are billed when they're logged to Promptial, not when viewed or analyzed.
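As a rough illustration of the counting rule above, the sketch below counts one billable run per logged prompt/completion pair. The function and the assumption that an incomplete record is not yet billable are hypothetical, not the actual billing implementation.

```python
def billable_runs(traces):
    """Count prompt runs: one per logged prompt/completion pair.
    Viewing or re-analyzing an existing trace does not add to the count."""
    return sum(1 for t in traces if "prompt" in t and "completion" in t)

traces = [
    {"prompt": "Hi", "completion": "Hello!", "tokens": 12},
    {"prompt": "Translate", "completion": "Bonjour", "tokens": 20},
    {"prompt": "in flight"},  # assumed: no completion logged yet, so not counted
]
```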

Do you offer discounts for large enterprise clients?

Yes, we offer customized pricing plans for large enterprise clients to ensure that our solution meets their specific needs at the best value. Discounts are available based on the number of users, the length of the subscription, and the scope of features required. Enterprise clients can also benefit from dedicated support, custom integrations, and training sessions. For a personalized quote and to discuss your enterprise needs, please reach out to our team at hello@promptial.ai.

Do beta users get a discount?

Yes, joining the beta locks in a lifetime discount on any paid tier, even after launch. You can request access to our beta here.

Do you offer annual billing discounts?

Yes! We offer a 20% discount on all plans when you choose annual billing. This applies to both the base monthly fee and any additional add-on charges.

What is your refund policy?

Our refund policy is designed with your satisfaction in mind. If you're not entirely happy with Promptial within the first 30 days of your subscription, you can request a full refund. To initiate a refund, please contact our support team with your account details and the reason for your request. We aim to process all refunds within 2-5 business days. Please note that this policy applies to both monthly and annual plans, giving you peace of mind when subscribing to Promptial.

Software Questions

Software questions our customers ask the most

How does prompt versioning work?

Prompt versioning works like Git for your prompts. Each change creates a new version with a unique identifier. You can tag versions, create branches for experiments, compare performance across versions, and roll back to previous versions instantly. You can integrate automated prompt deployment and A/B testing into your existing CI/CD workflows.
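The Git analogy can be sketched in a few lines: each commit of a prompt gets a content-derived identifier, and a rollback simply restores a previous version as the new head. This is a toy illustration under assumed semantics, not the real Promptial versioning API.

```python
import hashlib

class PromptStore:
    """Toy Git-style prompt versioning (hypothetical, for illustration only)."""

    def __init__(self):
        self.versions = []  # ordered history of (version_id, text)

    def commit(self, text):
        # Derive a short, unique identifier from the prompt content.
        version_id = hashlib.sha256(text.encode()).hexdigest()[:8]
        self.versions.append((version_id, text))
        return version_id

    def current(self):
        return self.versions[-1]

    def rollback(self, version_id):
        # Restore a previous version by re-committing it as the new head.
        for vid, text in self.versions:
            if vid == version_id:
                self.versions.append((vid, text))
                return text
        raise KeyError(version_id)

store = PromptStore()
v1 = store.commit("You are a helpful assistant.")
v2 = store.commit("You are a terse assistant.")
store.rollback(v1)
```

Tagging, branching, and A/B comparison layer on top of the same idea: immutable versions addressed by stable identifiers.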

Can I import existing prompts from my application?

Yes, you can bulk import prompts via our API, SDK, CLI, or web interface. We also offer custom migration services for enterprises with large prompt libraries.

How does the SDK integrate with my existing AI stack?

Our SDK works as a lightweight wrapper around OpenAI, Anthropic, and other LLM providers. Add one line to log prompts and responses automatically. We also have native integrations with LangChain, LlamaIndex, and other frameworks.
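The "one line" claim can be pictured as a decorator around your existing call. Everything here is a hypothetical sketch: `traced` and the in-memory log stand in for the SDK, and `complete` stands in for a real provider call.

```python
import functools
import time

def traced(log):
    """Hypothetical wrapper: decorate an LLM call to log prompt,
    response, and latency without changing any call sites."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(prompt, **kwargs):
            start = time.perf_counter()
            response = fn(prompt, **kwargs)
            log.append({
                "prompt": prompt,
                "response": response,
                "latency_s": time.perf_counter() - start,
            })
            return response
        return wrapper
    return decorator

log = []

@traced(log)  # the "one line" added to existing code
def complete(prompt):
    return f"echo: {prompt}"  # stand-in for a real provider request

result = complete("Hello")
```

Because the wrapper preserves the function's signature and return value, downstream code runs unchanged while every call is captured.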

Can I control who has access to my data within Promptial?

Yes, Promptial allows you to set granular access controls, ensuring that only authorized users within your organization can access or modify sensitive data. You can customize permissions based on roles, ensuring that team members have access only to the data they need for their specific tasks.
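Role-based permissions of this kind are often modeled as a role-to-permission mapping. The roles and permission names below are invented for illustration and do not reflect Promptial's actual role scheme.

```python
# Hypothetical role-to-permission mapping illustrating granular access control.
ROLE_PERMISSIONS = {
    "viewer": {"read_traces"},
    "engineer": {"read_traces", "edit_prompts"},
    "admin": {"read_traces", "edit_prompts", "manage_users", "manage_billing"},
}

def can(role, permission):
    """Return True if the given role grants the given permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```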

What measures does Promptial take to ensure data encryption?

Promptial employs state-of-the-art encryption technologies, including SSL/TLS for data in transit and AES-256 for data at rest, ensuring that all your data remains secure and inaccessible to unauthorized parties. This level of encryption safeguards your information, whether it's being sent to or stored on our servers.

Are you ready to catch prompt failures before your users do?

Join now for early access to the exclusive beta and be the first to unlock full prompt observability.
