PandasAI 3.0 is currently in beta. This documentation reflects the latest
features and functionality, which may evolve before the final release.
Code Execution and Sandbox Environment
PandasAI executes Python code that is generated by Large Language Models (LLMs). While this provides powerful data analysis capabilities, it is crucial to understand the security implications, especially in production use cases where your application might be exposed to malicious attacks.
Why Use a Sandbox?
When building applications that allow users to interact with PandasAI, there is a risk that malicious users might attempt to manipulate the LLM into generating harmful code. To mitigate this risk, PandasAI provides a secure sandbox environment with the following features:
- Isolated Execution: Code runs in a completely isolated Docker container
- Offline Operation: The sandbox runs entirely offline, preventing any external network requests
- Resource Limitations: Strict controls on system resource usage
- File System Isolation: Protected access to the file system
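To illustrate the kind of generated code these isolation layers guard against, here is a minimal, purely illustrative pre-execution check (not part of PandasAI's API) that scans LLM-generated code for obviously dangerous imports. A static denylist like this is easy to bypass, which is exactly why container-level isolation is the robust approach:

```python
import ast

# Hypothetical illustration only: a naive static check on generated code.
# A denylist is trivially bypassable; PandasAI's Docker sandbox isolates
# execution at the container level instead.
DANGEROUS_MODULES = {"os", "sys", "subprocess", "socket", "shutil"}

def looks_unsafe(code: str) -> bool:
    """Return True if the code imports a module from the denylist."""
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return True  # refuse code that does not even parse
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            if any(alias.name.split(".")[0] in DANGEROUS_MODULES
                   for alias in node.names):
                return True
        elif isinstance(node, ast.ImportFrom):
            if node.module and node.module.split(".")[0] in DANGEROUS_MODULES:
                return True
    return False

print(looks_unsafe("import subprocess"))                  # True
print(looks_unsafe("result = df.groupby('region').sum()"))  # False
```

Even a check like this misses indirect attacks (e.g. `__import__` calls or resource exhaustion), which is why running the code in an offline, resource-limited container is the recommended mitigation.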
Using the Sandbox
To use the sandbox environment, you first need to install the required package. Make sure Docker is running on your system before using the sandbox environment.
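As a sketch, assuming the 3.0 beta's `pandasai-docker` extension package and its `DockerSandbox` class (check the current release for exact package and class names), installation and usage might look like this:

```python
# Install first (shell): pip install pandasai-docker
# Assumed API from the 3.0 beta; names may change before the final release.
import pandasai as pai
from pandasai_docker import DockerSandbox

sandbox = DockerSandbox()
sandbox.start()  # spins up the isolated, offline Docker container

df = pai.read_csv("data.csv")
# LLM-generated code runs inside the container, not on the host
response = pai.chat("What is the average sales per region?", df, sandbox=sandbox)

sandbox.stop()  # always stop the container when done
```

Starting the container adds a small amount of latency to the first query, which is usually a worthwhile trade-off in any deployment that handles untrusted input.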
When to Use the Sandbox
We strongly recommend using the sandbox environment in the following scenarios:
- Building public-facing applications
- Processing untrusted user inputs
- Deploying in production environments
- Handling sensitive data
- Multi-tenant environments
Enterprise Sandbox Options
For production-ready use cases, we offer several advanced sandbox options as part of our Enterprise license. These include:
- Custom security policies
- Advanced resource management
- Enhanced monitoring capabilities
- Additional isolation layers