Understanding security implications and sandbox options in PandasAI
PandasAI 3.0 is currently in beta. This documentation reflects the latest features and functionality, which may evolve before the final release.
PandasAI executes Python code that is generated by Large Language Models (LLMs). While this provides powerful data analysis capabilities, it’s crucial to understand the security implications, especially in production use cases where your application may be exposed to malicious input.
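To make the risk concrete, here is a hypothetical snippet of the kind of code a manipulated LLM could emit; the specific call and path are invented for illustration:

```python
# Hypothetical output from a manipulated LLM, shown for illustration only.
# Executed outside a sandbox, it runs with your application's privileges.
import shutil

shutil.rmtree("/app/data")  # deletes files on the host instead of analyzing data
```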
When building applications that allow users to interact with PandasAI, there’s a potential risk that malicious users might attempt to manipulate the LLM into generating harmful code. To mitigate this risk, PandasAI provides a secure sandbox environment that runs LLM-generated code inside an isolated Docker container, so harmful code cannot reach your host system.
To use the sandbox environment, you first need to install the sandbox extension package (published as pandasai-docker in the 3.0 beta):
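```bash
# Install the Docker sandbox extension (package name per the 3.0 beta docs)
pip install pandasai-docker
```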
Make sure you have Docker running on your system before using the sandbox environment.
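If you are unsure whether the Docker daemon is up, a quick check:

```bash
# Exits with an error if the Docker daemon is not reachable
docker info
```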
Here’s how to enable the sandbox for your PandasAI chat:
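The snippet below is a minimal sketch based on the 3.0 beta API (the DockerSandbox class from pandasai-docker and the sandbox parameter of pai.chat); check your installed version for exact names:

```python
import pandasai as pai
from pandasai_docker import DockerSandbox

# Start the sandbox; this launches an isolated Docker container
sandbox = DockerSandbox()
sandbox.start()

df = pai.read_csv("data.csv")

# Pass the sandbox so LLM-generated code executes inside the container,
# not on the host machine
response = pai.chat("Which five products have the highest revenue?", df, sandbox=sandbox)
print(response)

# Stop the container when the session ends
sandbox.stop()
```

Starting the container has noticeable overhead, so reuse a single sandbox across multiple chat calls rather than starting one per query.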
We strongly recommend using the sandbox environment in the following scenarios:

- Production deployments, where your application is exposed to real-world traffic
- Applications that accept queries from external or untrusted users
- Any setup where a crafted query could manipulate the LLM into generating harmful code
For production-ready use cases, we also offer several advanced sandbox options as part of our Enterprise license.
If you need more information about our Enterprise sandbox options or require assistance with implementation, please contact us at pm@sinaptik.ai. Our team can help you choose and configure the right security solution for your specific use case.