Executing Python code submitted via a web service

Author(s)

Mike Jackson

Posted on 8 September 2022

Estimated read time: 10 min

By Mike Jackson, Software Architect.

First published on 23/11/17. Updated on 08/09/22.

As part of my (now closed) open call consultancy for LUX-ZEPLIN (LZ), I was asked about the feasibility of developing a web service that accepted Python code from users and executed their code server-side within a Linux environment. In this blog post I give a brief overview of a number of approaches that could be taken to implement such a service, focusing on those that protect the web service, and its underlying server, from code that is harmful, whether by accident or design.

First things first, developing a web service that accepts Python code from users and runs it server-side is not, in itself, technically challenging. Any developer could knock up a proof-of-concept quite rapidly. The challenges are how to ensure that the web service is able to successfully run a user’s code, and how to protect the web service from the user’s code.
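
As an illustration of how little is involved, here is a minimal sketch of such a proof-of-concept using Flask (the framework choice and endpoint are illustrative, not part of the original consultancy). It is deliberately naive: it executes whatever it receives with no protection at all, which is precisely the problem the rest of this post is concerned with.

```python
# Deliberately naive proof-of-concept: executes submitted code with no
# protection whatsoever. Do not deploy anything like this.
from flask import Flask, request
import contextlib
import io

app = Flask(__name__)

@app.route("/run", methods=["POST"])
def run_code():
    user_code = request.get_data(as_text=True)
    captured = io.StringIO()
    # Run the submitted code and capture anything it prints
    with contextlib.redirect_stdout(captured):
        exec(user_code, {})
    return captured.getvalue()

if __name__ == "__main__":
    app.run()
```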

The first challenge, how to ensure that the server is able to successfully run a user’s code, can be restated as how to ensure that users only submit code that can successfully run on the server. At its simplest, this can be handled by publishing information about the environment within which the server will run the user’s code (e.g. operating system version, Python interpreter and version, libraries and versions available). This places the onus on users to ensure that their code can run under this environment, before they submit it.

This is the approach taken by, for example, continuous integration services which publish the build-and-test environments available to users (see, for example, GitHub Actions and their runners, Travis CI or AppVeyor).

[Warning: please be aware that Travis CI has a security issue with its Free Tier service. By design, “secret” data such as access credentials are exposed within historical clear-text logs which are accessible by anyone via the Travis CI API.]

The second, and most critical, challenge is how to protect the web service from the user’s code. For example, the code might have a bug which causes it to go into an infinite loop, consuming the CPU of the server on which it runs; it may be naively implemented with an infinite recursion that devours the server’s memory; or, the code might maliciously try to write large files of junk, consuming available disk space. How do we prevent a user’s code from accidentally, or maliciously, affecting the server on which it runs, and so affecting the availability of the web service or the execution of code from other users? There are many options for protecting the availability and security of the web service, and its underlying server, and a selection of these are now surveyed.

Python-specific options

Restricted Python offers one approach to running untrusted code. Rather than create a sandbox or secure environment, it uses customisable policies to determine a restricted subset of the Python language that can be executed. Restricted Python is an open source project hosted on GitHub.
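
To give a flavour of how this works, here is a minimal sketch using the RestrictedPython package; the submitted snippet and the use of its safe_globals policy are illustrative assumptions rather than a recommended configuration.

```python
# Minimal sketch: compile a user-supplied snippet with RestrictedPython's
# restricted compiler and execute it against a restricted set of globals.
from RestrictedPython import compile_restricted, safe_globals

user_code = "result = 6 * 7"  # illustrative user-submitted snippet

byte_code = compile_restricted(user_code, filename="<user_code>", mode="exec")

namespace = {}
exec(byte_code, dict(safe_globals), namespace)  # no access to the full builtins
print(namespace["result"])  # 42
```

Disallowed constructs (for example, attempts to import arbitrary modules or to access protected attributes) cause compilation or execution to fail, depending on the policy in force.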

The PyPy Python implementation provides an alternative model for sandboxing Python. PyPy sandboxing operates at a level comparable to that offered by operating systems. A trusted Python program (for example, our web service, or a program invoked by it) spawns a subprocess which runs untrusted code (for example, the code submitted by a user) using a sandboxed version of PyPy. This version of PyPy serialises all input/output to a standard input/output pipe. The trusted Python program then determines which I/O accesses are permitted or not. Controls on the amount of CPU time and RAM that the untrusted code can consume can also be imposed.

The PyPy developers advise that the library for using this sandboxed PyPy from a trusted Python program is “experimental and unpolished”, and also that PyPy sandboxing may not work with PyPy3, PyPy's Python 3 interpreter. In addition, development on this feature seems to have stalled in June 2020.

Operating system features

There are various Linux operating system features that can be used to control the execution of processes, to restrict the files and other I/O resources a process has access to, and to put constraints on their CPU and memory consumption. These could be used to control the execution of code submitted by users. A selection of these are as follows:

  • chroot can be used to restrict the parts of the file system that can be accessed by a non-root process and any sub-processes it spawns. The process cannot access files outwith this restricted file system, which is termed a “chroot jail”.

  • ulimit is a command which can put limits on the CPU, number of processes, number of open files, and memory available to a user (a Python sketch of imposing comparable limits on a subprocess follows this list). See, for example, RedHat’s How to set ulimit values (2021).

  • seccomp is a kernel facility that can isolate a process from a system’s resources, allowing it to only access open file descriptors and to exit. If the process attempts any other system calls (e.g. to open another file) it is killed. seccomp is used by Docker (see below).

  • seccomp-bpf provides added flexibility to seccomp. It allows filter programs to be written which determine which system calls should be available and to which processes.

  • AppArmor is a kernel security module that allows for access control to network, socket and file resources to be configured and enforced for specific programs.

  • SELinux is a kernel security module for access control, which performs a similar function to AppArmor, though with richer, more complex, configuration.
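
By way of illustration, a web service written in Python could apply ulimit-style limits itself when it runs a user’s code as a subprocess, using the standard resource module (the script name and the limit values below are illustrative assumptions).

```python
# Minimal sketch: impose CPU-time and memory limits on user code run as a
# child process, using Python's standard resource module (Linux only).
import resource
import subprocess

def limit_resources():
    # Runs in the child just before the user's code starts:
    # cap CPU time at 10 seconds and the address space at 256 MB.
    resource.setrlimit(resource.RLIMIT_CPU, (10, 10))
    resource.setrlimit(resource.RLIMIT_AS, (256 * 1024 * 1024, 256 * 1024 * 1024))

completed = subprocess.run(
    ["python3", "user_code.py"],   # illustrative path to the submitted code
    preexec_fn=limit_resources,
    capture_output=True,
    timeout=30,                    # wall-clock safety net
)
print(completed.returncode)
print(completed.stdout.decode())
```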

Docker

Docker exploits Linux kernel resource (CPU, memory, block I/O, network) isolation and virtualization features to allow independent “containers” to run on a single Linux server. Each Docker container offers a basic version of Linux. A Docker image - a file system and software - can be loaded into a container and then run. Docker can be configured to limit a container's resources including its CPU and memory resources.

It is possible to set up a web service that receives code from a user, starts up a Docker container loaded with an image that can run the code, runs the user's code, retrieves the outputs, returns these to the user, and shuts down the container. An example of this model in practice was Remote Interview, which used Docker to provide a service to allow job interview candidates to compile and run source code via a web service. A user's code ran within a Docker container with limited resources. Their framework - a mixture of JavaScript and shell scripts - was released on GitHub, as the open source framework CompileBox (though this was last updated in 2017).
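
As a rough sketch of that model (not Remote Interview’s actual implementation), a web service could use the Docker SDK for Python to run each submission in a throwaway, resource-limited container; the image name and the limits below are illustrative assumptions.

```python
# Minimal sketch: run a submitted snippet in a disposable, resource-limited
# Docker container via the Docker SDK for Python (the "docker" package).
import docker

client = docker.from_env()

user_code = "print(sum(range(10)))"  # illustrative user-submitted snippet

output = client.containers.run(
    image="python:3.11-slim",
    command=["python", "-c", user_code],
    mem_limit="128m",          # cap the container's memory
    nano_cpus=500_000_000,     # roughly half a CPU
    network_disabled=True,     # no network access from inside the container
    remove=True,               # delete the container once it exits
)
print(output.decode())
```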

Some developers have raised concerns with Docker security. See, for example, the Stack Exchange discussion on Docker as a sandbox for untrusted code (2015). Remote Interview, in their blog post on How we used Docker to compile and run untrusted code (2016), comment that:

“Docker is good for achieving isolation but not so much in terms of security. However, so far, Docker has managed to do the job for us pretty well.”

Jupyter notebooks

Jupyter notebooks are documents that contain text, images, and embedded Python code. The code can be executed to create content (e.g. tables and images) which is also embedded within the notebooks. Jupyter runs on a web server, and users create, edit and run notebooks via their web browser. Markdown is used for text formatting, and LaTeX can be used for formatting formulae. Jupyter notebooks are saved as JSON documents. TryJupyter provides a free demonstration.

As an alternative, or a complement, to allowing users to submit code via a web service, users could be allowed to submit and execute their code via Jupyter notebooks.

A Jupyter notebook server can be deployed, secured and exposed publicly. However, this deployment model is intended for use by a single user only. For a multi-user service, and using an approach similar to Remote Interview, Jupyter and Rackspace used Docker to provide notebooks on-demand for readers of a November 2014 article in Nature, Interactive notebooks: Sharing the code. As described by Kyle Kelley of Rackspace, in How did we serve more than 20,000 IPython notebooks for Nature readers?, each user’s request spawns a new Docker container with a running Jupyter notebook server. This was implemented using tmpnb, a temporary notebook service released as an open source framework on GitHub. tmpnb allows a deployer to configure the CPU quota and memory limits of each container and how long to wait before closing a notebook down if it is idle. The notebooks are temporary and are purged when a user leaves the notebook’s web page.

JupyterHub provides another approach to providing notebooks for multiple users. A JupyterHub server spawns, manages and acts as a proxy for multiple single-user Jupyter notebook servers. New single-user Jupyter notebook servers can be spawned as sub-processes on the same host, as Docker containers, on a cluster of physical or virtual machines managed by Kubernetes, or as batch jobs submitted to a cluster via, for example, SLURM.
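
As a brief illustration, here is a fragment of a jupyterhub_config.py that spawns each user’s notebook server in its own Docker container with resource limits; the choice of DockerSpawner, the image and the limit values are assumptions for illustration, not taken from a specific deployment.

```python
# Fragment of a jupyterhub_config.py: spawn each user's notebook server as a
# Docker container and cap its resources (requires the dockerspawner package).
c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"
c.DockerSpawner.image = "jupyter/minimal-notebook"  # image for single-user servers
c.Spawner.mem_limit = "1G"   # per-user memory limit, enforced by the spawner
c.Spawner.cpu_limit = 0.5    # per-user CPU limit, enforced by the spawner
```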

Multi-server architecture

Adopting an architecture whereby the web service runs on one server (either a physical or virtual machine) and each user’s code is executed on a separate server (again, either physical or virtual) greatly reduces, or even removes altogether, the risk that a specific user’s code affects either the running of the web service or other users’ code. In this scenario, the web service serves as a dispatcher, copying the user’s code onto a server, monitoring its execution status and returning the results to users. There are many ways in which this could be implemented and many tools and frameworks available. A small selection of these are as follows:

  • Secure shell (SSH) can invoke bash commands on remote servers and secure file transfer (SFTP) or secure copy (SCP) can be used to transfer files to and from the servers.

  • Paramiko is an open source Python implementation of SSH2, which includes an SFTP module.

  • Fabric is an open source Python framework for executing local or remote shell commands, and uploading/downloading files to/from remote hosts (a short sketch using Fabric follows this list).

  • Open source products including SLURM, Kubernetes, or HTCondor support job scheduling and workload management across a cluster of physical or virtual machines.
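
To illustrate the dispatcher role, here is a short sketch using Fabric, as mentioned above; the host name, user account, and file paths are illustrative assumptions.

```python
# Minimal sketch: copy a submitted script to a separate execution server,
# run it there, and collect the results, using Fabric.
from fabric import Connection

with Connection("exec-server.example.org", user="runner") as conn:
    conn.put("user_code.py", remote="/tmp/user_code.py")   # upload the code
    result = conn.run("python3 /tmp/user_code.py", warn=True, hide=True)
    print(result.exited)   # exit status of the user's code
    print(result.stdout)   # its captured standard output
```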

Postscript (08/12/17)

Mark Woodbridge, Research Software Engineering Team Lead at Imperial College London, also suggests systemd-nspawn, which can be used to run a command or operating system in a light-weight namespace container. systemd itself also supports IP Accounting and Access Lists to manage outgoing network access, for example.
