Running Doom on Snowflake
Getting Doom to run on various things is a fun hobby. While studying for a SnowPro certificate this weekend I kept thinking "Snowflake can do all of these amazing things, but... can it run Doom?". So, a few hours later, here we are.
The building blocks
First of all, we'll need some compute. Data warehouses are not famous for exposing their compute in a way that lets us interact with it through a web service. Thankfully, Snowflake recently introduced Snowpark Container Services, which – among other things – does exactly that!
Snowpark Container Services (SCS) offers a fully managed container solution within the Snowflake ecosystem. It enables seamless deployment, management, and scaling of containerized applications directly within Snowflake, eliminating the need to move data elsewhere for processing.
Some Cool Features of SCS
- Managed Operations: Snowpark Container Services handles container management intricacies, including security and configuration.
- Snowflake Integration: Connect applications to Snowflake, run SQL queries, access data files, and process data seamlessly.
- Third-Party Compatibility: Easily integrate with tools like Docker for uploading application images.
- Long-Running Services: Create continuous services, similar to web services, and explicitly stop them when not needed.
How it Works
Snowpark Container Services operates with image repositories, compute pools, services, and jobs.
- Image Repository: Store container images in Snowflake's OCIv2-compliant image registry.
- Service: Create long-running services using the CREATE SERVICE command.
- Job: Execute finite-lifespan jobs, like machine learning model training, with EXECUTE SERVICE.
- Compute Pool: Define compute pools for running services or jobs, supporting GPU-enabled machine types.
- Service Functions: Communicate with services using service functions from SQL queries, configure public endpoints, and support service-to-service communications.
Out of these, we won't be using Jobs or Service Functions for now.
As for the game itself, we'll use a Dockerized version from this repository, which contains the actual web app that runs Doom via JS-DOS through a webserver.
Let's take a look at how we can use all of this to actually get Doom to run on Snowflake.
Prep
To set up the required Snowflake objects, follow these steps using either SnowSQL or Snowsight:
- Login: Log in to Snowflake with a user that has the ACCOUNTADMIN role.
- Execute Preparation Script: Using the ACCOUNTADMIN role, execute the following script. Replace <your_user> with the name of the Snowflake user who will be playing the game:
USE ROLE ACCOUNTADMIN;
CREATE ROLE doom_role;
CREATE DATABASE IF NOT EXISTS doom_db;
GRANT OWNERSHIP ON DATABASE doom_db TO ROLE doom_role;
CREATE OR REPLACE WAREHOUSE doom_wh WITH
WAREHOUSE_SIZE='X-SMALL';
GRANT USAGE ON WAREHOUSE doom_wh TO ROLE doom_role;
CREATE SECURITY INTEGRATION IF NOT EXISTS doom_snowservices_ingress_oauth
TYPE=oauth
OAUTH_CLIENT=snowservices_ingress
ENABLED=true;
GRANT BIND SERVICE ENDPOINT ON ACCOUNT TO ROLE doom_role;
CREATE COMPUTE POOL doom_compute_pool
MIN_NODES = 1
MAX_NODES = 1
INSTANCE_FAMILY = CPU_X64_XS;
GRANT USAGE, MONITOR ON COMPUTE POOL doom_compute_pool TO ROLE doom_role;
GRANT ROLE doom_role TO USER <your_user>;
USE ROLE doom_role;
USE DATABASE doom_db;
USE WAREHOUSE doom_wh;
CREATE SCHEMA IF NOT EXISTS doom_schema;
CREATE IMAGE REPOSITORY IF NOT EXISTS doom_repository;
- Notes:
  - A warehouse is created for executing SQL DML statements.
  - Since the service will be created with a public endpoint, doom_role must have the BIND SERVICE ENDPOINT privilege on the account.
  - A security integration is required for the current implementation.
  - A compute pool is set up to run jobs and services.
  - An image repository is created to store the service code (a.k.a. Doom itself).
Build and Publish Doom
Before proceeding, ensure you have the necessary repository information. Follow these steps:
- Get Repository Information: Execute the following SQL command to retrieve the repository URL:
SHOW IMAGE REPOSITORIES;
Look for the repository_url column in the output, which provides the URL. Example:
<orgname>-<acctname>.registry.snowflakecomputing.com/doom_db/doom_schema/doom_repository
The hostname in the repository URL is the registry hostname. Example:
<orgname>-<acctname>.registry.snowflakecomputing.com
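Extracting the registry hostname from the repository URL is just a matter of dropping everything after the first slash. A minimal shell sketch, using placeholder values (myorg-myacct is an assumption, not a real account):

```shell
# Placeholder repository URL; substitute the repository_url value
# returned by SHOW IMAGE REPOSITORIES in your account.
repository_url="myorg-myacct.registry.snowflakecomputing.com/doom_db/doom_schema/doom_repository"

# The registry hostname is everything before the first slash.
registry_hostname="${repository_url%%/*}"
echo "$registry_hostname"
```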
Now, proceed to build and upload the image.
- Build and Upload:
- Clone this repository and cd into the newly created folder on your machine.
- To build a Docker image, use the following command with the Docker CLI. Note that the command specifies the current working directory (.) as the path for building the image:
docker build --rm --platform linux/amd64 -t <repository_url>/<image_name> .
For <image_name>, use doom:latest. Example:
docker build --rm --platform linux/amd64 -t myorg-myacct.registry.snowflakecomputing.com/doom_db/doom_schema/doom_repository/doom:latest .
- Authenticate Docker with the Snowflake registry using the following command:
docker login <registry_hostname> -u <username>
Replace <username> with your Snowflake username. Docker will prompt you for your password.
- Finally, upload the image to the repository in your Snowflake account:
docker push <repository_url>/<image_name>
Example:
docker push myorg-myacct.registry.snowflakecomputing.com/doom_db/doom_schema/doom_repository/doom:latest
Proceed with the next steps once the image is successfully uploaded.
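The build, login, and push steps above can be collected into one script. This is a dry-run sketch: every value is a placeholder, and the docker commands are echoed rather than executed so you can review them before running them for real:

```shell
# Placeholder values; substitute your own from SHOW IMAGE REPOSITORIES.
repository_url="myorg-myacct.registry.snowflakecomputing.com/doom_db/doom_schema/doom_repository"
image_name="doom:latest"

registry_hostname="${repository_url%%/*}"   # hostname part of the repository URL
full_tag="${repository_url}/${image_name}"  # full tag used by both build and push

# Echoed instead of executed; remove the echos to actually run them.
echo "docker build --rm --platform linux/amd64 -t ${full_tag} ."
echo "docker login ${registry_hostname} -u <username>"
echo "docker push ${full_tag}"
```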
Expose Service
To verify that every resource in the previous step has been created, you can use these commands:
USE ROLE doom_role;
USE DATABASE doom_db;
USE SCHEMA doom_schema;
USE WAREHOUSE doom_wh;
SHOW COMPUTE POOLS;
SHOW WAREHOUSES;
SHOW IMAGE REPOSITORIES;
To set up your service, you'll need:
- Compute Pool: Snowflake runs your service in the compute pool you've designated. You've already created this pool as part of the initial setup.
- Service Specification: This provides Snowflake with the necessary details to configure and operate your service. You can include this specification directly in the CREATE SERVICE command. Alternatively, you can save it to a file in your Snowflake stage and reference the file in the command.
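As a sketch of the file-based alternative, the service spec could be saved to a local YAML file first and then uploaded to a Snowflake stage; the file name doom_spec.yaml is an assumption:

```shell
# Write the service specification to a local file. You would then upload
# this file to a Snowflake stage and reference it in CREATE SERVICE.
cat > doom_spec.yaml <<'EOF'
spec:
  containers:
  - name: doom
    image: /doom_db/doom_schema/doom_repository/doom:latest
  endpoints:
  - name: doomendpoint
    port: 8080
    public: true
EOF

cat doom_spec.yaml
```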
Before proceeding, ensure:
- Confirm that your compute pool is ready by running DESCRIBE COMPUTE POOL doom_compute_pool and checking for an ACTIVE or IDLE state. If it's still STARTING, wait until the state changes.
- Confirm that you're in the correct context by executing the following:
USE ROLE doom_role;
USE DATABASE doom_db;
USE SCHEMA doom_schema;
USE WAREHOUSE doom_wh;
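The readiness check can also be wrapped in a small wait loop. In this sketch, query_state is mocked to return ACTIVE immediately so the loop logic is self-contained; in a real run it would wrap a CLI call such as snowsql -q "DESCRIBE COMPUTE POOL doom_compute_pool;" and extract the state column:

```shell
# Stand-in for the real state query; replace the body with a snowsql call
# (this mock always reports ACTIVE so the sketch runs on its own).
query_state() { echo "ACTIVE"; }

state=$(query_state)
until [ "$state" = "ACTIVE" ] || [ "$state" = "IDLE" ]; do
  echo "compute pool is ${state}, waiting..."
  sleep 10
  state=$(query_state)
done
echo "compute pool is ${state}, ready to create the service"
```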
Now, let's create the service:
CREATE SERVICE doom_service
IN COMPUTE POOL doom_compute_pool
FROM SPECIFICATION $$
spec:
containers:
- name: doom
image: /doom_db/doom_schema/doom_repository/doom:latest
endpoints:
- name: doomendpoint
port: 8080
public: true
$$
MIN_INSTANCES=1
MAX_INSTANCES=1;
To get detailed information about the service we created, verify its status, and see the container's logs:
- List services in your account with SHOW SERVICES;
- Check the status of your service using SELECT SYSTEM$GET_SERVICE_STATUS('doom_service');
- Obtain information about your service via DESCRIBE SERVICE doom_service;
- Check logs with CALL SYSTEM$GET_SERVICE_LOGS('doom_service', '0', 'doom', 10);
Ready!
You can interact with the service using a web browser. Since the service exposes its endpoint publicly (as specified in the CREATE SERVICE command), you can access the web UI it provides over the internet and send requests from your browser.
To find the URL of the public endpoint:
SHOW ENDPOINTS IN SERVICE doom_service;
Look for the ingress_url column in the response, which will provide the URL.
Example:
xyzab-myorg-myacct.snowflakecomputing.app
After navigating to the URL, you'll need to log in to Snowflake (via a user that has the necessary permissions to access the service, such as the one you used during the creation) – and that's it! Happy Dooming!