

Snowpark

Available to teams and users on all plans.


Want to get started right away? Click here to jump into a ready-to-go notebook that demonstrates Snowpark.

With the new Snowpark integration in Deepnote you can perform database transformations with Python and deploy machine learning models to Snowflake—without moving your data or changing the Python code you already use.

The Snowpark + Deepnote integration makes the warehouse feel like an in-memory object. Simply write your code in Deepnote and manipulate your tables as if you were using Pandas. All compute occurs directly in the warehouse, so there's no need to constantly move your data around.

How to connect

Begin by pip installing Snowpark inside Deepnote. Note that Snowpark requires Python 3.8 (this can be selected from the environments tab in Deepnote).

!pip install snowflake-snowpark-python

Then, instantiate Snowpark's session object. Its methods allow you to manipulate and interact with your Snowflake instance.

from snowflake.snowpark.session import Session

# create the session
session = Session.builder.configs(credentials).create()

Note that credentials refers to a Python dictionary containing your warehouse details. Deepnote stores these securely with its Environment Variables integration.
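As an illustration, the credentials dictionary can be assembled from environment variables so that no secrets appear in the notebook itself. The environment-variable names below are examples, not fixed by Snowpark; the dictionary keys are the standard Snowflake connection parameters.

```python
import os

# Build the credentials dictionary from environment variables.
# The variable names (SNOWFLAKE_*) are illustrative; set them via
# Deepnote's Environment Variables integration.
credentials = {
    "account": os.environ.get("SNOWFLAKE_ACCOUNT", "my_account"),
    "user": os.environ.get("SNOWFLAKE_USER", "my_user"),
    "password": os.environ.get("SNOWFLAKE_PASSWORD", "my_password"),
    "role": os.environ.get("SNOWFLAKE_ROLE", "SYSADMIN"),
    "warehouse": os.environ.get("SNOWFLAKE_WAREHOUSE", "my_warehouse"),
    "database": os.environ.get("SNOWFLAKE_DATABASE", "my_database"),
    "schema": os.environ.get("SNOWFLAKE_SCHEMA", "PUBLIC"),
}
```

Passing this dictionary to `Session.builder.configs(...)` keeps the notebook shareable without exposing your warehouse details.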

How to use

Snowpark provides a Pandas-like API and methods that correspond to the hundreds of SQL functions available in Snowflake. For a full list of functions, including how to define your own (i.e., UDFs), see the API reference here.

Snowpark operations are executed lazily on the server, which reduces the amount of data transferred between your client and the Snowflake database.
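To make the laziness concrete, here is a sketch of a chained query. The `ORDERS` table and its `CUSTOMERID`/`AMOUNT` columns are hypothetical names used for illustration; the function expects a live Snowpark session.

```python
def high_value_customers(session):
    """Sketch: build a lazy Snowpark query, then execute it in the warehouse.

    The ORDERS table and AMOUNT/CUSTOMERID columns are hypothetical.
    """
    from snowflake.snowpark.functions import col  # deferred import

    df = (
        session.table("ORDERS")          # no data is fetched here
        .filter(col("AMOUNT") > 100)     # still only building a query plan
        .select("CUSTOMERID", "AMOUNT")  # ditto
    )
    return df.collect()  # only now is SQL generated and run in Snowflake
```

Until `.collect()` (or `.show()`, `.to_pandas()`) is called, each chained method only extends the query plan; nothing leaves the warehouse.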

Viewing a table's rows

# lazily sample 50 rows and print them with .show()
session.table("my_table").sample(n=50).show()

Joining two tables

dfDemo = session.table("DEMOGRAPHICS")
dfServ = session.table("SERVICES")
dfJoin = dfDemo.join(dfServ, dfDemo.col("CUSTOMERID") == dfServ.col("CUSTOMERID"))

Converting a table to a Pandas DataFrame

# this will bring a copy of the table into the notebook's memory
df = dfJoin.to_pandas()

Writing a new table to the warehouse

dfJoin.write.mode('overwrite').save_as_table('MY_NEW_TABLE')

Calculating the average of a column

# import the avg function
from snowflake.snowpark.functions import avg

# calculate the average on the Snowpark DataFrame and print it with .show()
dfJoin.select(avg("my_column")).show()
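The same pattern extends to per-group aggregates with `group_by` and `agg`. The sketch below assumes a live session; the `DEMOGRAPHICS` table and `GENDER`/`INCOME` columns are hypothetical names for illustration.

```python
def average_income_by_gender(session):
    """Sketch: per-group averages with group_by/agg.

    Table and column names are hypothetical.
    """
    from snowflake.snowpark.functions import avg  # deferred import

    (
        session.table("DEMOGRAPHICS")
        .group_by("GENDER")                      # one group per distinct value
        .agg(avg("INCOME").alias("AVG_INCOME"))  # aggregate within each group
        .show()                                  # execute and print the result
    )
```

As with the single-column average, the aggregation runs entirely in the warehouse; only the small result set is returned to the notebook.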

Next steps

To learn more about the many things you can do in Snowpark, including deploying machine learning models, see this tutorial notebook and read the Snowpark documentation.

Still need help? Deepnote's community of over 5,000 data enthusiasts and professionals is always there for you. Post any questions in the Q&A channel here.

© 2022 Deepnote. All rights reserved.