The first beta version of Deepnote launched in Oct 2020 on a simple premise: build a better data science notebook.
We took notice of the leaps in productivity and collaboration changing the way software was built and wanted to bring that same productivity to the much younger data science market. We saw how fast-growing companies like Figma unlocked design beyond design teams, and we knew notebooks could serve as a medium to expand data accessibility and literacy beyond data teams.
At the time, notebooks as the tool of choice for data scientists had many pain points—version control, reproducibility, and collaboration, to name a few. They also didn't easily integrate with the modern data stack. And they didn't encourage engineering best practices.
Since launching in beta, we've seen widespread market adoption, with thousands of data teams now using Deepnote. We've more than doubled our team size in the last year, and we're backed by some of the world's best investors who believe in our vision for notebooks.
In short, we’re ready to remove the “beta” label and become generally available for the world’s best data teams.
To address the limitations of notebooks, we built Deepnote with three core principles in mind:
At Deepnote, we see notebooks as a new kind of computational interface: one that helps data teams solve new types of problems, both technical and organizational.
Deepnote is an environment that combines powerful compute, fast feedback loops, and hassle-free collaboration.
We built Deepnote so that data teams can focus on problems, not tooling.
To make data science and analytics more accessible, we've built collaboration at the core of every feature we ship. We've shipped a ton to help with collaboration in the past year. Here are a few highlights:
Workspaces: Proper organization of insights is critical to make the learnings accessible. Meaningful data insights often start with an individual, then get reviewed and polished by peers, and finally delivered to stakeholders for action. We built Workspaces with the flexibility to work in any way your team does. You can explore on your own, collaborate together with your data team, or share notebooks across your whole organization. Workspaces allow data teams to create hundreds or even thousands of notebooks without losing track of the most important learnings.
Real-time collaboration: Invite a team member to your project and work together in the same execution environment, all at the same time. The best analytics environments minimize feedback loops: reviewing code, commenting on visualizations, and debugging issues all happen in real time.
Publishing: We made sharing data science notebooks as easy as sending a link (think Google Docs for data science). With Deepnote, you have full control over which blocks of your notebook are visible and who can see them: public, anyone with a link, or only your team. No more sending screenshots over email or Slack to share a finding or get feedback on your work.
“Deepnote has been instrumental in centralizing much of our People Analytics work. With Deepnote, we can collaborate on our data pipelines, data science, and analytics efforts all from a single platform. Deepnote's publishing capabilities also provide our internal stakeholders a rich user experience to easily view insights and see where they should take action.”
Scott Jacobsen, Head of People Analytics at Gusto
Part of offering a comprehensive platform for data teams is making it easy to plug into your data stack and the languages and frameworks data practitioners already know. We want Deepnote to be the place where you focus on meaningful data science work, regardless of the specific tools you use.
Data warehouses: Wherever your data is stored—from your cloud data warehouse to a local CSV—you can easily access it in Deepnote. Connect to Postgres, Snowflake, S3, BigQuery, and dozens of native data integrations.
From Python to SQL and back: Deepnote lets you work with the right language for the job. You can easily switch between Python and SQL, or drop down to Julia or R for more specialized tasks. SQL results are instantly saved as Python DataFrames, so you can query your database in SQL and then use Python to export your data as a CSV or build machine learning models. You can also create programmatic SQL queries by embedding Python variables and DataFrames in them.
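To make the pattern concrete, here is a minimal sketch of the SQL-to-DataFrame workflow described above, wired up by hand with `sqlite3` and `pandas`. This is our illustration of the general pattern, not Deepnote's API; in Deepnote, SQL blocks return DataFrames automatically.

```python
# Illustrative sketch (not Deepnote's API): query in SQL, get a
# pandas DataFrame back, then continue in plain Python.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES (1, 'EU', 120.0), (2, 'US', 80.0), (3, 'EU', 45.5);
""")

# A "programmatic" query: a Python variable embedded as a query parameter.
region = "EU"
df = pd.read_sql(
    "SELECT region, SUM(amount) AS total FROM orders WHERE region = ? GROUP BY region",
    conn,
    params=(region,),
)

# The result is an ordinary DataFrame, so the rest is ordinary Python,
# e.g. exporting it as CSV:
csv_text = df.to_csv(index=False)
```

The same shape of code works against a warehouse connection instead of SQLite; the point is that the SQL result lands directly in Python with no manual conversion step.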
We've incorporated the best practices from software engineering—clean code, defining dependencies, reproducible notebooks—to empower underserved data teams. Analysts and data scientists should have the tooling to be just as productive as software engineers. We're adding productivity features to Deepnote practically every day to make that a reality. Here are a few highlights from the last year:
Code intuition: When you write code in Deepnote, we help you focus on the problem at hand with features like autocomplete, surfaced function definitions, and go-to-definition.
Reactivity: Deepnote solves a core problem with traditional computational notebooks: stale, out-of-order state. We've designed Deepnote to be fully reactive. Whenever code changes or a cell is deleted or moved, outputs are automatically updated as if the notebook were executed fresh, from top to bottom.
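The idea behind reactive re-execution can be sketched in a few lines. This is our simplified illustration, not Deepnote's implementation: each cell declares the names it reads and writes, and an edit re-runs the edited cell plus every cell downstream that (transitively) depends on it, matching a fresh top-to-bottom run.

```python
# Minimal sketch of reactive re-execution (illustration only, not
# Deepnote's implementation). Cells declare reads/writes so the
# runtime knows what an edit invalidates.
cells = [
    {"code": "x = 2",      "reads": set(),  "writes": {"x"}},
    {"code": "y = x * 10", "reads": {"x"},  "writes": {"y"}},
    {"code": "z = y + 1",  "reads": {"y"},  "writes": {"z"}},
]

def affected(cells, changed):
    """Indices to re-run after editing cells[changed]: the edited cell
    plus every later cell that reads a name an affected cell writes."""
    dirty = set(cells[changed]["writes"])
    to_run = [changed]
    for i in range(changed + 1, len(cells)):
        if cells[i]["reads"] & dirty:      # depends on an invalidated name
            to_run.append(i)
            dirty |= cells[i]["writes"]    # its outputs are now stale too
    return to_run

env = {}
for cell in cells:                 # initial full run caches all results
    exec(cell["code"], env)

cells[1]["code"] = "y = x * 100"   # the user edits the second cell
for i in affected(cells, 1):       # re-runs cells 1 and 2, not cell 0
    exec(cells[i]["code"], env)
# env["z"] is now 201, as if the notebook ran fresh top to bottom
```

A real implementation would infer the read/write sets from the code itself rather than declaring them by hand, but the invalidation logic is the same.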
Scheduling: We added the ability to run notebooks while you're away with scheduling. Execute your notebooks daily, weekly, or hourly to process data or update a dashboard at a regular frequency.
“Before Deepnote, we were using notebooks in our local environments. Sharing insights across our org was very difficult. Even though we had a place where all the code lived, collaborating via GitHub was so painstaking we never actually did it. Deepnote allowed us to get on the same page through collaboration, and everyone gets to use their preferred tools.”
Allie Russell, Senior Manager of Data Science at Webflow
Because Deepnote is a collaborative product that connects to the data stack, we built it to be both performant and secure. As adoption spreads from a single analyst to an entire organization, data teams have the flexibility to customize machines to maximize performance, and security teams have full control over user permissions and access.
Customize environments: By default, every project in Deepnote runs on a fully managed, cloud-hosted environment, so data teams don't need to worry about initial setup. Everything is secure by default and works out of the box. For use cases requiring more compute or specialized hardware, changing a machine is as simple as clicking a button and selecting an option for more memory or a GPU.
User permissions: Deepnote admins have full control over who can access and contribute to a workspace, from view-only business users to data scientists producing notebooks.
SOC 2: Deepnote has SOC 2 Type II certification, which shows that our software development processes, technical security controls, and administrative procedures meet the levels of oversight and monitoring defined by the industry standard.
While we're proud of what we've built, there's still a long road to reaching our mission. Removing the "beta" label is just a small step to making data science more collaborative and accessible. We'll be accelerating product velocity, doubling the team, and investing even more in supporting our customers and community in the coming year. We envision a notebook that helps everyone work with data. A place where teams can collaborate without barriers. A place where problems get solved.
As we grow, we’ll be looking for more great makers and doers excited about building the next generation of data tooling to join our team. Check out our job openings here →