The AI Assistant Showdown: Databricks Genie Code vs Snowflake Cortex Code
Time to read: 20 minutes
Introduction
What interesting times to live in! Now we can vibe-code data & AI solutions in leading data platforms - Databricks and Snowflake! Recently both vendors introduced their AI assistants - Databricks Genie Code and Snowflake Cortex Code (CoCo). Well, not exactly that - Genie Code is a successor (rebranding?) of previously introduced Databricks Assistant. Still, both assistants are new, hot and heavily tested by customers, partners and the community, so I couldn't miss the opportunity to run some experiments and see how both assistants can help in performing different tasks on both platforms.
In this article I'll share my observations and findings from these experiments.
Experiments
Some time ago I wrote about projects for which I used Cortex Code. My vibe-coding experiments aimed at both Snowflake and Databricks included:
Notes from the field
Here are my observations:
💡 Start. Both tools are out-of-the-box available in web UI, so it's very easy to start using them. Just click one icon in the UI and you can start chatting with the assistant.
💡 Access control. While getting started with both tools is easy, disabling either one is a different matter (not every enterprise organization will want to have AI tools enabled for their users). Cortex Code can be disabled for specific users using RBAC (simply revoke specific database roles from user's roles). For Genie Code there is no straightforward way to disable it for specific users. The only option seems to be disabling the Partner-powered AI features option on the account or workspace level which is not dedicated just to Genie Code. Both tools work in the current user's context when it comes to performing operations on the platform.
💡 Command line. In addition to Cortex Code in Snowsight (web UI), Snowflake offers Cortex Code CLI - a command line interface that runs in a local shell. The CLI gives opens many ways of working with Cortex Code - local development, IDE integration (e.g. with VS Code or Cursor), local file system access (e.g. for uploading files to Snowflake stages or using your own libraries of skills), cross-system pipeline integration (support for dbt and Airflow), git integration, subagents working on a project in parallel, hooks (intercept and customize CoCo’s behavior at key lifecycle points), model choice, working on different Snowflake accounts (you can even select two different accounts for a single CoCo CLI session context - one for CoCo inference, one for performing SQL tasks). As a result, with CoCo CLI you can implement much more complex projects (e.g. recently I played with preparing new Snowflake accounts for a green field customer from regulated industry - the result of the project was a set of parametrized Terraform and SQL scripts containing a full account setup including integrations with 3rd party services), also using tools and platforms other than Snowflake. Note: to work with Cortex Code CLI you need either commercial Snowflake account or a dedicated Cortex Code CLI Trial account (warning - credit card required!). Genie Code does not provide a CLI which I find a bit disappointing (the closest to CoCo CLI seems to be the Databricks AI Dev Kit but it requires you to use 3rd party tools like Claude Code or Cursor).
💡 Customization. Both tools offer customization. Genie Code allows adding MCP servers, user and workspace instructions, user and workspace skills and Serverless Usage Policy (usage tagging) - all in one Settings pane. Cortex Code in Snowsight allows adding user instructions (AGENTS.md file in user's workspace) and personal (user's) skills. Cortex Code CLI allows adding skills, subagents, hooks and MCP servers.
💡 Assistant modes. Genie Code works in one of two modes: 1) Agent (default and recommended) which can automate multi-step workflows, plan a solution, retrieve relevant assets, run code, use cell outputs to improve results, fix errors automatically, and more, 2) Chat which answers questions and generate code within the chat (which can be run by the user). Cortex Code also works in two modes: 1) Execution - analogous to Agent mode in Genie Code, 2) Plan - in this mode CoCo takes some time to prepare a comprehensive step-by-step plan before taking actions.
💡 LLM models. Cortex Code allows user to choose the LLM model to work with (as I'm writing this - different Claude models + OpenAI GPT 5.2). Unfortunately, most of these models are available only in AWS which means you may need to enable cross-region inference to have them available for your Snowflake account located in other clouds (typically, setting the CORTEX_ENABLED_CROSS_REGION option to AWS_EU does the thing). Genie Code uses Azure AI Services or Anthropic on Databricks as model providers for Agent mode (I suspect specific models depend on the cloud region) and Azure AI Services for chat and cell actions. Also, Databricks has this neat workspace option Enforce data processing within workspace Geography for Designated Services which prevents Genie Code from processing data with models served outside of the workspace geographical region.
💡 File attachment. Both tools allow you to attach files to your prompts. However, Genie Code supports only image files while in Cortex Code you can attach any file (including CSV or Excel files which you can then upload to your tables).
💡Context. Both tools allow you to set the context. Cortex Code keeps the context of currently open file (e.g. SQL script) plus you can use the @ prefix to refer to specific resources (databases, schemas, tables, etc.) or the #. Genie Code lets you set the context to any of your files in the Databricks workspace and - just like in CoCo - refer to Unity Catalog resources using the @ prefix. Both assistants in web UI know the context of the UI itself.
💡 Platform awareness. Both assistants are aware of their platforms - catalogs, capabilities, specific features.
💡 Asking before acting. Both tools ask the user for permission before executing any code. And they do it for a reason - the code can contain commands or queries you don't want to run in your environment (e.g. switching to too privileged role in Snowflake). The user (that's you!) is ultimately responsible for taking any action that interferes with the platform. So, general recommendation: always read AI-generated code BEFORE running it.
💡 Monitoring. Both platforms provide ways to monitor usage of their AI assistants. Databricks provides a dashboard which can be used to track Genie Code usage in the organization. Snowflake provides two account usage views allowing usage and cost tracking.
💡 Almost any geography. Both assistants can run even if models supporting their work are not available in specific cloud region. For environments in almost any cloud region (there are some exceptions, e.g. Qatar region in Azure for Databricks) you can enable cross-region inference (Snowflake) or cross-geo processing (Databricks). Important: before using this option make sure you can use it in your environment from regulation perspective!
💡 Genie Code seemed less predictable. Example: I submitted similar prompts three times within the same workspace (something like "Create a new catalog called test.") and got three different responses/actions: 1) catalog creation required administrative rights (this response came with the right SQL code for catalog creation) - just like Genie Code wasn't able to check my permissions, 2) catalog creation failed because this workspace requires a storage location, 3) successfully created a new catalog and its schemas (but then "cheated" on loading the local file - it detected there was a table called superstore and simply copied its content to the raw schema). An analogous task in Cortex Code executed with no problems.
💡 Databricks Free Edition mystery. In Databricks Free Edition I got an interesting response to my question why Genie Code was not able to create a new catalog for me. The documentation says nothing about Genie Code's limitations in Free Edition. Hallucination? :-)
Recommended by LinkedIn
💡 Cortex Code was more efficient in my experiments. Completing specific tasks in Snowflake using Cortex Code took noticeably less time and iterations than completing the same tasks in Databricks using Genie Code. Sometimes I had an impression Genie Code got into a loop and tried the same ineffective methods to solve the problem over and over again (e.g. when trying to create an AI/BI Dashboard, for some reason it wanted to write the code in a notebook or in Python scripts). And it's not like Cortex Code didn't make mistakes - it sometimes produced code that couldn't be run for various reasons, but it iterated and fixed problems very quickly (without asking additional questions) and the creative process continued.
💡 Cortex Code seemed less authoritative. Genie Code tended to impose functionality in situations where the choice of how to accomplish a task wasn't obvious. Example: when I asked both assistants to implement a data pipeline with daily refresh of data, Genie Code immediately went into Lakeflow Declarative Pipeline (LDP) while Cortex Code asked additional questions and then decided (correctly) to choose a mix of tasks + procedures and dynamic tables. I'm not sure if for Genie Code it was a matter of lack of skills supporting features other than LDP or "pushing" on the adoption of a specific feature.
💡 Product documentation and your knowledge matter. Both assistants reach to the documentation to search for information when needed. If the documentation contains errors, is outdated or incomplete, it may have a significant impact on the assistant's responses. And in such cases it's good to be up to date with specific features. Which leads me to the conclusion that we should...
💡 Delegate tasks, not wisdom. Although AI assistants are powerful and can help with many tasks, a human should have control over the entire process and code quality. Uncritically accepting what the assistant suggests/returns can lead to the creation of "monster solutions". Both Databricks and Snowflake put warnings in UIs of their assistants: Genie Code - "Always review the accuracy of responses.", Cortex Code - "Cortex Code can make mistakes, double-check responses".
💡 Your skills matter. Not only your product and workload knowledge matter. Your skills in working with AI tools like Claude are worth their weight in gold (prompt engineering, the 4D Framework, working with skills and subagents, etc.).
Feature strengths and weaknesses
Databricks Genie Code
✅ Strengths:
❌ Weaknesses:
Snowflake Cortex Code
✅ Strengths:
❌ Weaknesses:
Summary
Couple of thoughts to wrap up this article:
I'm curious to learn about your experiences from using AI assistants in Databricks and Snowflake. What were you able to complete? What was breaking? What was the most annoying? Where do you see the biggest wins from using AI assistants for people working with both platforms? Thank you in advance for sharing your thoughts!
Real talk. 🎯 Code assistants are not about replacing devs. They are about compressing the feedback loop between idea and working prototype. We cut query dev time 40 percent by treating AI suggestions like pair programming. Review, test, deploy. Neither tool wins on paper. They win on workflow fit. Great breakdown. #DataAI #BuildSmart
thanks, great article!!! Getting more and more experienced with Genie Spaces and Genie Code. Especially the Agent function in the Genie Spaces is marvelous. If you ask what can i analyze with this genie space it gives back a wealth of information which i then use to further work out either within the genie itself or the general genie code. An example: i made a genie space with the Procure to Pay (P2P) curated silver or gold tables (pr/po/ gr/ir…) and then ask what can i analyze with this space… which kpi’s , variance analysis, dq measures, wow the feeback you get is mindblowing…
Pedro Mauricio Esparza García Ricardo González Eduardo Martinez Orozco Margarita Ocampo Avelar
Interesting comparison, but the real question nobody's asking: who owns the auth layer? Genie and Cortex both inherit their platform's identity model, which sounds convenient until... Unified governance is only as strong as its weakest auth handoff. The 'best AI assistant' crown should go to whichever one makes the least assumptions about who's asking :)
Hi Pawel Potasinski! Thank you for a very detailed and informative side-by-side analysis. One thing I noticed in my experience was that Genie Code lacked a bit in platform awareness. When I was building an ML pipeline, it required troubleshooting the OOM Spark driver issue that had nothing to do with the task at hand. It should have known better than to pull unsampled data into driver memory and defaulted to distributed machine learning (like Spark MLlib or XGBoost on Spark) or applied a .sample method. (Which it did eventually.)