The AI Assistant Showdown: Databricks Genie Code vs Snowflake Cortex Code
[Image generated with Nano Banana Pro]


Time to read: 20 minutes

Introduction

What interesting times to live in! We can now vibe-code data & AI solutions in the leading data platforms - Databricks and Snowflake! Recently both vendors introduced their AI assistants - Databricks Genie Code and Snowflake Cortex Code (CoCo). Well, not exactly "introduced" - Genie Code is a successor (rebranding?) of the previously introduced Databricks Assistant. Still, both assistants are new, hot, and heavily tested by customers, partners, and the community, so I couldn't miss the opportunity to run some experiments and see how both assistants can help with different tasks on both platforms.

In this article I'll share my observations and findings from these experiments.

Experiments

Some time ago I wrote about projects for which I used Cortex Code. My vibe-coding experiments targeting both Snowflake and Databricks included:

  • Simple catalog tasks - like creating a new catalog/database or schema.
  • Complex data engineering tasks - like building an end-to-end data pipeline implementing the medallion architecture or adding data quality mechanisms to existing data pipelines.
  • BI tasks - like creating a dashboarding solution (with semantic modeling) based on a ready-to-use data mart.
  • FinOps tasks - like cost optimization frameworks and monitoring tools (alerts, reports).
  • Platform setup and administration tasks - like setting up the platform for regulated customers.

Notes from the field

Here are my observations:

💡 Start. Both tools are available out of the box in the web UI, so it's very easy to start using them. Just click one icon in the UI and you can start chatting with the assistant.

💡 Access control. While getting started with both tools is easy, disabling either one is a different matter (not every enterprise organization will want AI tools enabled for their users). Cortex Code can be disabled for specific users using RBAC (simply revoke the specific database roles from the user's roles). For Genie Code there is no straightforward way to disable it for specific users. The only option seems to be disabling the Partner-powered AI features option on the account or workspace level, which is not dedicated just to Genie Code. Both tools work in the current user's context when it comes to performing operations on the platform.
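To illustrate the Snowflake side, the revocation boils down to a single RBAC statement. A minimal sketch - note that the exact database role gating Cortex Code is an assumption here (SNOWFLAKE.CORTEX_USER gates Cortex features in general; verify the exact role for Cortex Code in the current documentation):

```sql
-- Sketch: disabling Cortex access for a specific role via RBAC.
-- The database role name below is an assumption - check the docs
-- for the exact role(s) that gate Cortex Code in your account.
USE ROLE ACCOUNTADMIN;
REVOKE DATABASE ROLE SNOWFLAKE.CORTEX_USER FROM ROLE analyst_role;
```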

💡 Command line. In addition to Cortex Code in Snowsight (web UI), Snowflake offers Cortex Code CLI - a command line interface that runs in a local shell. The CLI opens up many ways of working with Cortex Code: local development, IDE integration (e.g. with VS Code or Cursor), local file system access (e.g. for uploading files to Snowflake stages or using your own libraries of skills), cross-system pipeline integration (support for dbt and Airflow), git integration, subagents working on a project in parallel, hooks (intercepting and customizing CoCo's behavior at key lifecycle points), model choice, and working across different Snowflake accounts (you can even select two different accounts for a single CoCo CLI session context - one for CoCo inference, one for performing SQL tasks). As a result, with CoCo CLI you can implement much more complex projects, also using tools and platforms other than Snowflake. For example, I recently played with preparing new Snowflake accounts for a greenfield customer from a regulated industry - the result of the project was a set of parametrized Terraform and SQL scripts containing a full account setup, including integrations with 3rd party services. Note: to work with Cortex Code CLI you need either a commercial Snowflake account or a dedicated Cortex Code CLI Trial account (warning - credit card required!). Genie Code does not provide a CLI, which I find a bit disappointing (the closest thing to CoCo CLI seems to be the Databricks AI Dev Kit, but it requires 3rd party tools like Claude Code or Cursor).

💡 Customization. Both tools offer customization. Genie Code allows adding MCP servers, user and workspace instructions, user and workspace skills, and a Serverless Usage Policy (usage tagging) - all in one Settings pane. Cortex Code in Snowsight allows adding user instructions (an AGENTS.md file in the user's workspace) and personal (user's) skills. Cortex Code CLI allows adding skills, subagents, hooks, and MCP servers.
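As an illustration, the user instructions for Cortex Code are just a markdown file. A minimal AGENTS.md sketch could look like this (the contents are purely illustrative, not a recommended standard):

```markdown
# AGENTS.md - user instructions for Cortex Code (illustrative example)
- Always fully qualify object names (database.schema.object).
- Prefer dynamic tables for simple incremental transformations.
- Never switch to ACCOUNTADMIN unless explicitly asked to.
- When generating DDL, add a comment explaining the object's purpose.
```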

💡 Assistant modes. Genie Code works in one of two modes: 1) Agent (default and recommended), which can automate multi-step workflows, plan a solution, retrieve relevant assets, run code, use cell outputs to improve results, fix errors automatically, and more; 2) Chat, which answers questions and generates code within the chat (to be run by the user). Cortex Code also works in two modes: 1) Execution - analogous to Agent mode in Genie Code; 2) Plan - in this mode CoCo takes some time to prepare a comprehensive step-by-step plan before taking action.

💡 LLM models. Cortex Code allows the user to choose the LLM model to work with (as I'm writing this - various Claude models plus OpenAI GPT 5.2). Unfortunately, most of these models are available only in AWS, which means you may need to enable cross-region inference to have them available for a Snowflake account located in other clouds (typically, setting the CORTEX_ENABLED_CROSS_REGION option to AWS_EU does the trick). Genie Code uses Azure AI Services or Anthropic on Databricks as model providers for Agent mode (I suspect the specific models depend on the cloud region) and Azure AI Services for chat and cell actions. Also, Databricks has a neat workspace option, Enforce data processing within workspace Geography for Designated Services, which prevents Genie Code from processing data with models served outside of the workspace's geographical region.
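The cross-region setting mentioned above is a one-line, account-level change (it requires elevated privileges, and - as noted later in this article - check your regulatory constraints first). A minimal sketch:

```sql
-- Allow Cortex inference requests to be routed to AWS EU regions
-- when the required models are not hosted in the account's home region.
USE ROLE ACCOUNTADMIN;
ALTER ACCOUNT SET CORTEX_ENABLED_CROSS_REGION = 'AWS_EU';
```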

💡 File attachment. Both tools allow you to attach files to your prompts. However, Genie Code supports only image files while in Cortex Code you can attach any file (including CSV or Excel files which you can then upload to your tables).

💡 Context. Both tools allow you to set the context. Cortex Code keeps the context of the currently open file (e.g. a SQL script), plus you can use the @ prefix to refer to specific resources (databases, schemas, tables, etc.) or the # prefix. Genie Code lets you set the context to any of your files in the Databricks workspace and - just like in CoCo - refer to Unity Catalog resources using the @ prefix. Both assistants in the web UI know the context of the UI itself.

💡 Platform awareness. Both assistants are aware of their platforms - catalogs, capabilities, specific features.

💡 Asking before acting. Both tools ask the user for permission before executing any code. And they do it for a reason - the code can contain commands or queries you don't want to run in your environment (e.g. switching to an overly privileged role in Snowflake). The user (that's you!) is ultimately responsible for any action that interferes with the platform. So, general recommendation: always read AI-generated code BEFORE running it.

💡 Monitoring. Both platforms provide ways to monitor usage of their AI assistants. Databricks provides a dashboard which can be used to track Genie Code usage in the organization. Snowflake provides two account usage views allowing usage and cost tracking.

💡 Almost any geography. Both assistants can run even if models supporting their work are not available in specific cloud region. For environments in almost any cloud region (there are some exceptions, e.g. Qatar region in Azure for Databricks) you can enable cross-region inference (Snowflake) or cross-geo processing (Databricks). Important: before using this option make sure you can use it in your environment from regulation perspective!

💡 Genie Code seemed less predictable. Example: I submitted similar prompts three times within the same workspace (something like "Create a new catalog called test.") and got three different responses/actions: 1) catalog creation required administrative rights (this response came with the right SQL code for catalog creation) - as if Genie Code wasn't able to check my permissions; 2) catalog creation failed because the workspace requires a storage location; 3) the catalog and its schemas were created successfully (but then Genie Code "cheated" on loading the local file - it detected there was an existing table called superstore and simply copied its content to the raw schema). An analogous task in Cortex Code executed with no problems.

[Image: Nondeterministic behavior of Genie Code]

💡 Databricks Free Edition mystery. In Databricks Free Edition I got an interesting response to my question why Genie Code was not able to create a new catalog for me. The documentation says nothing about Genie Code's limitations in Free Edition. Hallucination? :-)

[Image: Genie Code's mysterious response on Databricks Free Edition]

💡 Cortex Code was more efficient in my experiments. Completing specific tasks in Snowflake using Cortex Code took noticeably less time and fewer iterations than completing the same tasks in Databricks using Genie Code. Sometimes I had the impression Genie Code got into a loop and tried the same ineffective methods to solve a problem over and over again (e.g. when trying to create an AI/BI Dashboard, for some reason it wanted to write the code in a notebook or in Python scripts). And it's not like Cortex Code didn't make mistakes - it sometimes produced code that couldn't be run for various reasons, but it iterated and fixed problems very quickly (without asking additional questions) and the creative process continued.

💡 Cortex Code seemed less authoritative. Genie Code tended to impose functionality in situations where the choice of how to accomplish a task wasn't obvious. Example: when I asked both assistants to implement a data pipeline with a daily refresh of data, Genie Code immediately went for Lakeflow Declarative Pipelines (LDP), while Cortex Code asked additional questions and then decided (correctly) to choose a mix of tasks + procedures and dynamic tables. I'm not sure whether for Genie Code it was a matter of lacking skills that support features other than LDP, or of "pushing" the adoption of a specific feature.
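To make the comparison concrete, the "dynamic tables" part of the solution Cortex Code chose can be sketched like this (all object names are illustrative, not taken from my actual project):

```sql
-- Sketch of a daily-refreshed dynamic table (illustrative names).
-- TARGET_LAG tells Snowflake how stale the table may get; refreshes
-- are then scheduled automatically on the given warehouse.
CREATE OR REPLACE DYNAMIC TABLE silver.orders_clean
  TARGET_LAG = '24 hours'
  WAREHOUSE = transform_wh
AS
SELECT
  order_id,
  customer_id,
  TRY_TO_DATE(order_date) AS order_date,
  amount
FROM bronze.orders_raw
WHERE order_id IS NOT NULL;
```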

💡 Product documentation and your knowledge matter. Both assistants reach for the documentation to search for information when needed. If the documentation contains errors, or is outdated or incomplete, it may have a significant impact on the assistant's responses. In such cases it's good to be up to date with the specific features yourself. Which leads me to the conclusion that we should...

💡 Delegate tasks, not wisdom. Although AI assistants are powerful and can help with many tasks, a human should have control over the entire process and code quality. Uncritically accepting what the assistant suggests/returns can lead to the creation of "monster solutions". Both Databricks and Snowflake put warnings in UIs of their assistants: Genie Code - "Always review the accuracy of responses.", Cortex Code - "Cortex Code can make mistakes, double-check responses".

💡 Your skills matter. It's not only your product and workload knowledge that matters. Your skills in working with AI tools like Claude are worth their weight in gold (prompt engineering, the 4D Framework, working with skills and subagents, etc.).

Feature strengths and weaknesses

Databricks Genie Code

✅ Strengths:

  • Easy to start - Genie Code is built into the Databricks web UI, just click one button and you're ready to go.
  • Platform awareness - Genie Code is aware of Databricks features and can help apply the right features to specific problems.
  • Customization in one place - a single Settings pane allows you to manage skills, instructions, and MCP servers, as well as assign a Serverless Usage Policy.
  • Pricing - all current Genie Code capabilities are available at no additional cost (read - no token or inference cost). Users pay only for the compute that they use to run their notebooks, queries, jobs, and so on. The question is, how long will Databricks maintain this pricing?
  • Integration with Unity Catalog - this integration ensures that the agent follows the same security and governance rules as the rest of the Databricks platform.
  • Data processing in EU - model providers for Genie Code are available in EU and the Enforce data processing within workspace Geography for Designated Services option keeps data processing within Databricks workspace geography.

❌ Weaknesses:

  • Access control - no option to disable Genie Code for specific users.
  • Limited file attachment - you can attach only images to your prompts in Genie Code.
  • No CLI - Genie Code works only in web UI of Databricks.
  • Lack of skill transparency - I was not able to get a full list of the skills built into Genie Code.
  • Efficiency needs some improvement - the cases I mentioned above (nondeterministic behaviors, weird loops) slow down the user's work with Genie Code.
  • Reliability - several times I got some unhandled exceptions (see the image below). I would expect a feature announced as GA to be free from surprises like that.

[Image: Unhandled exception in Genie Code]

Snowflake Cortex Code

✅ Strengths:

  • Easy to start - Cortex Code is built into the Snowflake web UI (Snowsight), just click one button and you're ready to go.
  • Platform awareness - Cortex Code is aware of Snowflake features and can help apply the right features to specific problems.
  • Access control - you can control which users/roles have access to Cortex Code.
  • Cortex Code CLI - the command line interface enables tons of different ways of working with Cortex Code and 3rd party platforms (dbt, Airflow). I consider it one of the major competitive advantages of CoCo.
  • Out-of-the-box skills + extensibility + skill transparency - Cortex Code comes with a great number of ready-to-use specialized agentic skills (in CLI it's almost 40 skills), e.g. machine-learning, data-governance, cost-intelligence, data-quality. And you can easily create your own custom skills (you can even ask CoCo to create them for you). What's important, Snowflake is fully transparent when it comes to skills - all skills can be listed in CoCo using the /skills command.

[Image: Available skills listed in Cortex Code CLI]

  • File attachment - you can easily add files of any format to your prompts.
  • Cost and usage tracking and control - Snowflake provides ways to track (account usage views for usage history of CoCo in CLI and Snowsight tracking all usage aspects - credits, tokens, users, LLM models, cache efficiency) and control (daily limits per user for CoCo in Snowsight and CLI) usage and cost of Cortex Code.
  • Pricing - with the latest update to Snowflake Service Consumption Table and introduction of AI Credits the price of Cortex Code got significantly lower than it used to be.
  • Integration with Snowflake Horizon catalog - this integration ensures that the agent follows the same security and governance rules as the rest of the Snowflake platform.

❌ Weaknesses:

  • Model availability - if you have your Snowflake account on Azure or GCP it's almost certain you will have to enable cross-region inference to use Cortex Code.
  • Cortex Code CLI not supported by the classic Trial - you have to use a commercial Snowflake account or a dedicated CoCo CLI Trial account (only $40 to use with CoCo CLI, requires your credit card) to work with CoCo CLI.
  • Losing the UI context - occasionally Cortex Code in Snowsight asked me to switch to Workspaces in order to continue code editing. I would expect an AI assistant to always be able to navigate the UI on its own.

[Image: Cortex Code asking the user to switch to the Workspaces view]

Summary

A couple of thoughts to wrap up this article:

  • Vibe-coding is real but so is the learning curve. Both Cortex Code and Genie Code genuinely lower the barrier to building data solutions on their respective platforms. Yet your ability to get the most out of them scales directly with your existing knowledge - of the platform, of the workloads, and of working effectively with AI tools in general. In other words: the better you are in your role as a data analyst, engineer, or architect, the better vibe-coder you'll be.
  • It's not just about vibe-coding. Both assistants can do a lot more - for example assist you in ideation and building checklists or guidance for complex processes like environment setup, building generic frameworks or planning platform migration. Do not treat Genie Code and Cortex Code as just quick-handed developers.
  • Delegate tasks, not wisdom. Both vendors put explicit warnings in their UIs reminding users to double-check AI-generated content and code - and they're right to do so. Uncritically accepting what an AI assistant produces is a fast track to unmaintainable, inefficient, or outright broken solutions (or to building technical debt). Use their speed, but verify the products of their work. Read: have a robust verification process in place if you consider taking AI-generated code to production. And always keep a human in charge.
  • CLI is a game changer - if you can use it. Cortex Code CLI opens up a completely different league of use cases: Terraform automation, dbt and Airflow integration, multi-agent projects, data-driven apps, local file system access, etc. For now, Genie Code has no equivalent, which is a meaningful gap for power users and complex enterprise scenarios.
  • Pricing models will evolve - plan accordingly. Genie Code's current zero-inference-cost model is compelling but it's worth asking how sustainable it is long-term. Snowflake's AI Credits approach seems to be more transparent and arguably more enterprise-ready today. Either way, cost tracking and governance for AI assistants in data platforms is still maturing - and your finance team will eventually start asking questions.
  • Reliability and predictability matter in enterprise. Non-deterministic behaviors (and no, it's not about LLMs being nondeterministic), unhandled exceptions, and looping agents are tolerable in a sandbox or in Preview stage but not in serious projects for enterprise customers.
  • Same old foundations, higher stakes. Permissions, data quality, governance, security, even data modeling - all the unglamorous fundamentals you've been working on for years suddenly become painfully visible when an AI assistant starts (almost) autonomously executing SQL or Python code in your environment. Consider these tools as a stress test for your platform maturity, not a shortcut around it.
  • The best AI assistant is the one that fits your platform strategy. If you're Snowflake-first, Cortex Code - especially with CLI - is a strong, mature choice. If you're Databricks-first, Genie Code offers a frictionless start and attractive pricing, with some room to grow. Neither tool should drive your platform decision. But if you're still sitting on the fence, their AI assistants are actually a pretty interesting lens through which you can compare the two platforms.

I'm curious to learn about your experiences using AI assistants in Databricks and Snowflake. What were you able to complete? What broke? What was the most annoying? Where do you see the biggest wins from using AI assistants for people working with both platforms? Thank you in advance for sharing your thoughts!


