Snowflake Native Apps Best Practices#

Warehouse Size#

For optimal performance, use a dedicated warehouse for Native Apps. A warehouse size of X-Small is typically sufficient.
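A dedicated X-Small warehouse can be created with a statement like the following (the warehouse name and auto-suspend settings are illustrative, not required by the app):

```sql
-- Hypothetical warehouse name; adjust to your environment.
CREATE WAREHOUSE IF NOT EXISTS MELISSA_APP_WH
  WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND = 60          -- suspend after 60 seconds of inactivity
  AUTO_RESUME = TRUE
  INITIALLY_SUSPENDED = TRUE;

USE WAREHOUSE MELISSA_APP_WH;
```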

Snowflake SQL Scripting#

For general information about Snowflake SQL Scripting, see the Snowflake documentation: Snowflake SQL Scripting.
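As a quick orientation, a minimal anonymous Snowflake Scripting block looks like this (the table name is a placeholder):

```sql
-- Minimal anonymous Snowflake Scripting block (illustrative).
EXECUTE IMMEDIATE $$
DECLARE
  row_count INTEGER;
BEGIN
  SELECT COUNT(*) INTO :row_count FROM MY_INPUT_TABLE;  -- hypothetical table
  RETURN 'Rows to process: ' || row_count;
END;
$$;
```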

Batch Processing#

Native Apps are designed to process data in batches. We recommend the following approach for handling large datasets:

  • Unless you have a specific requirement, we recommend using the default input and output fields. A smaller number of output fields can improve performance.

  • RecordID is required for all input records and must be unique. If you reuse the same output table across multiple runs, make sure RecordID is unique across all runs.

  • To minimize the risk of duplicate requests caused by server outages or other unexpected issues during processing, we recommend the following:

    • Enable event sharing for the app so you can monitor processing.

    • Pre-process the input source to ensure all RecordID values are unique, and work from a clean copy of it. Split large datasets into smaller batches.

    • Process each batch separately. Avoid unnecessary loops in your script.

    • Combine the results if necessary.

    • Set DUPLICATE_CHECK = TRUE (if available) to catch any duplicate RecordIDs.

      Note

      Enabling DUPLICATE_CHECK may increase runtime, especially for large datasets. We recommend testing with and without this option to determine the best approach for your use case.
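The pre-processing and batching steps above can be sketched as follows. All table, column, and procedure names here are placeholders; the batch size of 100,000 rows is only an example:

```sql
-- 1. Make a clean copy with verified-unique RecordID values.
CREATE OR REPLACE TABLE INPUT_CLEAN AS
SELECT *
FROM RAW_INPUT
QUALIFY ROW_NUMBER() OVER (PARTITION BY RECORDID ORDER BY RECORDID) = 1;

-- 2. Assign each record to a batch of at most 100,000 rows.
CREATE OR REPLACE TABLE INPUT_BATCHED AS
SELECT *,
       FLOOR((ROW_NUMBER() OVER (ORDER BY RECORDID) - 1) / 100000) AS BATCH_ID
FROM INPUT_CLEAN;

-- 3. Process one batch at a time (repeat per BATCH_ID).
CREATE OR REPLACE TEMPORARY TABLE BATCH_0 AS
SELECT * FROM INPUT_BATCHED WHERE BATCH_ID = 0;

-- CALL STORED_PROCEDURE_NAME(... input table BATCH_0 ...);

-- 4. Combine the per-batch outputs if necessary.
-- CREATE OR REPLACE TABLE FINAL_OUTPUT AS
-- SELECT * FROM OUTPUT_BATCH_0
-- UNION ALL
-- SELECT * FROM OUTPUT_BATCH_1;
```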

License Key as Input#

You can store your License Key in a secure location and reference it in your SQL scripts using a SELECT subquery:

USE MELISSA_NATIVE_APP.CORE;

CALL STORED_PROCEDURE_NAME(
  LICENSE         => (SELECT LICENSE_KEY FROM <MY_SECURE_TABLE>),
  -- other parameters
);
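One way to set up such a secure location is a dedicated, access-restricted table (the database, schema, and table names here are illustrative):

```sql
-- Store the license key once in a restricted table; grant access narrowly.
CREATE TABLE IF NOT EXISTS MY_SECURE_DB.SECRETS.LICENSE (LICENSE_KEY VARCHAR);
INSERT INTO MY_SECURE_DB.SECRETS.LICENSE VALUES ('your-license-key');
```

The subquery in the CALL above then reads the key at run time, so the key itself never appears in your scripts.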

Table Names as Input#

All input and output table names must follow Snowflake’s identifier naming conventions.

Please refer to Identifier Requirements for more details.
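For example, unquoted identifiers are case-insensitive and resolved as uppercase, while double-quoted identifiers are case-sensitive and may contain spaces or special characters (the names below are illustrative):

```sql
SELECT * FROM MY_DB.PUBLIC.INPUT_TABLE;     -- unquoted: resolves to INPUT_TABLE
SELECT * FROM MY_DB.PUBLIC."Input Table";   -- quoted: exact, case-sensitive match
```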