✅ Source ✅ Destination (coming soon)
This article explains how to transfer data at scale between your database or data warehouse (Redshift, Azure, etc.) and MadKudu through Amazon S3. To send data from BigQuery or Snowflake, please use our direct BigQuery and Snowflake integrations instead.
What does this integration do?
Amazon Simple Storage Service (Amazon S3) is a service that lets you store and exchange data in a highly scalable, reliable, fast, and inexpensive way.
Source: Send MadKudu your data sitting in your Data Warehouse or other integrations not connected to MadKudu, such as your web traffic or in-app data, so that you can use it in models and segmentations.
Destination (coming soon): Send the results of your MadKudu models and segmentations as a file in an S3 bucket to import back into your Data Warehouse.
Use cases
Send behavioral data to MadKudu on a daily basis to use in your activity feed or for an engagement score.
Score contacts on a daily basis in your data warehouse to surface qualified and engaged users.
Send a list of emails or domains for one-time scoring to surface qualified prospects to focus on.
How to set it up?
How to send data from S3 to MadKudu
Please follow these steps to enable Amazon S3 as a source:
Let us know when you have added a file to your bucket and we'll confirm that MadKudu can pull it.
Note:
All the columns you plan to send to MadKudu on a recurring basis need to be present in this test file.
Set up a recurring dump (at least daily) of your fresh data to S3 and give us a green light to activate the recurring pull of your data.
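As a sketch of the recurring-dump step, the snippet below writes a dated CSV export that a daily job could then push to your bucket. The column names, file name, and bucket path are hypothetical; your recurring files must contain the same columns as the test file MadKudu validated.

```python
import csv
from datetime import date

# Hypothetical column list -- every recurring file must contain the same
# columns as the validated test file.
EXPORT_COLUMNS = ["email", "event_name", "timestamp"]

def write_daily_export(path, events):
    """Write one day's behavioral events to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=EXPORT_COLUMNS,
                                lineterminator="\n")
        writer.writeheader()
        writer.writerows(events)

events = [
    {"email": "jane@acme.com", "event_name": "login",
     "timestamp": "2024-01-15T09:30:00Z"},
]
path = f"events_{date.today().isoformat()}.csv"
write_daily_export(path, events)

# A daily cron job could then push the file to your bucket, e.g.:
#   aws s3 cp events_YYYY-MM-DD.csv s3://your-madkudu-bucket/daily/
```

Scheduling this script (for example with cron) at least daily keeps the data MadKudu pulls fresh.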
Once this is set up and MadKudu has access to your data, you'll be able to:
Map the data pulled into the MadKudu platform in the Event Mapping.
Start building an Engagement Score or Aggregations based on your behavioral data.
FAQ
What CSV parameters does MadKudu use when Amazon S3 is enabled as a Destination?
delimiter: quote
row delimiter: \n (line break)
quotes: around multiline elements only (signals)
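To illustrate the quoting behavior, the sketch below writes a row with Python's standard `csv` module, assuming a comma delimiter (an assumption; the delimiter line above is ambiguous). `QUOTE_MINIMAL` quotes only fields that contain a delimiter, a quote, or a line break, which matches "quotes around multiline elements only" for a multiline signals field.

```python
import csv
import io

# Hypothetical output rows; the "signals" field may span multiple lines.
rows = [
    ["email", "score", "signals"],
    ["jane@acme.com", "87", "Visited pricing page\nOpened 3 emails"],
]

buf = io.StringIO(newline="")
# QUOTE_MINIMAL quotes a field only when it contains the delimiter,
# a quote character, or a line break.
writer = csv.writer(buf, delimiter=",", quoting=csv.QUOTE_MINIMAL,
                    lineterminator="\n")
writer.writerows(rows)
print(buf.getvalue())
```

Only the multiline signals field ends up wrapped in quotes; plain fields like the email and score are written bare.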