If your data is hosted in Amazon Redshift, you can export it directly to an S3 bucket. This bucket can be hosted either on your side or on ours.

Note: If you're looking to export data for a Segment Warehouse in Redshift, please follow these instructions instead.

Exporting your Redshift data to S3

With Redshift, you can UNLOAD data directly to an S3 bucket. For the file format, please follow the same spec as our flat file import.

Track

Once you've mapped the data into the expected format, you can export it to S3 with a command like this:

UNLOAD ("SELECT event_text, event_timestamp, contact_key, event_attribute_1, event_attribute_2 FROM [your_table]")
TO 's3://madkudu-data-in-[your_org_id]/track/track.csv'
CREDENTIALS 'aws_access_key_id=[your_access_key];aws_secret_access_key=[your_secret_key]'
HEADER
DELIMITER '~'
ADDQUOTES
ESCAPE
MANIFEST
GZIP
ALLOWOVERWRITE
PARALLEL ON
;

Where:

  • The SELECT statement should be modified to match your own data structure.
  • The credentials are either yours (if the bucket is hosted on your side) or a set of credentials we've shared with you (if it's hosted on our side).
  • The rest of the parameters (everything after HEADER) should remain unchanged. We've found these settings tend to cause the fewest corruption issues.
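
If the bucket is hosted on your side and you'd rather not embed access keys in the command, Redshift's UNLOAD also accepts an IAM role attached to your cluster in place of the CREDENTIALS clause. A minimal sketch (the role ARN below is an example, not a real role; the remaining options stay the same as above):

UNLOAD ('SELECT event_text, event_timestamp, contact_key, event_attribute_1, event_attribute_2 FROM [your_table]')
TO 's3://madkudu-data-in-[your_org_id]/track/track.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-unload-role'
-- ...HEADER, DELIMITER, etc. as in the command above
;

This only works for a bucket your role is allowed to write to; if the bucket is hosted on our side, use the credentials we share with you.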

NOTE: with these settings, every time you execute this command the existing files will be overwritten.
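
If your raw event data isn't already stored with these column names, one option is to create a view that maps your schema onto the expected columns, then UNLOAD from that view instead of [your_table]. A minimal sketch, assuming a hypothetical source table called my_schema.raw_events (all names below are examples to adapt to your own schema):

-- Hypothetical mapping view: rename your own columns to the ones the Track export expects
CREATE OR REPLACE VIEW track_export AS
SELECT
    event_name  AS event_text,
    occurred_at AS event_timestamp,
    user_id     AS contact_key,
    plan_name   AS event_attribute_1,
    country     AS event_attribute_2
FROM my_schema.raw_events;

-- The UNLOAD command above can then read from the view:
-- UNLOAD ('SELECT event_text, event_timestamp, contact_key, event_attribute_1, event_attribute_2 FROM track_export') ...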

Identify

UNLOAD ("SELECT contact_key, email, attribute_1, attribute_2 FROM [your_table]")
TO 's3://madkudu-data-in-[your_org_id]/identify/identify.csv'
CREDENTIALS 'aws_access_key_id=[your_access_key];aws_secret_access_key=[your_secret_key]'
HEADER
DELIMITER '~'
ADDQUOTES
ESCAPE
MANIFEST
GZIP
ALLOWOVERWRITE
PARALLEL ON
;

Group

UNLOAD ("SELECT contact_key, account_key, account_attribute_1, account_attribute_1 FROM [your_table]")
TO 's3://madkudu-data-in-[your_org_id]/group/group.csv'
CREDENTIALS 'aws_access_key_id=[your_access_key];aws_secret_access_key=[your_secret_key]'
HEADER
DELIMITER '~'
ADDQUOTES
ESCAPE
MANIFEST
GZIP
ALLOWOVERWRITE
PARALLEL ON
;
