Version: 1.0

Batch transfer

At Fraudio, we specialize in real-time transaction fraud monitoring through our API. This integration allows you to inform us of transactions as they occur, enabling prompt fraud identification, continuous adaptation of our AI to future transactions, and stronger merchant fraud detection.

While we strongly recommend the real-time API integration for its simplicity and effectiveness, we recognize that certain circumstances may necessitate alternative solutions. Factors such as resource constraints or a lack of technical expertise could impede real-time integration.

To accommodate these situations, we offer an alternative method: batch transfer integration. This approach entails uploading files on a scheduled basis rather than transmitting data in real-time via the API. Batch transfer integration offers increased flexibility and may better suit businesses with lower transaction volumes or those that prefer a more gradual integration process.

With this method, you can upload new files daily, which will be automatically imported into our systems. This ensures that our platform can continue to provide timely and efficient fraud detection and prevention.

Comparison: API vs. Batch Transfer​

|                                             | API             | Batch transfer           |
| ------------------------------------------- | --------------- | ------------------------ |
| Large amounts of data at once               | no              | yes                      |
| Programming effort                          | medium          | low                      |
| Encrypted by default                        | yes             | yes                      |
| Transaction fraud detection period          | sub-second      | days                     |
| Merchant fraud detection period             | less than a day | days                     |
| Data quality reporting                      | Automatic       | On Request               |
| Protocol                                    | HTTPS           | S3                       |
| Format                                      | JSON            | Zipped CSV               |
| Authentication                              | API Token       | Key ID + Secret          |
| Maximal no. of scored transactions per day  | no limit        | 10,000,000 (soft limit)  |

We hope this helps you to choose the integration type that best suits your needs.

Uploading Files to an Amazon S3 Bucket​

To share files with Fraudio, you can upload them to our dedicated storage on Amazon S3. We chose this storage solution because it is widely used, offers easy integrations and its data governance complies with security and privacy regulations.

On request, we will share a Key ID and Secret with you that you can use to upload files to a designated folder. There are many S3 clients available for uploading: the command-line interface (CLI), various GUIs and client-side libraries for most programming languages.

Below is a guide on how to use the CLI. Each step is accompanied by shell code snippets to help you integrate quickly. Follow them to start integrating in batch with Fraudio products.

Get Fraudio Credentials​

  • First of all, you will need credentials to authenticate with the AWS CLI. If you have not received credentials yet, request them from Fraudio. Also ask for your CUSTOMER_NAME, which you need in order to view and upload files.

Install AWS CLI

  • Install the AWS CLI by following the official installation instructions for your operating system.

Log into AWS​

aws configure
  • Authenticate with the AWS CLI using the credentials received from Fraudio: run aws configure and enter your Key ID and Secret when prompted.
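
If you prefer to script this step instead of answering the interactive prompts, the same values can be set non-interactively with aws configure set. This is standard AWS CLI behaviour, not a Fraudio-specific requirement; the placeholders below stand for the credentials received from Fraudio.

# non-interactive alternative to the prompts above
aws configure set aws_access_key_id "<YOUR_KEY_ID>"
aws configure set aws_secret_access_key "<YOUR_SECRET>"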

List all files in your bucket​

aws s3 ls "fraudio-customer-batch-data/${CUSTOMER_NAME}/shared/"
  • List all files in your bucket folder using the provided command.

View a file from your bucket​

aws s3api get-object --bucket fraudio-customer-batch-data --key "${CUSTOMER_NAME}/shared/example.csv" /dev/stdout
  • Print the contents of a file in your bucket folder to standard output with the provided command.

Download files​

aws s3 cp "s3://fraudio-customer-batch-data/${CUSTOMER_NAME}/shared/example.csv" ./
  • Download files from your bucket folder with the provided command.

Upload files​

aws s3 cp --acl bucket-owner-full-control ./example.csv "s3://fraudio-customer-batch-data/${CUSTOMER_NAME}/shared/"
  • Upload files to your bucket folder with the provided command. The --acl bucket-owner-full-control flag grants ownership of the uploaded objects to Fraudio, since we carry the responsibility of managing (and expiring) the data in these files.
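
If you need to upload several files at once, the AWS CLI can also copy a whole local folder recursively, and you can list the bucket folder afterwards to confirm the upload. This is a minimal sketch using standard AWS CLI options; the local folder name ./outgoing/ is illustrative.

# upload every file in a local folder, keeping the same ownership ACL
aws s3 cp --recursive --acl bucket-owner-full-control ./outgoing/ "s3://fraudio-customer-batch-data/${CUSTOMER_NAME}/shared/"

# confirm the upload by listing the bucket folder
aws s3 ls "fraudio-customer-batch-data/${CUSTOMER_NAME}/shared/"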

File Format and Structure for Batch Transfer Integration

To facilitate seamless integration with Fraudio, please adhere to the following guidelines for file format, structure, and data schema.

File Format​

Please send your files to Fraudio in the CSV format, conforming to the RFC-4180 specification. For verification, create a sample file (with private data removed) and test it at CSV Lint using default settings. Address any errors or warnings accordingly.

The first line should contain column headers, with subsequent lines containing data. Use a comma , as the field separator, and a newline \n to separate lines.

If free text fields in the CSV contain commas, spaces, or newlines, enclose the value in double quotes. To include a double quote within a value, escape it with another double quote (e.g., "Just say, ""hi""!").
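
For illustration, here is a minimal CSV fragment that follows these rules, written to a file with a shell heredoc. The column names and values are purely illustrative and do not represent the Fraudio schema (see the Data Schema section below).

# write a small example CSV; the quoted heredoc keeps the content verbatim
cat > example.csv <<'EOF'
transactionid,amount,description
1001,25.90,"Just say, ""hi""!"
1002,7.50,no special characters here
EOF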

We kindly ask you to compress files as ZIP files in order to reduce network traffic and storage requirements. This is particularly important for multiple files or files larger than 1MB. For batch integration, always compress CSV files.

Our S3 bucket enforces encryption during transfer and storage, so there is no need to encrypt files or protect them with a password.

Folder and File Structures​

We ask our customers to use the following folder structure.

  • ${CUSTOMER_NAME}/shared - For one-time uploads, such as PoR data dumps, Excel sheets, and manual data backfills. If these terms are unfamiliar, you likely do not need this folder.

  • Subpath ${CUSTOMER_NAME}/sync is intended for data that should be processed automatically at regular intervals. Organize and name files by their category, indicate whether each file is a complete or partial dataset, and add an increasing number that conveys the order (see the example upload sketch after this list). Some examples:

    • File ${CUSTOMER_NAME}/sync/transactions/transactions-part-000001.zip may contain the first batch of transactions. When extracted, it contains the file transactions-part-000001.csv.
    • File ${CUSTOMER_NAME}/sync/chargebacks/chargebacks-part-000005.zip may contain the fifth increment of chargebacks. The previous parts are still valid, and together all parts form a complete collection of chargebacks.
    • File ${CUSTOMER_NAME}/sync/merchant-accounts/merchant-accounts-complete-000002.zip may contain a complete collection of merchant accounts. This dataset effectively replaces any previously uploaded complete dataset of the same category, e.g. merchant-accounts-complete-000001.zip. Note that there is likely a large overlap in the data sent with each complete dump. Unless the files are very small, we recommend using incremental files instead, e.g. merchant-accounts-part-000001.zip.
  • Subpath ${CUSTOMER_NAME}/reports is intended for output from Fraudio, for example scored transactions from the batch integration and reports of merchant-initiated fraud.

    • Files like ${CUSTOMER_NAME}/reports/payment-fraud-detection/2022-01-01_16-52-12_scored-transactions.zip contain the transactions that were received from the sync folder described above and scored by our payment fraud detection models.
    • Files like ${CUSTOMER_NAME}/reports/merchant-initiated-fraud-detection/2022-01-01_16-52-12_scored-merchant.zip contain all potentially fraudulent merchants that had transactions in the latest batch.
  • Subpath ${CUSTOMER_NAME}/failures is intended for error output from Fraudio, which is all data that could not be parsed from the sync folder described above.

    • Files like ${CUSTOMER_NAME}/failures/2022-01-01_16-52-12_failures.zip contain a reference to the location of the original data record as well as an error message explaining why the data could not be parsed.
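
As an end-to-end example, the sketch below zips an incremental transactions file, uploads it to the sync folder for automatic processing, and later fetches a scored report from the reports folder. File names follow the conventions above but are illustrative, the timestamps of report files will differ, and the zip step assumes the zip utility is installed.

# compress the incremental CSV using the part naming convention
zip transactions-part-000001.zip transactions-part-000001.csv

# upload it to the sync folder so it is picked up automatically
aws s3 cp --acl bucket-owner-full-control transactions-part-000001.zip "s3://fraudio-customer-batch-data/${CUSTOMER_NAME}/sync/transactions/"

# once processed, list and download the scored output produced by Fraudio
aws s3 ls "fraudio-customer-batch-data/${CUSTOMER_NAME}/reports/payment-fraud-detection/"
aws s3 cp "s3://fraudio-customer-batch-data/${CUSTOMER_NAME}/reports/payment-fraud-detection/2022-01-01_16-52-12_scored-transactions.zip" ./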

File Categories​

Batch transfer integration accepts the same data categories as the API:

  1. Category transaction: contains acquiring transactions between the merchant and the consumer, including information about the authorization (response code, eci, etc.). This category also includes failed transactions and refunds. Files for this category should go within folder: ${CUSTOMER_NAME}/sync/transactions/.
  2. Category chargeback, providing reason codes for the chargeback and referring to the transaction by the transaction ID. Files for this category should go within folder: ${CUSTOMER_NAME}/sync/chargebacks/.
  3. Category merchant account, which provides more information about each merchant such as registration date and KYC level. You can also include this information with each transaction instead. Files for this category should go within folder: ${CUSTOMER_NAME}/sync/merchant-accounts.
  4. Category account bank transfer is related to a transfer of money from the merchant account into a bank account. We also refer to this as withdrawals, incoming or outgoing bank transfers. Files for this category should go within folder: ${CUSTOMER_NAME}/sync/account-bank-transfers.
  5. Category inter account transfer is related to the merchant transferring funds to other merchant accounts or sub-accounts. Files for this category should go within folder: ${CUSTOMER_NAME}/sync/inter-account-transfers.

Over time, these categories of data tend to grow quite large. We recommend sending data in parts, i.e. only communicate new and changed rows rather than sending the complete dataset each time.
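
Putting the categories and naming conventions together, a sync folder might look like the layout below after a few incremental uploads (part numbers are illustrative):

${CUSTOMER_NAME}/sync/
  transactions/transactions-part-000001.zip
  transactions/transactions-part-000002.zip
  chargebacks/chargebacks-part-000001.zip
  merchant-accounts/merchant-accounts-part-000001.zip
  account-bank-transfers/account-bank-transfers-part-000001.zip
  inter-account-transfers/inter-account-transfers-part-000001.zip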

Data Schema​

Fraudio requires an agreed-upon data schema for automatic file processing. A schema comprises field names, data types, and (for certain fields) allowed values. Batch transfer schemas are identical to those used by our API endpoints. Refer to the API endpoints section to understand how to map your data to Fraudio schemas.

If your data differs from our schema, we can collaborate on field mapping. Data types are flexible and can also be mapped by Fraudio.

Please ensure that uploaded files maintain a consistent schema over time, allowing for a stable integration and ensuring that we can provide an optimal level of service for you.