Skedulo API Usage Best Practices

Best practices to ensure efficient and performant interactions with the Skedulo API

Handling large data sets

Implement pagination or chunking. When working with large data sets, avoid making single, large requests. Instead:

  • Split data retrieval into smaller, independent requests executed sequentially.
  • Use logical batch sizes. A range of 100-500 records per request is generally recommended, but adjust based on the payload size of your requests.
  • Experimentation may be needed to determine the optimal batch size. Monitor the size of your API request payloads and adjust accordingly, as in the sketch below.
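
A minimal TypeScript sketch of this approach follows, assuming Node 18+ (for the built-in fetch), a bearer-token header, and a placeholder endpoint URL; the batch size of 200 is an illustrative value within the 100-500 range and should be tuned to your payload sizes.

```typescript
// Sketch of chunked, sequential uploads. The endpoint path, token handling,
// and batch size are assumptions; adapt them to the Skedulo endpoint you call.
const API_URL = "https://api.skedulo.com/your/endpoint"; // placeholder path
const BATCH_SIZE = 200; // illustrative value within the suggested 100-500 range

function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

async function sendInBatches(records: object[], apiToken: string): Promise<void> {
  for (const batch of chunk(records, BATCH_SIZE)) {
    // Each batch is a smaller, independent request, executed sequentially.
    const response = await fetch(API_URL, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(batch),
    });
    if (!response.ok) {
      throw new Error(`Batch failed with status ${response.status}`);
    }
  }
}
```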

Ensuring responsiveness and reliability

Monitor response times and handle partial failures:

  • Continuously monitor API response times. If responses consistently approach 30 seconds, consider breaking requests into even smaller chunks.
  • Implement robust error handling to gracefully manage partial failures, and add retry mechanisms (for example, with exponential backoff) where appropriate; see the sketch below.
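
The sketch below pairs per-request timing with a simple exponential-backoff retry. The 25-second warning threshold, the retry count, and the choice to retry only on 5xx responses and network errors are assumptions to adapt to your own error-handling policy.

```typescript
// Illustrative sketch: time each request, warn when latency nears the
// 30-second ceiling mentioned above, and retry transient failures.
const WARN_THRESHOLD_MS = 25_000; // assumed threshold below the 30 s ceiling
const MAX_RETRIES = 3;

async function requestWithRetry(url: string, init: RequestInit): Promise<Response> {
  for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
    const started = Date.now();
    try {
      const response = await fetch(url, init);
      const elapsed = Date.now() - started;
      if (elapsed > WARN_THRESHOLD_MS) {
        console.warn(`Slow response (${elapsed} ms): consider smaller chunks`);
      }
      // Retry server-side errors; surface client errors immediately.
      if (response.status >= 500 && attempt < MAX_RETRIES) {
        await new Promise((r) => setTimeout(r, 2 ** attempt * 1_000));
        continue;
      }
      return response;
    } catch (err) {
      // Network-level failure: back off and retry if attempts remain.
      if (attempt === MAX_RETRIES) throw err;
      await new Promise((r) => setTimeout(r, 2 ** attempt * 1_000));
    }
  }
  throw new Error("retry loop exited unexpectedly");
}
```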

Managing request volumes

Rate limiting and request throttling:

  • Implement rate limiting to prevent overwhelming the Skedulo platform. A rate of 10-50 requests per second (RPS) is a reasonable starting point, but adjust it based on your specific API usage patterns and any rate limits enforced by Skedulo.
  • If requests are processed sequentially with a deliberate pause (sleep) between them, explicit rate limiting might be less critical, but it is still worth considering for overall system stability; a simple throttling sketch follows this list.
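
One lightweight way to throttle is to space outgoing requests so they never exceed a target rate, as in the sketch below. The 20 RPS target is an illustrative value within the 10-50 RPS range mentioned above.

```typescript
// Simple throttle sketch: enforce a minimum interval between requests.
// The target rate is an assumption; tune it to your usage patterns and
// any limits enforced by Skedulo.
const TARGET_RPS = 20;
const MIN_INTERVAL_MS = 1_000 / TARGET_RPS;

let lastRequestAt = 0;

async function throttledFetch(url: string, init: RequestInit): Promise<Response> {
  const now = Date.now();
  const wait = Math.max(0, lastRequestAt + MIN_INTERVAL_MS - now);
  if (wait > 0) {
    // Sleep just long enough to stay under the target rate.
    await new Promise((r) => setTimeout(r, wait));
  }
  lastRequestAt = Date.now();
  return fetch(url, init);
}
```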

Optimizing data transmission

Validate and optimize data before sending:

  • Before sending data to the API, validate it thoroughly.
  • Strip unnecessary fields, such as non-required fields whose values are null or empty strings, to reduce payload size (see the sketch below).
  • Sanitize input data to prevent potential issues and ensure data integrity. Reducing payload size improves processing time and overall API performance.
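
The helper below shows one way to trim a payload by dropping null, undefined, and empty-string values before sending. Whether a given field is safe to omit depends on the endpoint's schema, so treat this filter as a starting point rather than a universal rule.

```typescript
// Sketch of payload trimming: drop keys whose values are null, undefined,
// or empty strings. Which fields are safe to omit is endpoint-specific,
// so the filter condition here is an assumption.
function stripEmptyFields(record: Record<string, unknown>): Record<string, unknown> {
  const cleaned: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(record)) {
    if (value === null || value === undefined || value === "") {
      continue; // skip non-required empty values to reduce payload size
    }
    cleaned[key] = value;
  }
  return cleaned;
}

// Example: { Name: "Job 1", Notes: "", Duration: null } becomes { Name: "Job 1" }
```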