We're excited to announce that our Salesforce and HubSpot syncs are now bidirectional! Changes in Salesforce or HubSpot sync to your database in just a few seconds. And with bidirectional sync, when you make changes in your database, Sequin pushes those changes to the API.
At the heart of our two-way sync is column mapping. When setting up your sync, you map Salesforce and HubSpot objects and fields to tables and columns in your Postgres database. Sequin handles the translation between changes in the API and changes in your database. This mapping layer also makes your integration more robust. For example, if a custom field is renamed API-side, it won't break the code you've written database-side.
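To make the mapping concrete, here's a hedged sketch of what a mapped table might look like in Postgres. The field names on both sides are illustrative, not Sequin's actual defaults:

```sql
-- Hypothetical example: a Salesforce Lead object mapped into a Postgres table.
-- The Salesforce field names in the comments and the column names are
-- illustrative, not Sequin's actual schema.
create table lead (
  id text primary key,          -- Salesforce Lead.Id
  email text,                   -- Salesforce Lead.Email
  num_of_employees integer,     -- Salesforce Lead.NumberOfEmployees
  account_coordinator text      -- a custom field, e.g. Account_Coordinator__c
);
```

If the custom field behind `account_coordinator` is renamed API-side, the mapping absorbs the change: your queries against `account_coordinator` keep working.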
Working with APIs through your database is easy and intuitive. Because the interface is a database, it removes the cognitive overhead that comes with an HTTP API, including:
- latency (should this call be async?)
- availability (what happens if the API is down?)
- rate limits and batching requests
- caching (to mitigate one of the problems above)
For reading your API data, nothing beats SQL and your ORM. That's why most of the API integrations we see eventually become syncing problems: teams sync data from APIs into their database so they can query it without limits. When our customers read their API data, it looks like this:
select * from lead where num_of_employees between 50 and 500;
Now, we're excited to take this paradigm even further. Before making an API write, you typically have to perform a few reads to filter down to the set of records you want to apply changes to. But if all your API data is in your database, you can issue writes with ease:
update lead set account_coordinator = 'usr_8jU7hdy' where num_of_employees between 50 and 500;
Using your database doesn't make sense for every API write. Some API operations are remote procedure calls (RPCs) that aren't tied to a create, read, update, or delete (CRUD) operation, like voiding a Stripe invoice. For those operations, making an HTTP call still makes sense.
But most of the operations a company performs against an API are CRUD, and therefore a great fit for a database interface.
How it works
Just write to your database
We use Postgres trigger functions to monitor your Sequin-synced tables for changes. All creates, updates, and deletes create a log entry in a special table, _sync_write_log. Sequin's sync workers use this table as a job queue: each change is translated into an HTTP request and sent to the upstream API.
Our bidirectional sync works with any Postgres database and any ORM.
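As a rough sketch of the pattern, here's how an AFTER trigger can append every change to a write-log table. The table shape, function, and trigger names below are illustrative, not Sequin's actual internals:

```sql
-- Sketch only: a write-log table and trigger in the style described above.
-- Schema and names are hypothetical, not Sequin's real implementation.
create table _sync_write_log (
  id bigserial primary key,
  table_name text not null,
  operation text not null,       -- 'INSERT' | 'UPDATE' | 'DELETE'
  row_data jsonb,
  created_at timestamptz not null default now()
);

create or replace function log_sync_write() returns trigger as $$
begin
  -- NEW is null on DELETE and OLD is null on INSERT, so coalesce
  -- captures whichever row image exists for this operation.
  insert into _sync_write_log (table_name, operation, row_data)
  values (tg_table_name, tg_op, to_jsonb(coalesce(new, old)));
  return coalesce(new, old);
end;
$$ language plpgsql;

create trigger lead_sync_write
after insert or update or delete on lead
for each row execute function log_sync_write();
```

Sync workers can then poll `_sync_write_log` like a queue, marking rows as they're delivered upstream.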
Persistent errors and automatic retries
We'll retry requests if we receive a transient error, like a 500. If we receive a 400 or another error indicating that the API is not going to accept the payload, we'll log that error in your database.
In addition to the error log in your database, we'll soon support Slack alerts and integrations with error monitoring tools like Sentry. This means you won't miss an API error.
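Because the error log lives in your database, you can query it like anything else. Here's a hypothetical query, assuming errors are recorded alongside each queued write; the table and column names are illustrative:

```sql
-- Hypothetical: inspect recent permanent failures. The table and column
-- names here are illustrative, not Sequin's actual error-log schema.
select table_name, status_code, error_message, created_at
from _sync_write_log
where status_code >= 400
order by created_at desc
limit 20;
```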
Optimized against your rate limit
When you set up your sync, you specify how much of your API rate limit you want Sequin to use, and Sequin optimizes its requests to make the most of that budget. When syncing data from the API to your database, Sequin allocates more requests to busier endpoints (endpoints with more frequent changes). When syncing data from your database to the API, Sequin batches and enqueues writes to use fewer requests.
You can make a bunch of changes to your database without needing to worry about exploding your rate limit. Sequin will always stay below the ceiling that you set.
Writes made through your database are asynchronous. While your write may pass Postgres' validation (which verifies that the fields are the right type and size), that doesn't mean the upstream API will accept it. For example, say a user updates their email in your system. Your email validator accepts the change and you write it to your database. But when Sequin sends the change to Salesforce, Salesforce's email validator, which is more stringent, rejects it.
This is the trade-off with asynchronous database writes: your code is not forced to deal with the error immediately. It matters most when a user is involved, and the best way to resolve the error is to surface the issue to the user so they can correct it.
We're developing a feature that will give you the ability to await a change you made in the database. After making the change, you'll invoke a function in Postgres, await(). That function will return the result of the write request to the API. This will allow you to keep the Postgres interface for your writes, letting Sequin deal with the HTTP translation layer. You'll have a standard interface for working with the result, which is just a Postgres row. And you can take any validation error returned and propagate that up to the user.
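The planned flow might look something like this. await() isn't released yet, so its exact signature and return shape are speculative:

```sql
-- Speculative sketch of the planned await() flow; the function's
-- signature and result columns are not final.
update lead set email = 'pat@example.com' where id = 'lead_123';

-- Blocks until Sequin has sent the change upstream, then returns the
-- API's response as a row (e.g. a status code and any error message).
select * from await();
```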
Knowing if the API accepted your write and handling situations where it didn't is critical to any integration. We see a path to making this experience great with our bidirectional sync.
We're excited to see what you build with our bidirectional sync! If you have any questions or want to discuss your use case, don't hesitate to reach out.