Connection to MongoDB Interrupted

MongoDB connection

Written by Praise Magidi
Updated this week

Message

Connection to xxxxxx.mongodb.net:27017 interrupted due to server monitor timeout

Cause

Synatic is backed by multiple replicated databases, where one acts as the primary and the others as secondaries. This improves system stability and reliability in case one of the servers goes down. If the primary server goes down, one of the secondaries is promoted to become the new primary. When this happens, there is a temporary loss of connectivity to the database while existing connections to the old primary are terminated, which leads to the error above.

A server might go down for a variety of reasons, including but not limited to:

  • Running out of memory.

  • Maintenance/Infrastructure upgrades.

  • Network issues that prevent the other servers in the replica set from communicating with it.

This error is very rare on the platform, but when it does happen, there are ways to mitigate its impact.
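
For context, the sketch below shows roughly how a failover looks from the MongoDB driver's side, using the official Node.js driver. Synatic manages these connections internally, so this is purely illustrative; the hostname, credentials, database, and collection names are placeholders. It shows the kind of connection settings that let short, retryable operations survive a primary election.

```
// Illustrative sketch only: Synatic manages its MongoDB connections for you.
// The hostname, credentials, and names below are placeholders.
import { MongoClient } from "mongodb";

const client = new MongoClient(
  "mongodb+srv://user:password@xxxxxx.mongodb.net/?retryWrites=true&retryReads=true",
  {
    // How long the driver waits for a new primary to be elected before
    // failing an operation outright.
    serverSelectionTimeoutMS: 30_000,
  }
);

async function shortRead() {
  await client.connect(); // no-op if the client is already connected

  // A short, retryable read: if the primary steps down mid-operation, the
  // driver retries it once against the newly elected primary, so the
  // failover is usually invisible to the caller.
  return client
    .db("exampleDb")
    .collection("exampleCollection")
    .findOne({ status: "pending" });
}
```

Long-lived connections that were already streaming from the old primary cannot be transparently retried in the same way, which is why the streaming scenarios described below are the ones most affected.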

Resolution

This is an error at the infrastructure level, and while end users cannot directly prevent it, there are ways to minimize its impact.

  • Avoid long-running queries on buffers - This error occurs when an existing connection to the database is terminated abruptly. Quick queries are typically unaffected because, once a primary goes down, new connections are established to the new primary. A long-running query is more likely to be terminated unexpectedly because it holds its connection open for longer.

  • Batch buffer records - If you are streaming a large number of records out of a buffer into a flow, the time you stay connected to the buffer (and thus MongoDB) depends directly on how long your flow takes to process those records. While the rest of the flow executes, your connection to MongoDB remains open so that it can stream more records out as necessary. To minimize the time you are connected to MongoDB, batch the records using a `Batch Records` step (see the sketch after this list). This retrieves all records before processing them, so the time you are connected to MongoDB is unaffected by how long the rest of the flow takes to execute. Note that batching significantly increases memory usage for a large number of records, so you will have to weigh the risk of the connection being terminated (rare) against the higher memory usage.

  • Restart the flow - If it is safe to do so, restart the flow so that new connections are established to the new primary.
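
To make the streaming-versus-batching trade-off above concrete, here is a minimal sketch using the official Node.js MongoDB driver. In Synatic this is handled by the `Batch Records` step rather than hand-written code, and the collection name and processing function are assumed placeholders.

```
// Illustrative sketch only: in Synatic this trade-off is handled by the
// Batch Records step, not hand-written code. Names below are placeholders.
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb+srv://user:password@xxxxxx.mongodb.net");
const buffer = client.db("exampleDb").collection("bufferRecords");

// Streaming: the cursor (and the connection behind it) stays open for the
// whole run, so slow per-record processing widens the window in which a
// primary failover can interrupt it.
async function streamAndProcess(process: (doc: unknown) => Promise<void>) {
  await client.connect();
  for await (const doc of buffer.find({})) {
    await process(doc); // the connection is held open while this runs
  }
}

// Batching: pull every record into memory first, then process. Connection
// time no longer depends on processing time, at the cost of holding all
// records in memory at once.
async function batchAndProcess(process: (doc: unknown) => Promise<void>) {
  await client.connect();
  const docs = await buffer.find({}).toArray();
  for (const doc of docs) {
    await process(doc);
  }
}
```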

Error Detail

```
Connection to xxxxxx.mongodb.net:27017 interrupted due to server monitor timeout
```