# ask-questions
p
Hi GB! We’ve been running GB on K8s and recently we’ve started getting errors along the lines of `Error connecting to the GrowthBook API at https://api.growthbook.[...].com` with the `Failed to fetch` error. When we try to manually request the API, we get the following:
❯ curl https://api.growthbook.[...].com
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
Looking at the logs, it seems like the main application pod encounters the following error:
yarn run v1.22.19
$ wsrun -p '*-end' -m start
back-end
 | $ node dist/server.js
front-end
 | $ next start
 | ready - started server on 0.0.0.0:3000, url: http://localhost:3000
back-end
 |   Back-end is running at http://localhost:3100 in production mode
 |   Press CTRL-C to stop
 |
 | MongooseServerSelectionError: connection timed out
 |     at NativeConnection.Connection.openUri (/usr/local/src/app/node_modules/mongoose/lib/connection.js:847:32)
 |     at /usr/local/src/app/node_modules/mongoose/lib/index.js:351:10
 |     at /usr/local/src/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:32:5
 |     at Promise._execute (/usr/local/src/app/node_modules/bluebird/js/release/debuggability.js:384:9)
 |     at Promise._resolveFromExecutor (/usr/local/src/app/node_modules/bluebird/js/release/promise.js:518:18)
 |     at new Promise (/usr/local/src/app/node_modules/bluebird/js/release/promise.js:103:10)
 |     at promiseOrCallback (/usr/local/src/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:10)
 |     at Mongoose._promiseOrCallback (/usr/local/src/app/node_modules/mongoose/lib/index.js:1149:10)
 |     at Mongoose.connect (/usr/local/src/app/node_modules/mongoose/lib/index.js:350:20)
 |     at /usr/local/src/app/packages/back-end/dist/init/mongo.js:26:41
 |     at Generator.next (<anonymous>)
 |     at /usr/local/src/app/packages/back-end/dist/init/mongo.js:8:71
 |     at new Promise (<anonymous>)
 |     at __awaiter (/usr/local/src/app/packages/back-end/dist/init/mongo.js:4:12)
 |     at exports.default (/usr/local/src/app/packages/back-end/dist/init/mongo.js:19:25)
 |     at /usr/local/src/app/packages/back-end/dist/app.js:106:43
 |     at Generator.next (<anonymous>)
 |     at /usr/local/src/app/packages/back-end/dist/app.js:31:71
 |     at new Promise (<anonymous>)
 |     at __awaiter (/usr/local/src/app/packages/back-end/dist/app.js:27:12)
 |     at /usr/local/src/app/packages/back-end/dist/app.js:105:34
 |     at /usr/local/src/app/packages/back-end/dist/app.js:108:16 {
 |   reason: TopologyDescription {
 |     type: 'Single',
 |     setName: null,
 |     maxSetVersion: null,
 |     maxElectionId: null,
 |     servers: Map(1) { 'growthbook-mongodb:27017' => [ServerDescription] },
 |     stale: false,
 |     compatible: true,
 |     compatibilityError: null,
 |     logicalSessionTimeoutMinutes: null,
 |     heartbeatFrequencyMS: 10000,
 |     localThresholdMS: 15,
 |     commonWireVersion: null
 |   }
 | }
 | Error: MongoDB connection error.
 |     at /usr/local/src/app/packages/back-end/dist/init/mongo.js:34:15
 |     at Generator.throw (<anonymous>)
 |     at rejected (/usr/local/src/app/packages/back-end/dist/init/mongo.js:6:65)
 | error Command failed with exit code 1.
 | info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
 | `yarn start` failed with exit code 1
The fix that has worked thus far is restarting the pod hosting the front-end/back-end. However, we were wondering if there is a way for the system to fail over. Even though `yarn start` failed with exit code 1, the application pod was still reported as `Running` on K8s. Since both the back-end and front-end are packaged into one container, what is the best way to set up liveness and readiness probes on the application so that they check both the front-end and the back-end? Many thanks in advance!
f
Are you running MongoDB in a replica set with multiple members? That's the recommended approach for production to avoid server selection errors. As for pod health checks, there are a couple of options:
1. Run the front-end and back-end as separate pods. Use the same image, but with different commands: `yarn workspace front-end start` and `yarn workspace back-end start`. That way, if the server dies, the whole pod should stop.
2. The back-end has a `/healthcheck` endpoint. For the front-end, you can use the `/api/init` endpoint.
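For reference, here is a minimal sketch of what that split could look like as two Kubernetes Deployments built from the same image, wiring the probes to the endpoints mentioned above. The Deployment and container names, image tag, probe timings, and the replica-set `MONGODB_URI` value are illustrative assumptions; the ports (3000 for the front-end, 3100 for the back-end) come from the startup log earlier in the thread, and the `/healthcheck` and `/api/init` paths come from the reply above.

```yaml
# Sketch only: two Deployments from the same GrowthBook image, one per service.
# Names, image tag, probe timings, and the MONGODB_URI hosts are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: growthbook-back-end            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels: { app: growthbook-back-end }
  template:
    metadata:
      labels: { app: growthbook-back-end }
    spec:
      containers:
        - name: back-end
          image: growthbook/growthbook:latest          # assumed image; pin a real tag
          # Overrides the image's default start command; verify against your image's entrypoint.
          command: ["yarn", "workspace", "back-end", "start"]
          ports:
            - containerPort: 3100
          env:
            - name: MONGODB_URI
              # Hypothetical replica-set URI with multiple members, per the
              # replica-set recommendation above; adjust hosts/db to your setup.
              value: "mongodb://growthbook-mongodb-0:27017,growthbook-mongodb-1:27017/growthbook?replicaSet=rs0"
          livenessProbe:
            httpGet: { path: /healthcheck, port: 3100 }
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet: { path: /healthcheck, port: 3100 }
            initialDelaySeconds: 10
            periodSeconds: 10
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: growthbook-front-end           # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels: { app: growthbook-front-end }
  template:
    metadata:
      labels: { app: growthbook-front-end }
    spec:
      containers:
        - name: front-end
          image: growthbook/growthbook:latest          # assumed image; pin a real tag
          command: ["yarn", "workspace", "front-end", "start"]
          ports:
            - containerPort: 3000
          livenessProbe:
            httpGet: { path: /api/init, port: 3000 }
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet: { path: /api/init, port: 3000 }
            initialDelaySeconds: 10
            periodSeconds: 10
```

With a layout like this, a crashed `yarn start` in either service takes down only its own pod, and the probes let Kubernetes restart it or pull it out of the Service until it is healthy again.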
p
Currently there is only a single pod for MongoDB and a single pod for the main application. Thanks for the suggestions! Will try splitting up our deployments into separate front-end and back-end pods with individual health checks.