OptimalBits / bull · Issues · #2530
Closed
Issue created Jan 26, 2023 by Oliver Weng (@Oliverweng)

read ECONNRESET error occasionally

Starting a few weeks ago, my application began to generate these logs occasionally (asyncTechQUploadQueue is the queue in my code that has the error handler bound to it):

Error: read ECONNRESET
    at TCP.onStreamRead (node:internal/stream_base_commons:217:20)

Error: asyncTechQUploadQueue job error occured
    at Queue.<anonymous> (/app/bullJobs/worker.js:72:23)
    at Queue.emit (node:events:525:35)
    at EventEmitter.emit (node:events:513:28)
    at EventEmitter.silentEmit (/app/node_modules/ioredis/built/Redis.js:460:30)
    at Socket.<anonymous> (/app/node_modules/ioredis/built/redis/event_handler.js:189:14)
    at Object.onceWrapper (node:events:628:26)
    at Socket.emit (node:events:525:35)
    at emitErrorNT (node:internal/streams/destroy:151:8)
    at emitErrorCloseNT (node:internal/streams/destroy:116:3)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)
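
For context, here is roughly how that handler is bound in worker.js. This is a sketch, not the actual code: the connection string, env var, and log message are assumptions based on the stack trace above.

```js
// Hypothetical reconstruction of the relevant part of /app/bullJobs/worker.js.
const Queue = require('bull');

// Assumed connection string; the real setup may differ.
const asyncTechQUploadQueue = new Queue('asyncTechQUpload', process.env.REDIS_URL);

// Bull re-emits ioredis connection errors on the queue's 'error' event,
// which is where the stack trace above originates.
asyncTechQUploadQueue.on('error', (err) => {
  console.error('asyncTechQUploadQueue job error occured', err);
});
```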

It impacts the application's availability a bit.

My diagnosis is that the Redis connection gets reset for some reason, then ioredis auto-reconnects and resumes the pending jobs, but during that window the application is in a somewhat inconsistent state.
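
To confirm that reconnect cycle, one option is to log the ioredis connection lifecycle events. This is only a sketch; it assumes the Bull version in use exposes its underlying ioredis instance as queue.client, which may vary.

```js
// Sketch: log connection lifecycle events to see when the reset and
// reconnect happen relative to the job errors.
const client = asyncTechQUploadQueue.client; // assumed to be the ioredis instance

['connect', 'ready', 'close', 'reconnecting', 'end'].forEach((event) => {
  client.on(event, () => console.log(`redis connection event: ${event}`));
});
```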

I'm wondering how to avoid the read ECONNRESET errors, and why they suddenly started happening.

I'm using GCP Cloud Run to host the Bull service, and a redislabs.com instance for Redis.
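
Managed Redis providers sometimes drop connections that sit idle, so one mitigation to try is tuning the connection options that Bull passes through to ioredis. The values below are assumptions for illustration, not a confirmed fix, and the env var names are placeholders.

```js
const Queue = require('bull');

// Hypothetical connection settings passed through to ioredis.
const asyncTechQUploadQueue = new Queue('asyncTechQUpload', {
  redis: {
    host: process.env.REDIS_HOST,
    port: Number(process.env.REDIS_PORT),
    password: process.env.REDIS_PASSWORD,
    keepAlive: 30000,       // TCP keepalive so idle sockets aren't silently dropped
    connectTimeout: 10000,  // fail fast if a reconnect attempt hangs
    retryStrategy: (times) => Math.min(times * 500, 5000), // back off between reconnects
  },
});
```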

I appreciate any insights! Thanks!

cc @manast
