This article is for people who try to read the MongoDB source code. I made a lot of memos while reading a large part of the source, and this is a summary of them.
The thread name appears in each log line:
Tue Feb 5 19:15:33.544 [rsBackgroundSync] replSet syncing to: 192.168.159.134:27017
The [rsBackgroundSync] part is the thread name. The name changes with the phase the thread is in (a minimal sketch of the mechanism follows the list):
- (noname) (https://github.com/mongodb/mongo/blob/r2.3.2/src/mongo/db/db.cpp#L740): the name at the very beginning.
- initandlisten (https://github.com/mongodb/mongo/blob/r2.3.2/src/mongo/db/db.cpp#L568): the initial phase of mongod. See the mongod startup sequence diagram.
- conn (class MyMessageHandler, https://github.com/mongodb/mongo/blob/r2.3.2/src/mongo/util/net/listen.cpp#L208): the service phase of mongod; the server socket thread.
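Mongod keeps a per-thread name that the logger prepends to every line. Here is a minimal sketch of the idea, assuming a thread-local name and hypothetical setThreadName()/log() helpers (not mongod's real ones):

```cpp
// Sketch only: a thread-local name the logger prefixes to each line.
// A thread renames itself whenever it enters a new phase.
#include <cstdio>
#include <string>
#include <thread>

thread_local std::string g_threadName = "(noname)";   // before any phase

void setThreadName(const std::string& name) { g_threadName = name; }

void log(const std::string& msg) {
    std::printf("[%s] %s\n", g_threadName.c_str(), msg.c_str());
}

int main() {
    setThreadName("initandlisten");        // startup phase
    log("MongoDB starting");

    std::thread t([] {
        setThreadName("conn1");            // per-connection service phase
        log("end connection");
    });
    t.join();
}
```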
The unique signal-handling thread in this process. Handling signals on one dedicated thread is a common pattern.
Calls msync() to flush the MMAPed data files.
- interval: [--syncdelay]*1000 ms (60*1000 is the default; 0 means never)
You can change the "syncdelay" parameter online.
But I think this msync() interval is too long to matter: the kernel flushes dirty pages more quickly on its own anyway, so this thread can be useless!
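Here is a minimal sketch of that flush loop, assuming the data files are plain mmap() regions (the names are mine, not the real DataFileSync class):

```cpp
// Sketch: periodically msync() every memory-mapped data file.
#include <sys/mman.h>
#include <chrono>
#include <cstddef>
#include <thread>
#include <vector>

struct MappedFile { void* addr; std::size_t len; };

void dataFileSync(const std::vector<MappedFile>& files, int syncdelaySecs) {
    if (syncdelaySecs == 0) return;          // 0 means: never flush from here
    for (;;) {
        std::this_thread::sleep_for(std::chrono::seconds(syncdelaySecs));
        for (const auto& f : files)
            msync(f.addr, f.len, MS_ASYNC);  // queue dirty pages for write-out
    }
}
```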
Write to JOURNAL and DATAFILE (group commit feature)
- interval: (journalCommitInterval / 3) + 1 ms
This thread sleeps for about a third of journalCommitInterval so it can check the limit on uncommitted bytes often enough. This is why "--journalCommitInterval" ranges from 2 to 300. I had wondered why it starts from 2 until I learned this. (^^
The commit itself has three steps (a rough sketch follows the list):
- Write to the journal
- Notify the commit to getLastError / awaitCommit() with the "j" option (db/dbcommands.cpp)
- Write to the data files
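Putting those pieces together, a rough sketch of the loop as I read it; every helper below is a stub standing in for the real journaling code:

```cpp
// Sketch of the group-commit loop: wake every commitIntervalMs/3 + 1 ms,
// commit early when too many bytes are uncommitted, else on the interval.
#include <chrono>
#include <cstddef>
#include <thread>

std::size_t uncommittedBytes() { return 0; }  // stub: bytes not yet journaled
void writeToJournal() {}                      // stub: make the batch durable
void notifyCommitted() {}                     // stub: wake "j" option waiters
void writeToDataFiles() {}                    // stub: apply batch to datafiles

void durThread(int commitIntervalMs, std::size_t uncommittedLimit) {
    int sleptMs = 0;
    for (;;) {
        int step = commitIntervalMs / 3 + 1;
        std::this_thread::sleep_for(std::chrono::milliseconds(step));
        sleptMs += step;
        if (sleptMs < commitIntervalMs && uncommittedBytes() < uncommittedLimit)
            continue;                         // nothing urgent yet
        writeToJournal();                     // 1. write to the journal
        notifyCommitted();                    // 2. notify getLastError/awaitCommit
        writeToDataFiles();                   // 3. write to the data files
        sleptMs = 0;
    }
}
```

Because commits are grouped like this, a writer waiting with the "j" option blocks for at most about one commit interval, not for one disk flush per write.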
indexRebuilder : (since 2.4)
Tries to resume index builds that were interrupted, at startup.
- dies after its work
At startup, mongod may detect an index whose build crashed partway through. This thread then retries building the index (it also obeys the "--noIndexBuildRetry" option).
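A rough sketch of that startup pass, assuming unfinished builds are flagged in the index metadata (all names are mine):

```cpp
// Sketch: retry index builds that crashed halfway, unless the user said no.
#include <string>
#include <vector>

struct IndexSpec { std::string ns, name; bool buildInProgress = false; };

void buildIndex(const IndexSpec&) { /* rebuild the index here */ }

void indexRebuilder(const std::vector<IndexSpec>& specs, bool noIndexBuildRetry) {
    if (noIndexBuildRetry) return;           // --noIndexBuildRetry opts out
    for (const auto& spec : specs)
        if (spec.buildInProgress)            // crashed halfway last time
            buildIndex(spec);                // retry the interrupted build
}                                            // the thread dies after its work
```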
A logging thread?
cpu: elapsed: 4000 writelock: 0%
This thread does not seem to do anything important.
Reports warnings and reclaims stale cursors (a sketch of the sweep follows the list).
Outputs a warning if the number of open cursors exceeds 100000:
warning number of open cursors is very large: ??
Reclaims timed-out cursors with:
killing old cursor [id] [ns] idle: ??ms
- cursor timeout: 600000 ms
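A sketch of what one sweep could look like, using the thresholds quoted above (the struct and function names are mine):

```cpp
// Sketch: warn on very many open cursors, kill cursors idle past the timeout.
#include <cstdio>
#include <string>
#include <vector>

struct Cursor { long long id; std::string ns; long long idleMs; };

void sweepCursors(std::vector<Cursor>& cursors, long long timeoutMs = 600000) {
    if (cursors.size() > 100000)
        std::printf("warning number of open cursors is very large: %zu\n",
                    cursors.size());
    for (auto it = cursors.begin(); it != cursors.end();) {
        if (it->idleMs > timeoutMs) {
            std::printf("killing old cursor %lld %s idle: %lldms\n",
                        it->id, it->ns.c_str(), it->idleMs);
            it = cursors.erase(it);          // reclaim the timed-out cursor
        } else {
            ++it;
        }
    }
}
```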
Runs the regular housekeeping tasks (a sketch of the runner follows the list).
The tasks below also run in mongos and the mongo client:
- Cleaner (writeback query cleaner)
- DBConnectionPool (stale connection cleaner)
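A minimal sketch of such a runner, assuming the tasks are plain callbacks registered up front:

```cpp
// Sketch: one background thread runs every registered task at a fixed period.
#include <chrono>
#include <functional>
#include <thread>
#include <vector>

class PeriodicTaskRunner {
public:
    void add(std::function<void()> task) { _tasks.push_back(std::move(task)); }
    void run(std::chrono::seconds period) {
        for (;;) {
            std::this_thread::sleep_for(period);
            for (auto& t : _tasks) t();   // e.g. writeback cleaner, pool cleaner
        }
    }
private:
    std::vector<std::function<void()>> _tasks;
};
```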
TTLMonitor : (since 2.2)
Removes expired documents.
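A sketch of one TTL pass over an in-memory stand-in for a collection; in reality the thread walks the TTL indexes and issues a remove per index, with expireAfterSeconds taken from the index spec:

```cpp
// Sketch: delete docs whose timestamp is older than now - expireAfterSeconds.
#include <algorithm>
#include <ctime>
#include <vector>

struct Doc { std::time_t createdAt; /* ... */ };

void ttlPass(std::vector<Doc>& coll, int expireAfterSeconds) {
    std::time_t cutoff = std::time(nullptr) - expireAfterSeconds;
    coll.erase(std::remove_if(coll.begin(), coll.end(),
                              [&](const Doc& d) { return d.createdAt < cutoff; }),
               coll.end());
}
```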
Merely starts the threads required for the replica set.
- dies after its work
Sends heartbeat messages to the other mongod instances (a rough sketch follows the list):
- sends a heartbeat request (and gets the response)
- sends an "update heartbeat" message to rsMgr
- sends a "check new state" message to rsMgr
rsMgr on task::Server
An async messaging framework.
- waits on a mutex + condition variable
It runs lambda functions (messages) that are pushed by other threads.
It seems to be used mainly by rsHealthPoll.
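A minimal sketch of that pattern, assuming the messages are plain closures guarded by a mutex and a condition variable:

```cpp
// Sketch: producers push lambdas; one server thread cond-waits and runs them.
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>

class TaskServer {
public:
    void send(std::function<void()> msg) {      // called by e.g. rsHealthPoll
        { std::lock_guard<std::mutex> lk(_m); _q.push_back(std::move(msg)); }
        _cv.notify_one();
    }
    void run() {                                // the rsMgr thread body
        for (;;) {
            std::unique_lock<std::mutex> lk(_m);
            _cv.wait(lk, [&] { return !_q.empty(); });   // mutex cond wait
            auto msg = std::move(_q.front());
            _q.pop_front();
            lk.unlock();
            msg();                              // run the pushed lambda
        }
    }
private:
    std::mutex _m;
    std::condition_variable _cv;
    std::deque<std::function<void()>> _q;
};
```

One queue with a single consumer thread serializes all state changes, so the pollers never have to lock the replica-set state themselves.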
The following threads seem to work together, but they are too complicated for me to understand precisely...
First of all, the sync task is for slaves, so it does nothing while primary (a simplified sketch follows the list).
- Does the initial sync the first time.
- Enters the loop:
- Syncs data from the OpQueue, the internal oplog queue.
- Pops oplogs from the OpQueue up to replBatchLimitBytes, while honoring --slaveDelay.
- Applies them to itself (multi-apply).
- Writes lastOp.
- Notifies rsBackgroundSync and rsSyncNotifier.
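A heavily simplified sketch of that loop (my naming throughout; the real code blocks on the queue instead of spinning):

```cpp
// Sketch: pop a byte-bounded batch, honor --slaveDelay, apply, record, notify.
#include <chrono>
#include <cstddef>
#include <ctime>
#include <deque>
#include <thread>
#include <vector>

struct OplogEntry { std::size_t bytes = 0; std::time_t wallTime = 0; };

void multiApply(const std::vector<OplogEntry>&) {}   // stub: apply ops to myself
void writeLastOp(std::time_t) {}                     // stub: record my progress
void notifyOthers() {}                               // stub: wake bgsync/notifier

void applyLoop(std::deque<OplogEntry>& opQueue, std::size_t replBatchLimitBytes,
               int slaveDelaySecs) {
    for (;;) {
        std::vector<OplogEntry> batch;               // pop up to the byte limit
        std::size_t total = 0;
        while (!opQueue.empty() &&
               total + opQueue.front().bytes <= replBatchLimitBytes) {
            total += opQueue.front().bytes;
            batch.push_back(opQueue.front());
            opQueue.pop_front();
        }
        if (batch.empty()) continue;
        // --slaveDelay: wait until the newest op is old enough to apply
        while (batch.back().wallTime > std::time(nullptr) - slaveDelaySecs)
            std::this_thread::sleep_for(std::chrono::seconds(1));
        multiApply(batch);                           // apply to myself
        writeLastOp(batch.back().wallTime);          // write lastOp
        notifyOthers();                              // rsBackgroundSync, rsSyncNotifier
    }
}
```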
Reads the oplog from the foreign (sync source) oplog and queues the entries into the OpQueue (a rough sketch follows the list).
- no wait
- Determines _currentSyncTarget (via getOplogReader).
- Reads the oplog over the network (via OplogReader).
- Pushes the entries into the OpQueue.
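A rough sketch of the producer side with the network calls stubbed out; I am not sure this matches the real control flow, so treat it as a guess:

```cpp
// Sketch: pick a sync target, tail its oplog, push entries into the queue.
#include <cstddef>
#include <ctime>
#include <deque>
#include <string>

struct OplogEntry { std::size_t bytes = 0; std::time_t wallTime = 0; };

std::string chooseSyncTarget() {                   // stub for getOplogReader's choice
    return "192.168.159.134:27017";
}
bool readOneOp(const std::string&, OplogEntry&) {  // stub for OplogReader
    return false;                                  // no more ops in this sketch
}

void backgroundSync(std::deque<OplogEntry>& opQueue) {
    for (;;) {
        std::string target = chooseSyncTarget();   // _currentSyncTarget
        OplogEntry op;
        while (readOneOp(target, op))              // read oplog over the network
            opQueue.push_back(op);                 // push into the OpQueue
        // the target is gone or rolled over: pick a target again
    }
}
```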
I could not understand the role of this one...
- no wait
- Cond-waits (notified from rsSync).
- _oplogMarker.more(): it seems to be used to compare against the rsBackgroundSync cursor. Maybe...
I have the feeling this thread eventually does nothing but logging... but that seems too complicated for mere logging.
I could not understand the role of the oplogMarker cursor.
I also could not understand this one precisely...
- no wait
- GhostSync::percolate() is called when a (ghost) slave connects to sync from me even though I am a slave myself.
percolate() compares my sync target with the ghost slave's.
But... I could not find how cyclic sync is avoided.
Mongod creates a thread for each client socket (a generic sketch follows).
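A generic sketch of the thread-per-connection model with plain POSIX sockets; this shows the pattern, not mongod's actual Listener code:

```cpp
// Sketch: accept() in a loop and spawn one detached thread per client socket.
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <thread>

void handleClient(int fd) {
    char buf[4096];
    while (read(fd, buf, sizeof(buf)) > 0) { /* dispatch one request */ }
    close(fd);                                 // client went away
}

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(27017);
    addr.sin_addr.s_addr = INADDR_ANY;
    bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listener, 128);
    for (;;) {
        int client = accept(listener, nullptr, nullptr);
        if (client < 0) continue;
        std::thread(handleClient, client).detach();   // one thread per socket
    }
}
```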