eos / QuarkDB · Commits · ab45f32a

Commit ab45f32a, authored Jun 27, 2018 by Georgios Bitzes

    Stall writers, in addition to readers, until leadership marker is applied

Parent: 0ee6b9f1
Pipeline #427838 passed with stages in 50 minutes and 2 seconds
1 changed file:
src/raft/RaftDispatcher.cc
@@ -284,30 +284,43 @@ LinkStatus RaftDispatcher::service(Connection *conn, RedisRequest &req) {
     return conn->moved(0, snapshot->leader);
   }
 
-  // read request: What happens if I was just elected as leader, but my state
-  // machine is behind leadershipMarker?
+  // What happens if I was just elected as leader, but my state machine is
+  // behind leadershipMarker?
   //
   // It means I have committed entries on the journal, which haven't been applied
   // to the state machine. If I were to service a read, I'd be giving out potentially
   // stale values!
   //
   // Ensure the state machine is all caught-up before servicing reads, in order
   // to prevent a linearizability violation.
+  //
+  // But we do the same thing for writes:
+  // - Ensures a leader is stable before actually inserting writes into the
+  //   journal.
+  // - Ensures no race conditions exist between committing the leadership marker
+  //   (which causes a hard-synchronization of the dynamic clock to the static
+  //   one), and the time we service lease requests.
+  //
+  // This adds some latency to writes right after a leader is elected, as we
+  // need some extra roundtrips to commit the leadership marker. But since
+  // leaders usually last weeks, who cares.
 
-  if(req.getCommandType() == CommandType::READ) {
-    if(stateMachine.getLastApplied() < snapshot->leadershipMarker) {
-      // Stall client request until state machine is caught-up, or we lose leadership
-      while(!stateMachine.waitUntilTargetLastApplied(snapshot->leadershipMarker, std::chrono::milliseconds(500))) {
-        if(snapshot->term != state.getCurrentTerm()) {
-          // Ouch, we're no longer a leader.. start from scratch
-          return this->service(conn, req);
-        }
-      }
-      // If we've made it this far, the state machine should be all caught-up
-      // by now. Proceed to service this request.
-      qdb_assert(snapshot->leadershipMarker <= stateMachine.getLastApplied());
-    }
+  if(stateMachine.getLastApplied() < snapshot->leadershipMarker) {
+    // Stall client request until state machine is caught-up, or we lose leadership
+    while(!stateMachine.waitUntilTargetLastApplied(snapshot->leadershipMarker, std::chrono::milliseconds(500))) {
+      if(snapshot->term != state.getCurrentTerm()) {
+        // Ouch, we're no longer a leader.. start from scratch
+        return this->service(conn, req);
+      }
+    }
+
+    // If we've made it this far, the state machine should be all caught-up
+    // by now. Proceed to service this request.
+    qdb_assert(snapshot->leadershipMarker <= stateMachine.getLastApplied());
+  }
 
+  if(req.getCommandType() == CommandType::READ) {
     // Forward request to the state machine, without going through the
     // raft journal.
     return conn->addPendingRequest(&redisDispatcher, std::move(req));
   }
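The stall above relies on stateMachine.waitUntilTargetLastApplied(target, timeout) blocking until the state machine's last-applied index reaches the leadership marker, and returning false on timeout so the loop can re-check whether the node is still leader for the snapshotted term. Below is a minimal, self-contained sketch of how such a wait could be built on a mutex and condition variable; it only illustrates the semantics the dispatcher assumes, not QuarkDB's actual StateMachine implementation, and the DummyStateMachine type with its recordLastApplied() member is hypothetical.

// Hypothetical sketch (not QuarkDB code): condition-variable based
// "wait until a target index has been applied" helper.
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <mutex>

class DummyStateMachine {
public:
  // Called by the applier thread after each journal entry has been applied.
  void recordLastApplied(int64_t index) {
    {
      std::lock_guard<std::mutex> lock(mtx);
      lastApplied = index;
    }
    cv.notify_all();
  }

  int64_t getLastApplied() {
    std::lock_guard<std::mutex> lock(mtx);
    return lastApplied;
  }

  // Block until lastApplied >= target, or until the timeout expires.
  // Returns true if the target was reached; false on timeout, so the caller
  // can re-check leadership (as RaftDispatcher::service does) and wait again.
  bool waitUntilTargetLastApplied(int64_t target, std::chrono::milliseconds timeout) {
    std::unique_lock<std::mutex> lock(mtx);
    return cv.wait_for(lock, timeout, [&] { return lastApplied >= target; });
  }

private:
  std::mutex mtx;
  std::condition_variable cv;
  int64_t lastApplied = 0;
};

Under these semantics, the 500-millisecond timeout in the dispatcher's while loop is simply how often a stalled request wakes up to re-check the term: if leadership was lost in the meantime, the request is restarted from scratch via this->service(conn, req) rather than being left hanging.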