<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Mon, Oct 10, 2016 at 10:57 AM, Paul Bone <span dir="ltr"><<a href="mailto:paul@bone.id.au" target="_blank">paul@bone.id.au</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">> If a page is "nearly full" then it is skipped, on the assumption that it<br>
> takes a lot of time for little benefit (speed/space tradeoff).<br>
<br>
</span>That's a reasonable assumption. Has anyone tested different policies here?<br>
As I understand it, sweeping should be relatively fast, especially for mostly<br>
full pages. It starts by checking the mark bits (a medium-sized but<br>
sequential memory read) and then threads any unmarked objects it finds<br>
together into a linked list (writes to a number of different cache lines).<br>
<br>
SweepCost = ReadMarkBits + N*WriteLinks<br>
SweepBenefit = N<br>
<br>
Sweeping a mostly full page is the cheapest sweep, but has little benefit.<br>
Sweeping a mostly empty page has a higher cost (though the writes are amortized if<br>
there is more than one free object per cache line) and a much greater benefit. It seems that<br>
skipping mostly full pages mainly reduces how frequently sweeping is<br>
required, and only affects the cost/benefit of sweeping slightly. Please<br>
tell me if you disagree; this is the most I've actually worked with memory<br>
management.<br></blockquote><div><br></div><div>It looks as if this feature was added in version 5.0, and the implementation then changed to the current one in version 6.0.</div><div><br></div><div>Those releases were quite some time ago (I couldn't immediately find a dated release list, but I think 5.0 was around 2000?), and computers have changed a lot, so the same assumptions might not hold now. It might be worth someone revisiting them.</div><div><br></div></div></div></div>
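Paul's cost/benefit model from the quoted message can be sketched numerically. This is a toy illustration only: the constants (objects per page, the relative costs of scanning the mark bitmap and of threading one object onto a free list) are made-up assumptions, not measurements of any real collector.

```python
# Toy model of the sweep cost/benefit estimate from the quoted message.
# All constants below are illustrative assumptions, not measured values.
OBJECTS_PER_PAGE = 128
READ_MARK_BITS = 1.0   # assumed fixed cost to scan one page's mark bitmap
WRITE_LINK = 0.5       # assumed cost per free object threaded onto the free list

def sweep_cost(n_free):
    """SweepCost = ReadMarkBits + N * WriteLinks"""
    return READ_MARK_BITS + n_free * WRITE_LINK

def sweep_benefit(n_free):
    """SweepBenefit = N (free objects reclaimed)"""
    return n_free

if __name__ == "__main__":
    # Compare pages at different occupancy levels: a nearly full page is
    # cheap to sweep but reclaims little, a nearly empty page costs more
    # in absolute terms but has a far better benefit/cost ratio.
    for occupancy in (0.99, 0.9, 0.5, 0.1):
        n_free = round(OBJECTS_PER_PAGE * (1 - occupancy))
        ratio = sweep_benefit(n_free) / sweep_cost(n_free) if n_free else 0.0
        print(f"{occupancy:>4.0%} full: free objects={n_free:3d} "
              f"cost={sweep_cost(n_free):6.1f} benefit/cost={ratio:5.2f}")
```

Under these assumptions the benefit/cost ratio approaches 1/WRITE_LINK as a page empties, which matches the observation above: skipping nearly full pages forgoes only a small absolute benefit per page.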