C# performance curiosity


I'm really curious about the program below (yes, it was run in release mode without a debugger attached). The first loop assigns a new object to each element of an array, and it takes about a second to run.

So I wondered which part was taking the time: object creation or assignment. I created a second loop to test the time required to create the objects, and a third loop to test the assignment time, and both run in a few milliseconds. What's going on?

using System;
using System.Diagnostics;

static class Program
{
    const int count = 10000000;

    static void Main()
    {
        var objects = new object[count];
        var sw = new Stopwatch();

        sw.Restart();
        for (var i = 0; i < count; i++)
        {
            objects[i] = new object();
        }
        sw.Stop();
        Console.WriteLine(sw.ElapsedMilliseconds); // ~800 ms

        sw.Restart();
        object o = null;
        for (var i = 0; i < count; i++)
        {
            o = new object();
        }
        sw.Stop();
        Console.WriteLine(sw.ElapsedMilliseconds); // ~40 ms

        sw.Restart();
        for (var i = 0; i < count; i++)
        {
            objects[i] = o;
        }
        sw.Stop();
        Console.WriteLine(sw.ElapsedMilliseconds); // ~50 ms
    }
}

When an object that occupies less than 85,000 bytes of RAM (and is not an array of double) is created, it is placed in an area of memory called the generation 0 (gen0) heap. Every time the gen0 heap grows to a certain size, every object in the gen0 heap to which the system can find a live reference is copied to the gen1 heap; the gen0 heap is then bulk-erased so it has room for more new objects. If the gen1 heap reaches a certain size, the objects there to which a reference exists are copied to the gen2 heap, whereupon the gen1 heap can be bulk-erased.
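You can watch this promotion happen directly with GC.GetGeneration, which reports the generation an object currently lives in. A minimal sketch (my addition, not part of the original answer; the exact promotion behavior can vary with runtime version and GC mode):

using System;

static class GenerationDemo
{
    static void Main()
    {
        var o = new object();
        Console.WriteLine(GC.GetGeneration(o)); // 0 - freshly allocated in gen0

        GC.Collect(); // o survives the collection because it is still referenced
        Console.WriteLine(GC.GetGeneration(o)); // typically 1 - promoted to gen1

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o)); // typically 2 - promoted to gen2

        GC.KeepAlive(o);
    }
}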

If many objects are created and abandoned, the gen0 heap will repeatedly fill up, but few objects from the gen0 heap will have to be copied to the gen1 heap. Consequently, the gen1 heap will fill slowly, if at all. By contrast, if most of the objects in the gen0 heap are still referenced when the gen0 heap gets full, the system has to copy those objects to the gen1 heap. This forces the system to spend time copying objects, and it may also cause the gen1 heap to fill up enough that it has to be scanned for live objects, and the live objects there will have to be copied again to the gen2 heap. All of this takes more time.
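One way to observe this (again my sketch, not from the original answer) is to compare how many collections of each generation the two kinds of loops trigger, using GC.CollectionCount; if the reasoning above is right, the retaining loop should show noticeably more gen1 activity:

using System;

static class CollectionCountDemo
{
    const int count = 10000000;

    static void Main()
    {
        var objects = new object[count];

        int gen0 = GC.CollectionCount(0), gen1 = GC.CollectionCount(1);

        // Retaining loop: every allocation stays reachable through the array.
        for (var i = 0; i < count; i++)
            objects[i] = new object();

        Console.WriteLine($"retained:  +{GC.CollectionCount(0) - gen0} gen0, +{GC.CollectionCount(1) - gen1} gen1");

        gen0 = GC.CollectionCount(0);
        gen1 = GC.CollectionCount(1);

        // Abandoning loop: each object becomes garbage on the next iteration.
        object o = null;
        for (var i = 0; i < count; i++)
            o = new object();

        Console.WriteLine($"abandoned: +{GC.CollectionCount(0) - gen0} gen0, +{GC.CollectionCount(1) - gen1} gen1");

        GC.KeepAlive(o);
        GC.KeepAlive(objects);
    }
}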

Another issue slows things down in the first test: when trying to identify live gen0 objects, the system can ignore gen1 and gen2 objects provided they haven't been touched since the last gen0 collection. During the first loop, the objects array is touched constantly; consequently, every gen0 collection has to spend time processing it. During the second loop, it's not touched at all, so even though there are just as many gen0 collections, they won't take as long to perform. During the third loop, the array is touched constantly, but no new heap objects are created, so no garbage-collection cycles are necessary and it won't matter how long they would take.

If you added a fourth loop that created and abandoned an object on each pass, but also stored into an array slot a reference to a pre-existing object, I would expect it to take longer than the combined times of the second and third loops even though it would perform the same operations. Not as much time as the first loop, perhaps, since very few of the newly-created objects would need to be copied out of the gen0 heap, but longer than the second because of the extra work required to determine which objects were still live.
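A sketch of what that fourth loop might look like, reusing sw, objects, o, and count from the program above (the unused temp variable is my addition, just to make the abandoned allocation explicit):

sw.Restart();
for (var i = 0; i < count; i++)
{
    var temp = new object(); // created and immediately abandoned
    objects[i] = o;          // the slot stores a reference to a pre-existing object
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);

If you want to probe things further, it might be interesting to run a fifth test with a nested loop: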

for (int ii = 0; ii < 1024; ii++)
    for (int i = ii; i < count; i += 1024)
        ...

I don't know the exact details, but .NET tries to avoid having to scan entire large arrays when only a small part has been touched, by subdividing them into chunks. If a chunk of a large array is touched, the references within that chunk must be scanned, but references stored in chunks that haven't been touched since the last gen0 collection may be ignored. Breaking up the loop as shown above might cause .NET to end up touching most of the chunks in the array between gen0 collections, quite possibly yielding a slower time than the first loop.
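Filled out into a runnable form, that fifth test might look like the sketch below (my expansion of the snippet above, under the same assumptions as the first loop; the stride of 1024 is meant to spread consecutive stores across the whole array):

sw.Restart();
for (int ii = 0; ii < 1024; ii++)
{
    // Stride through the array so consecutive allocations land in widely
    // separated slots, touching many chunks between gen0 collections.
    for (int i = ii; i < count; i += 1024)
    {
        objects[i] = new object();
    }
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);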

