Friday, 23 November 2012

Using the gcc cleanup attribute

Section 6.36 of the GCC manual describes the rather interesting cleanup variable attribute.   This allows one to specify a function to call when the variable goes out of scope.

The caveat is that the cleanup attribute can only be used with auto function scope variables, so it cannot be used on function parameters or static variables.

So what about a simple example?  How about automatically freeing variables that go out of scope?  This way we can be lazy and get automatic garbage collection without having to remember to free() each allocation.  Now, I'm not recommending this as good practice, I am just using it as an example.

Below I define a macro autofree that we use on the auto variables that we want to garbage collect.  When a variable goes out of scope, __autofree() is called and is passed the address of that variable.  GCC insists that the cleanup helper function takes a pointer to the type of the variable being cleaned up.  To handle any particular type I gave __autofree() a void * argument and then cast this to a void ** to allow me to free the memory that the variable pointed to, so a little bit of legitimate sleight of hand is being used here.

 #include <stdlib.h>
 #include <stdio.h>

 #define autofree __attribute__((cleanup(__autofree)))

 /* called by GCC with the address of the variable going out of scope */
 void __autofree(void *p)
 {
     void **_p = (void **)p;

     printf("free -> %p\n", *_p);
     free(*_p);
 }

 void *myalloc(size_t sz)
 {
     void *ptr;

     if ((ptr = malloc(sz)) == NULL) {
         fprintf(stderr, "malloc failed.\n");
         exit(1);
     }
     printf("malloc -> %p\n", ptr);

     return ptr;
 }

 int main(int argc, char **argv)
 {
     autofree char *x = myalloc(32);

     {
         autofree int *y = myalloc(64);

         printf("y = %p\n", (void *)y);
     }
     printf("x = %p\n", (void *)x);

     return 0;
 }

In this example, I malloc memory for x and then y.  Then y and then x go out of scope, and the cleanup function __autofree() frees the memory in that order:

 malloc -> 0x1504010  
 malloc -> 0x1504040  
 y = 0x1504040  
 free -> 0x1504040  
 x = 0x1504010  
 free -> 0x1504010  

I'm sure there are other ways that this can be creatively used (or abused...); as it stands it is a GCC extension so it is not portable C in any shape or form.
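One creative use that springs to mind is automatically closing files.  Below is a minimal sketch along the same lines; autoclose and __autoclose are simply names I have made up for this illustration:

 #include <stdio.h>

 #define autoclose __attribute__((cleanup(__autoclose)))

 /* called with the address of the FILE * going out of scope */
 void __autoclose(void *p)
 {
     FILE **fp = (FILE **)p;

     if (*fp)
         fclose(*fp);
 }

 int main(void)
 {
     autoclose FILE *fp = fopen("/etc/hostname", "r");
     char buf[256];

     if (fp && fgets(buf, sizeof(buf), fp))
         printf("%s", buf);

     /* fp is fclose()'d automatically when it goes out of scope */
     return 0;
 }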

Saturday, 17 November 2012

Benford's Law with real world data.

If one has a large enough real life source of data (such as the sizes of files in the file system) and looks at the distribution of the first digit of these values, then one will find something that at first glance is rather surprising.  The leading digit 1 appears about 30% of the time, and the frequency drops as the digit increases, with 9 appearing only about 5% of the time.  This seemingly curious frequency distribution is commonly known as Benford's Law, or the first digit law.

The probability P of digit d can be expressed as follows:

P(d) = log10(1 + 1 / d)

..where d is any integer from 1 to 9 inclusive.  So for each leading digit in the data, the distribution works out to be about:

 Digit  Probability
   1      0.301
   2      0.176
   3      0.125
   4      0.097
   5      0.079
   6      0.067
   7      0.058
   8      0.051
   9      0.046
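These expected values can be reproduced with a few lines of C (link with -lm):

 #include <stdio.h>
 #include <math.h>

 int main(void)
 {
     int d;

     /* expected Benford's Law distribution: P(d) = log10(1 + 1/d) */
     for (d = 1; d <= 9; d++)
         printf("%d  %.3f\n", d, log10(1.0 + 1.0 / (double)d));

     return 0;
 }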

But how does this hold up with some "real world" data?  Can it really be true?  Well, for my first experiment, I analysed the leading digit of the sizes of all the source files in the current Linux source tree and compared that to Benford's Law:

[Graph: leading digit distribution of Linux source file sizes compared to Benford's Law]

So, this is convincing enough.  How about something more exotic?  For my second experiment I counted up the number of comments starting with /* in each of the C source files in the Linux source tree and again looked at the distribution of the leading digits.  I was hazarding a guess that there is a reasonable number of comments in each file (knowing the way some code is commented, this may be pushing my luck).  Anyhow, the data generated produces a distribution that obeys Benford's Law too:

[Graph: leading digit distribution of per-file /* comment counts in the Linux C sources compared to Benford's Law]

Well, that certainly shows that kernel developers are sprinkling enough comments in the kernel source to be statistically meaningful.  Whether the comments themselves are meaningful is another matter...

How about one more test?  This time I gathered the length of every executable in /usr/bin and plotted the distribution of the leading digits from this data:

[Graph: leading digit distribution of /usr/bin executable sizes compared to Benford's Law]

..this data set has far fewer files to analyse, so the distribution deviates a little, but the trend is still rather good.
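For anybody who wants to try this at home, below is a minimal sketch of the file size experiment.  This is not the exact code I used, just an illustration of the idea: walk a directory tree with nftw(), take the leading digit of each regular file's size and print the relative frequencies:

 #define _XOPEN_SOURCE 500
 #include <ftw.h>
 #include <stdio.h>
 #include <sys/stat.h>

 static long count[10];

 /* tally the leading digit of the size of each regular file */
 static int tally(const char *path, const struct stat *sb,
                  int typeflag, struct FTW *ftwbuf)
 {
     if (typeflag == FTW_F && sb->st_size > 0) {
         long long n = sb->st_size;

         while (n >= 10)
             n /= 10;
         count[n]++;
     }
     return 0;
 }

 int main(int argc, char **argv)
 {
     long total = 0;
     int d;

     if (argc != 2) {
         fprintf(stderr, "Usage: %s path\n", argv[0]);
         return 1;
     }
     if (nftw(argv[1], tally, 20, FTW_PHYS) < 0) {
         perror("nftw");
         return 1;
     }
     for (d = 1; d <= 9; d++)
         total += count[d];
     for (d = 1; d <= 9; d++)
         printf("%d  %.3f\n", d, total ? (double)count[d] / total : 0.0);

     return 0;
 }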

As mentioned earlier, one has to have a large set of data for this to work well.  Interesting as this may be, what kind of practical use is it?  It can be applied to accountancy - if one has a large enough set of data in the accounts and the leading digits of the data do not fit Benford's Law, then maybe one should suspect that somebody has been fiddling the books.  Humans are rather poor at making up lots of "random" values that still fit Benford's Law.

One more interesting fact is that it applies even if one rescales the data.  For example, if you are looking at accounts in terms of £ sterling and convert them into US dollars or Albanian Lek, the rescaled data still obeys Benford's Law.  Thus, if I re-ran my tests and measured file sizes not in bytes but in 512 byte blocks, the leading digit distribution would still obey Benford's Law.  Nice.

How can we apply this in computing? Perhaps we could use it to detect tampering with the sizes of a large set of files.  Who knows?  I am sure somebody can think of a useful way to use it.   I just find it all rather fascinating.

Friday, 16 November 2012

Non linear characteristics in a draining battery on the Nexus 7

Measuring power consumption on low power devices really is not as simple as running tools such as PowerTop and then assuming the data is trustworthy.  I shall explain why.

With Ubuntu on the Nexus 7, the battery driver originally provided battery capacity in terms of percentage full, which lacked the precision to make any sane power consumption estimates.  We tweaked the battery driver so that we could get battery capacity in terms of uWh from the bq27541 battery fuel gauge.  From this, one can measure the change in capacity over time and estimate the power consumed by the device.

For the Ubuntu 12.04 Precise release I wrote the lightweight power measurement tool "powerstat" to try to determine power consumption using the change in battery capacity.  Powerstat gathers changes in the battery capacity level and, by using a simple sliding window over the samples, gives an estimate of the power being consumed over time.  With laptops that consume a lot of power this provides a reasonable, if rough, estimate of power consumption.
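The core idea is simple enough to sketch in C.  Note that the sysfs path below is an assumption for the sake of illustration - the actual file depends on the battery driver - and powerstat itself does rather more work (sliding window averaging, statistics):

 #include <stdio.h>
 #include <stdlib.h>
 #include <unistd.h>

 /* hypothetical sysfs file exposing battery capacity in uWh */
 #define ENERGY_PATH "/sys/class/power_supply/battery/energy_now"

 static long long read_energy_uWh(void)
 {
     FILE *fp;
     long long uWh;

     if ((fp = fopen(ENERGY_PATH, "r")) == NULL ||
         fscanf(fp, "%lld", &uWh) != 1) {
         fprintf(stderr, "cannot read %s\n", ENERGY_PATH);
         exit(EXIT_FAILURE);
     }
     fclose(fp);
     return uWh;
 }

 int main(void)
 {
     const int interval = 10;    /* seconds between samples */
     long long prev = read_energy_uWh(), now;

     for (;;) {
         sleep(interval);
         now = read_energy_uWh();
         /* uWh dropped per interval -> watts: * 3600 / (interval * 1e6) */
         printf("%.3f W\n", (double)(prev - now) * 3600.0 /
             ((double)interval * 1000000.0));
         prev = now;
     }
     return 0;
 }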

The tweak to the Nexus 7 battery driver allows powerstat to work on the Nexus 7 running Ubuntu.  So how trustworthy is the battery data from the battery fuel gauge?  Is the data reliable if we repeat the test under the same conditions?  Do we get consistent readings over time?

For my first set of tests, I fully charged the Nexus 7 and then fully loaded the 4 CPUs with busy loops and then ran multiple powerstat tests; powerstat gathers samples over 8 minutes and estimates power consumption. It also calculates the standard deviation from these samples to give us some idea of the variability of the battery power measurements.    For each powerstat test I logged the battery voltage level, the % battery capacity (normalized to a range of 0..1 to make it easier to plot), the estimated power consumption (with its standard deviation) and then plotted the results:
[Graph: battery voltage, normalized capacity and estimated power (with standard deviation) against time, fully loaded CPUs]

With this test the machine is in a steady state; we are not changing the load on the CPUs, so one should expect a steady power measurement.  But as one can see, the battery gauge informs us that the voltage is dropping over time (from ~4V down to ~3.25V) and the estimated power also varies, from 4.6W down to 3.3W.  So, clearly, the power estimate will depend on the level of charge in the battery.

I also measured an idle machine:

[Graph: battery voltage and estimated power against time, idle machine]

Again, the voltage drops over time and the estimated power drops too.  More interesting is that the estimated power measurement is not particularly smooth over time, as shown by the plot of the standard deviation.  We can therefore conclude that a lightly loaded machine shows a lot of variability in the estimated power consumption data, which means we cannot realistically measure subtle power optimization tweaks made to the software; there is just too much variability in the data.

I re-ran the idle test over several days, running from the same fully charged state to a completely empty battery, and compared runs.  I got variability in the duration of the test (+/- 5%).  Comparing estimated power consumption at the 100%, 75%, 50% and 25% battery capacity points also shows a lot of variability.  This means one cannot get accurate and repeatable power estimates even when the battery is at specific levels of charge.

So next time somebody tells you that the latest changes made their low power device suck more (or less!) power than the previous release, and their findings are based on data derived from a battery fuel gauge, take it with a pinch of salt.

The only reliable way to measure instantaneous power consumption is using specialised precision equipment that has been accurately calibrated.

Saturday, 3 November 2012

Counting code size with SLOCCount

David A. Wheeler's SLOCCount is a useful tool for counting lines of code in a software project.  It is simple to use: just provide it with the path to the source code and let it grind through all the source files.  The resulting output is a breakdown of the code line count for each type of source, based on the programming language.

SLOCCount also estimates development time in person-years as well as the number of developers and the cost to develop.  One can override the defaults and specify parameters such as costs per person, overhead and effort to make it match to your development model.
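A typical invocation is just:

 sloccount /path/to/project

To override the cost model, options such as --personcost and --overhead can be used (these are from the man page as I recall them, so do check sloccount --help for the exact names and defaults); for example:

 sloccount --personcost 75000 --overhead 2.0 /path/to/project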

Of course, like all tools that produce metrics, it can be abused, for example by using it as a meaningless metric of programmer productivity.  Counting lines of code does not really measure project complexity: a vexing bug that took 2 days to figure out and resulted in a 1 line fix is obviously more expensive than a poorly written 500 line function that introduces no noticeable new functionality.  As a rule of thumb, SLOCCount is a useful tool to get an idea of the size of a project and some idea of the cost to develop it.  There are of course more complex ways to examine project source code, such as cyclomatic complexity metrics, and there are specific tools such as Panopticode that do this.

As a small exercise, I gave SLOCCount the task of counting the lines of code in the Linux kernel from version 2.6.12 to 3.6 and used the default settings to produce an estimated cost to develop each version.

[Graph: total lines of code in the Linux kernel, versions 2.6.12 to 3.6]

It is interesting to see that the rate of code being added seemed to increase around the 2.6.28 release.   So what about the estimated cost to develop?..

[Graph: SLOCCount estimated development cost for each kernel version]

This is of course pure conjecture.  The total line count does not take into account patches that remove code, and it assumes that cost is directly related to lines of code.  Also, code complexity makes some lines of code far more expensive to develop than others.  It is interesting to see that each release adds an average of 184,000 lines of code, which SLOCCount estimates to cost about $8.14 million, or ~$44.24 per line of code; I am not sure how realistic that really is.

Anyhow, SLOCCount is easy to use and provides some very useful rule-of-thumb analysis on project size and costs.