How long
How long does it take...
- To install a specific module?
- To install the ISO?
- To recompile all my modules?
These questions get asked a lot by new Lunar users, and they are very logical questions: Lunar is, after all, a source distro and requires you to compile all your applications from source, which can sometimes take a very long time!
Influencing Factors
- System (CPU) speed
Double the CPU clock speed and you roughly double the number of instructions the system can execute per unit of time. So, as a rule of thumb, a faster CPU results in faster compiles.
- Memory size
Compiling can require large amounts of memory, especially when compiling C++ applications. As a rough rule, a compilation goes about 50% faster each time you double the memory. So, if you only have 128MB of memory, increasing it to 512MB (two doublings) would speed up your compiles by a factor of roughly 1.5 x 1.5 = 2.25. This might be worth the trip to the hardware store.
- Idle time
Compiling takes a long time. Every second that your system is doing something else, especially interactive tasks, reduces the amount of time that your system can compile applications.
- System architecture and components
Needless to say, a compile on a SATA hard disk is a lot faster than on an ATA-33 system. Especially with larger modules this is an important factor.
- Parallel make jobs
If you enable multiple parallel make jobs using lunar / Options / Optimize Architecture and you have a machine with more than one processor or core, most modules will build in much less time. The same applies if you have multiple machines and use distcc to share the build processes among your systems.
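As a worked example of the memory rule of thumb above (a hypothetical sketch; the 50%-per-doubling figure is only a rough guide, and the function name is made up for illustration):

```python
# Rough rule from the text: each doubling of memory gives ~50% faster compiles.
# Real gains depend on the workload and on how much swapping was going on.
def speedup(mem_before_mb, mem_after_mb):
    doublings = 0
    mem = mem_before_mb
    while mem * 2 <= mem_after_mb:
        mem *= 2
        doublings += 1
    return 1.5 ** doublings

print(speedup(128, 512))  # two doublings: 1.5 * 1.5 = 2.25x
```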
Some basic numbers
The following numbers are from a Pentium 4 system running at 2.9 GHz, with 1 GB of memory, which /proc/cpuinfo rates at 5864 bogomips. This was a fairly normal system back in 2005; if you wish to compare it with your own system, keep in mind that it might be relatively slower or faster than yours depending on the influencing factors mentioned earlier.
The modules listed below are very common modules, so they should give you some good insight into how long it takes to compile and install something. Based on the size you can make rough estimates for new modules that you wish to install.
module   source size   installed size   compile time
gcc      61540 KB      97188 KB         80m35s
glibc    15340 KB      138268 KB        32m47s
gtk+-2   18672 KB      64884 KB         15m56s
perl     14792 KB      56844 KB         9m43s
However, it is not easy to see any correlation between source size, installed size and compile time from such a small sample. Not all modules are built and installed in the same way. The gcc compiler has a very elaborate bootstrap phase that takes almost half the total build time. The glibc shown here includes all possible locales. gtk+-2 ships with a lot of bitmaps and documentation. Many module-specific factors are at work here.
Estimating compile time
If we plot the compile time against the size of the installed product or against the source code size (easily available, since sources come in compressed form), we get the very loose correlation shown in the graph below, where the straight lines are produced using gnuplot's fit function:
http://lunar-linux.org/~engelsman/how-long/module-compile-times-raw.jpg
The plot shows that there are an awful lot more small modules than large ones among the 400+ modules installed on this machine, but the range of values obscures any details. If we plot the same basic data using LOG values, we get the following graph:
http://lunar-linux.org/~engelsman/how-long/module-compile-times-raw.jpg
As you can see, there is a faint correlation with either installed size or source size, but it is more interesting to predict compile times from source size, because by the time we know the installed size the compile has already finished anyway. Where the source-size line crosses the x-axis, LOG(Compile time) is 0.0 and LOG(Source size) is almost 3.0; in other words, a compile time of 1s corresponds to a source size of approximately 20KB. This gives you a very crude compile speed of about 20KB of compressed source per second, and allows you to pre-estimate the compile time, assuming that your system is about the same speed as my reference system.
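That crude speed can be turned into a small estimator. This is only a sketch: the function name is made up, and the 20 KB/s figure applies to the 2005 reference machine, so scale it to your own hardware.

```python
# Crude estimator from the fitted line: on the reference machine,
# roughly 20 KB of compressed source compiles per second.
REF_SPEED_KBPS = 20

def estimate_compile_seconds(source_kb, speed_kbps=REF_SPEED_KBPS):
    """Estimate compile time from the compressed source size in KB."""
    return source_kb / speed_kbps

# perl's 14792 KB tarball is estimated at about 740 s (~12 min);
# the measured time was 9m43s, so the estimate is in the right ballpark.
print(round(estimate_compile_seconds(14792)))
```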
We can do some more elaborate math on this and calculate an "index" for all modules. The index of a module is simply the amount of compressed source code that lunar compiles and installs per second: we divide the size of the compressed source package (in kilobytes) by the total compile/install time (in seconds):
- gcc took 80m35s to compile + install 61520 KB of compressed source code, i.e. approx 13 KB/s
- glibc took 32m47s to compile + install 15340 KB of compressed source code, i.e. approx 8 KB/s
- gtk+-2 took 15m56s to compile + install 18672 KB of compressed source code, i.e. approx 19 KB/s
- perl took 9m43s to compile + install 14792 KB of compressed source code, i.e. approx 25 KB/s
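The per-module arithmetic above can be reproduced in a few lines (a sketch; the function name is made up for illustration):

```python
# Index = compressed source size (KB) divided by compile+install time (s).
def index_kb_per_s(source_kb, minutes, seconds):
    return source_kb / (minutes * 60 + seconds)

modules = [("gcc", 61520, 80, 35), ("glibc", 15340, 32, 47),
           ("gtk+-2", 18672, 15, 56), ("perl", 14792, 9, 43)]
for name, kb, m, s in modules:
    print(f"{name}: ~{index_kb_per_s(kb, m, s):.1f} KB/s")
```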
TO BE UPDATED... If we do this for all modules we end up with an average index of about 27, which means that on average lunar compiles and installs 27KB of compressed source code per second. Since my /var/spool/lunar holds about 1.4GB of source files, it would take about 51851 seconds to compile and install all of them. That's about 14.5 hours! Of course that number is not really right - we haven't taken into account that the index varies with module size.
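The back-of-the-envelope total works out like this (a sketch; it assumes 1 GB = 1,000,000 KB, which is what the 51851 s figure implies):

```python
# Total estimate: all sources in /var/spool/lunar at the average index.
spool_kb = 1.4 * 1000 * 1000   # ~1.4 GB of source archives, in KB
avg_index = 27                 # average index: 27 KB/s across all modules
total_s = spool_kb / avg_index
# the text rounds this to roughly 14.5 hours
print(f"~{total_s:.0f} s, i.e. ~{total_s / 3600:.1f} hours")
```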
Indexing modules
Now that we can index all modules, it is worth taking a closer look at the numbers. Most importantly, the index of 27 is averaged across all modules, and we know that this number is off for bigger modules: for modules larger than 100KB of source, it is already 32, and for modules over 1000KB it rises to 52. This changes the perspective considerably and shows that smaller modules take relatively longer to install. This is not surprising: the overhead is relatively much larger for them, since all the autoconf and administrative steps have to be performed for small modules as well.