CPU evolution: an increase in complexity

Moore’s law has driven computing power for years and will probably continue for some time, even though its end is coming. However, while during the first 20 years the increase in power was directly related to frequency, that is no longer the case. Back then, changing a CPU had a direct impact on existing software: twice the frequency meant twice the power.

CPU manufacturers then added extra instruction sets (MMX, SSE, …) to increase efficiency, but using them was not a main concern, at least in the scientific community. At most, these instructions were used through the compiler’s auto-vectorization. The frequency race was still going on, so using these instructions explicitly was probably not necessary. A minimal sketch of what auto-vectorization works on is shown below.
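The sketch assumes GCC or Clang; the file name and the `-fopt-info-vec` reporting flag (GCC) are only illustrative, the point is that a simple, branch-free loop over contiguous arrays is exactly what the compiler can turn into SSE/AVX instructions on its own.

```cpp
// saxpy.cpp - a simple loop that modern compilers can auto-vectorize.
// Build (GCC/Clang): g++ -O3 -march=native -fopt-info-vec saxpy.cpp
// With GCC, -fopt-info-vec reports which loops were actually vectorized.
#include <cstddef>
#include <cstdio>
#include <vector>

// y = a*x + y over contiguous arrays: independent iterations,
// unit-stride accesses, no branches - an ideal auto-vectorization target.
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < y.size(); ++i) {
        y[i] = a * x[i] + y[i];
    }
}

int main() {
    std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
    saxpy(3.0f, x, y);
    std::printf("y[0] = %f\n", y[0]);
    return 0;
}
```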

At the beginning of the 21st century (2006), multi-core CPUs came on the market and, for the first time, frequency was no longer a good indicator of the available computing power. The first consequence is that a new CPU does not necessarily mean faster programs. Software needs to be designed for this architecture, and the modifications are not trivial (see the sketch below). I thought that core counts would keep increasing over time, so that if software was correctly designed for parallel execution, a new CPU would still mean faster programs. However, I would have expected large-scale availability of 8-core processors by this year, since the first dual-core was released in 2006 and the first quad-core around 2008. Vector instructions are, however, improving over time, with the new 256-bit wide vectors released in 2012.
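As a rough idea of what “designed for parallel execution” means in practice, here is a minimal OpenMP sketch: the `#pragma omp parallel for` directive and the `-fopenmp` flag are standard, while the file name and the data are only illustrative. The work has to be split into independent chunks before extra cores help at all.

```cpp
// parallel_sum.cpp - spreading independent work over the available cores.
// Build (GCC/Clang): g++ -O3 -fopenmp parallel_sum.cpp
#include <cstddef>
#include <cstdio>
#include <vector>
#include <omp.h>

int main() {
    const std::size_t n = 1 << 24;
    std::vector<double> data(n, 0.5);

    double sum = 0.0;
    // Each thread handles a chunk of the iterations; the reduction clause
    // combines the per-thread partial sums safely at the end.
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < static_cast<long>(n); ++i) {
        sum += data[i] * data[i];
    }

    std::printf("threads available: %d, sum = %f\n",
                omp_get_max_threads(), sum);
    return 0;
}
```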

In parallel, we are seeing the comeback of co-processor boards. It started with graphics cards used as computing units, and now Intel is entering the game with its new Xeon Phi. These units hold outstanding computing power, but the complexity added to use them efficiently will, I guess, be a very hard challenge, especially in science where we are not professional programmers. The Xeon Phi supports 512-bit vectors (they can operate on 16 single-precision numbers or 8 double-precision numbers), while graphics cards get their power from their many stream processors.

Today, the use of vector instructions cannot be avoided, as the potential speedup is substantial. Multi-CPU systems might come back as a response to the lack of improvement in multi-core systems, if they become affordable. The next few years will see the battle between graphics cards and other dedicated computing units like the Xeon Phi. Nevertheless, vector instructions will likely become more and more important, given their presence in CPUs, graphics cards and the new Xeon Phi.
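To make the vector idea concrete, here is a minimal sketch of explicit 256-bit vector use through AVX intrinsics, assuming an AVX-capable CPU; in real code one would usually let the compiler auto-vectorize instead of writing intrinsics by hand.

```cpp
// avx_add.cpp - adding arrays 8 floats at a time with 256-bit AVX vectors.
// Build (GCC/Clang, on an AVX-capable CPU): g++ -O2 -mavx avx_add.cpp
#include <immintrin.h>
#include <cstdio>

int main() {
    alignas(32) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    alignas(32) float b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    alignas(32) float c[8];

    __m256 va = _mm256_load_ps(a);     // load 8 packed single-precision values
    __m256 vb = _mm256_load_ps(b);
    __m256 vc = _mm256_add_ps(va, vb); // one instruction adds all 8 lanes
    _mm256_store_ps(c, vc);

    for (int i = 0; i < 8; ++i) std::printf("%.0f ", c[i]);
    std::printf("\n");
    return 0;
}
```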

In conclusion, the complexity of algorithms is increasing a lot in order to fully use the available computing power. A new bottleneck might appear where the power is there but only very few scientists are able to use it. Since computer programming and architecture are not taught in scientific curricula at university, software development might become more and more difficult.
