A look inside the fastest supercomputer in Europe

What is now the fastest supercomputer in Europe was recently unveiled at a research institute in Jülich, Germany. The computer, named Jugene, is capable of a massive one quadrillion computing operations per second.
Here are some facts about the Jugene supercomputer:

  • Based on IBM’s Blue Gene/P architecture.
  • Computing capacity: 1 petaflops (one quadrillion floating-point operations per second).
  • That equals the computing power of more than 50,000 PCs.
  • 294,912 processor cores.
  • Processor type: 32-bit PowerPC 450 at 850 MHz.
  • 144 terabytes of RAM.
  • Mounted in 72 racks.
  • Network bandwidth: 5.1 gigabytes/second with a 160 nanosecond latency.
  • Power input: 2.2 megawatts.
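As a rough sanity check, the 1-petaflops figure lines up with the core count and clock speed listed above if you assume each PowerPC 450 core can retire 4 floating-point operations per cycle (two fused multiply-add pipelines; the per-cycle figure is an assumption here, not from the spec sheet):

```python
# Rough peak-FLOPS sanity check for Jugene, using the figures above.
# Assumption: 4 floating-point ops per core per cycle (dual FMA pipelines).
cores = 294_912
clock_hz = 850e6          # 850 MHz
flops_per_cycle = 4       # assumed, not from the article

peak_flops = cores * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e15:.2f} petaflops")  # ~1.00 petaflops
```

The product comes out to just over 1.0 × 10^15, which matches the advertised capacity.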

Wish we could get one of these for Pingdom. Couldn’t cost all that much, could it? 😉

Packing those CPUs tightly together

Each of Jugene’s 72 racks holds 1,024 compute nodes, and each node has 2 gigabytes of RAM (totaling 144 terabytes for the whole system). This is what the compute nodes look like and how they are packed together:
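The memory total is easy to verify from those numbers (binary units assumed, i.e. 1 TB = 1,024 GB):

```python
# Check the memory math: 72 racks x 1024 nodes/rack x 2 GB/node.
racks, nodes_per_rack, gb_per_node = 72, 1024, 2
total_gb = racks * nodes_per_rack * gb_per_node
print(f"{total_gb} GB = {total_gb // 1024} TB")  # 147456 GB = 144 TB
```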

For those of you who really want to dive into the nitty-gritty tech specs regarding Jugene’s setup, here’s more info.

Installing Jugene

Looks like there was some cabling involved in the installation process. A LOT of it…

This is most definitely a bit more complicated than plugging in your home computer.

Source: The Jülich Supercomputing Centre.
Suggested further reading:
Ten of the coolest and most powerful supercomputers of all time


  1. soo, when does this become available in PC World lol.
    i would love one of these lol 😛 nowhere to put it though…
    shame, look forward to seeing it in PC World I guess lmao 😛
    i would be soo lucky eh?

  2. is it wrong if one feels high and a little turned on if told that the pics of towering tapered black boxes are actually of super computers?? and what if it gets stronger with more intimate pics, like pics of them getting installed and its private parts(compute nodes)?? 😐
    sigh. anyways. let me imagine jugene’s I/O and control planes.

  3. Perhaps they used 65nm chips because of the market shifting towards the 45 and 32nm fields.
    I’m sure when buying nearly 300,000 processors, older models (a year or two is old in the computer world, lol) run cheaper.

  4. There may be “better” chips now, but when it takes a year or two to research, design, and build, you use what is available, not what “might be”. Everything that “hits the street” is behind the technology curve. For example, look how many years 64-bit systems were marketed after the 64-bit chip was available.
    “Woulda, coulda, shoulda” doesn’t put product on the market. Putting a stake in the ground and going from there does.

  5. that thing must have a HUGE cooling system, i can’t imagine how much a coolant infrastructure that large would cost…

  6. The CPUs are chosen for a reason: they are cheap. BlueGenes go for many cheap (low-power) processors rather than high-end, powerful ones. One big reason is that it is easier to cool. One of the other top machines uses the Cell BE processors, basically the same thing as in PS3s, which is the opposite approach. It’s all pretty cool; visit the Top500 website to compare the systems. I got to use RPI’s BlueGene/L, 16,000 processors, though I only got access to about 1,200. They are fun to work on for sure.
    It’s crazy how fast these things become obsolete, especially when they take a year or so to build. RPI’s went from 7th fastest to 34th fastest computer in the world in 1 year.
    And yes, it plays Crysis, better than your computer does.

  7. Anyone know why the processors only run at 850 MHz? I thought we had chips that run four times as fast as that nowadays.

  8. Looks like they need to send this unit to Japan for miniaturization. I’d like mine to be no larger than a pack of smokes, and the power and heat problems might need a tweak as well.

  9. This reminds me of photos of ENIAC which also took up a room. And now a tiny computer like a PICOTux has 150x as much processing power and is the size of an RJ45 connector. I can barely imagine what the next 50 years are going to bring.

  10. Hey fools, they likely started designing the boards a long time ago, which is why the processor isn’t the very latest. Obviously this stuff is not the off-the-shelf common motherboard crap you use at home.

  11. Didn’t it occur to anybody that a cluster with some 300K cores and the processing power of “more than 50,000” PCs is kinda shitty?
    Or, phrased otherwise, that it has “kinda” high overhead?
    And that today it would be a lot more lucrative to use a bunch of disconnected nodes & properly distributed algorithms / data paths?
    Well, it might be some are unable / unwilling to do it properly, so they are probably the only ones to care.
    But hey, the biggest & fastest mainframe is here, everyone just drop to your knees and worship this overpriced, energy hungry pile of shit.

  12. What problems do these computers solve? How are they used? Do they compute how many cars can travel on a bridge without it falling in the water?

  13. phi-nix, June 14th, 2009 at 8:29 am “what the hell is it for???”
    They sell time on it. Say NASA wants to run some CPU-intensive tasks, interstellar modeling or something; NASA would pay them to run tests for 2 hours / 2 days / a week.

  14. 1,000 4870’s can do more processing than the 294,912 cores these guys are using.
    But to be fair that is 800,000 stream processors. 🙂

  15. Neat, In 15 years I will be seeing one that is just as good but the size of my laptop for $6000 at bestbuy.
    In 20 years I will have bought one just as good for $1000 and it will be like a pad of paper.
    In 30 years I will have something twice as good implanted into my arm.

  16. @ares,
    I’m no expert, but that kind of overhead is part of any supercomputer as far as I’m aware. The power and resources needed to manage those nodes are required whether they’re spread out or all in the same room.
    Not to mention, the overhead of having separate sites is probably way higher than any ‘waste’ here.

  17. I bet they’re kicking themselves they didn’t talk to you first _ck_.
    I’m no expert but this probably took a little while to design and build, during which technology inevitably advances…

  18. Just to let you know, it’s not petaflops/s; it should just be petaFLOPS, because FLOPS stands for floating-point operations per second. So the /s is actually redundant.

  19. That machine is PowerPC, not x86, so it will run neither Vista nor Crysis natively. That being said, the incredible power of the Jugene supercomputer might be enough to simulate an x86 machine running Vista and Crysis, but another OS would have to be underneath to run the VM. Of course, the GPU would have to be simulated as well. That would be an interesting test, to say the least.

  20. Oh please don’t tell me they use these for scientific calculations for real. They have these bullshit procedures and accounts and access requests for systems like this, but in reality the admins just encode pr0n DVDs on them.
    How things have changed over time: 10 years ago only millionaires could afford personal toys like this; today you can put together a 16-core lil power station for cheap, and we should thank Asia for that.

  21. Stupid – I just read a lot of nonsense here…
    Will it play Crysis, will it play Doom… does it run Windows?
    Come on dudes, are you really that stupid? Do you really think they will play shitty games on it – wasting horribly expensive time? It is for high-throughput computing projects, like the well-known SETI project (http://setiathome.berkeley.edu/) for example, or for one of the projects the World Community Grid (http://www.worldcommunitygrid.org/) has to offer.
    And honestly what the hell do you want with such a cluster?

  22. I think you CAN run Crysis on it. It doesn’t need a big video card; it can just cache everything through the normal RAM.
    No worries :-p

  23. In answer to the distro question: it runs its own stripped-down version of Linux. After all, it would be a shame if KDE crashed it.
    Oh, to all those pointing out how it could be better: you might want to read other sources about why it is as it is. The hardware is configured for a certain class of experiments. Distributed computing will be better for some situations, but not all programs can be broken up this way without huge overheads being introduced.

  24. The BlueGene (L and P) was actually upgraded a couple of times. The speed increased from 700 to 850 MHz if I remember correctly, and the RAM doubled from 500 MB to 1 GB and then to 2 GB.
    The cooling is actually just regular server-room cooling. (Most other supercomputers need liquid cooling. Check out the Earth Simulator…) The CPU speed was limited for cooling purposes.
    The BlueGene is not very good for applications like SETI@home, since SETI@home is embarrassingly parallel and requires no inter-processor communication during processing. BlueGene is designed with a very fast network that makes it more suitable for more complex applications.

  25. So, another supercomputer article without any reference to the operating system. If it ran Windows, I bet you would let us know all over the place.
    And no, it doesn’t run a “stripped down version of Linux” but “SuSE Linux Enterprise (SLES 10)”.
    91% of the top 500 supercomputers in the world run Linux today (www.top500.org)
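One of the comments above draws a useful distinction between embarrassingly parallel workloads (SETI@home-style, no communication during processing) and the tightly coupled jobs that BlueGene's fast interconnect targets. Here is a toy sketch of the embarrassingly parallel case (illustrative only, not Jugene code):

```python
from multiprocessing import Pool

# Each work item is scored independently -- no worker ever needs data
# from another worker, which is what "embarrassingly parallel" means.
def score(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    chunks = [range(i, i + 1000) for i in range(0, 4000, 1000)]
    with Pool(4) as pool:
        results = pool.map(score, chunks)  # zero inter-worker communication
    print(sum(results))

# By contrast, a simulation-style job must exchange boundary data between
# neighboring nodes at every step -- that per-step exchange is why BlueGene
# pairs modest CPUs with a very fast torus network.
```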
