Supercomputer update for the week of 6/18

Supercomputers were first introduced in the 1960s, designed primarily by Seymour Cray at Control Data Corporation (CDC). In the 1990s, machines with thousands of processors began to appear, and by the end of the 20th century, massively parallel supercomputers with tens of thousands of “off-the-shelf” processors were the norm.

Supercomputers generally aim for the maximum in capability computing rather than capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a single large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can, e.g. a very complex weather simulation application.

On the other hand, capacity computing is typically thought of as using efficient cost-effective computing power to solve a small number of somewhat large problems or a large number of small problems. Architectures that lend themselves to supporting many users for routine everyday tasks may have a lot of capacity but are not typically considered supercomputers, given that they do not solve a single very complex problem.

So now that you have some more background on supercomputing, let’s take a look at what the blogs are saying:


Meet Sequoia, the fastest supercomputer in the world

In the race to build the fastest computer in the world, America is back on top.

On Monday, a supercomputer designed by IBM for the National Nuclear Security Administration (NNSA) took the first spot on the Top 500, a list that comes out twice a year and ranks the 500 fastest computers on the planet.

It is the first time the U.S. has topped the list since November 2009.

The winning supercomputer is called Sequoia, and it is housed at the Lawrence Livermore National Laboratory in Livermore, Calif.

Sequoia will be used to build complex models that let scientists test the nation’s stockpile of nuclear weapons without having to do nuclear testing in the real world.


Is Intel’s Xeon Phi a Game Changer for Supercomputers?

Chipmaker Intel introduced a new processor on Monday at the International Supercomputing Conference held in Germany.

Known as the Xeon Phi, the chip is a coprocessor aimed at supercomputing workloads. It delivers one teraflop of performance while taking up only one PCIe slot, a figure Intel highlighted in a company infographic comparing the chip to the state of the art in 1997.
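To put "one teraflop in one PCIe slot" in context, a chip's theoretical peak is just cores × clock × floating-point operations per cycle. The figures below are illustrative assumptions roughly in line with the first-generation Xeon Phi (they are not from the article), but they show how a single card lands near the teraflop mark:

```python
# Theoretical peak throughput: cores × clock × FLOPs per cycle per core.
# All three figures are assumptions for illustration, roughly matching
# the first-generation Xeon Phi; the article does not give them.
cores = 60                 # assumed core count
clock_hz = 1.05e9          # assumed clock: 1.05 GHz
flops_per_cycle = 16       # assumed: 8-wide double-precision vectors with FMA

peak_flops = cores * clock_hz * flops_per_cycle
print(f"Theoretical peak: {peak_flops / 1e12:.2f} TFLOPS")  # ~1.01 TFLOPS
```

For comparison, the 1997 machine the infographic alludes to, ASCI Red, needed thousands of processors and an entire machine room to reach roughly the same sustained teraflop.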


Supercomputers Need Standard Shot Glass to Measure Out Juice

The biggest challenge in getting to the next level of supercomputer performance – Exascale – is the massive amount of electricity these systems will consume. On a smaller scale, energy consumption also inhibits HPC installations. The problem isn’t just getting enough power from the grid into the machine room; it’s also the cost of electricity when you’re guzzling it in such massive quantities.


VIDEO: At Poland’s ICM, Supercomputers Fuel a National Research Agenda

Poland is one of the fastest-growing economies in Europe right now, and business and government leaders are determined to stimulate growth through innovation. ICM, a research institute affiliated with the University of Warsaw, does its own research in everything from weather prediction to quantum computing but also provides computational power for other researchers throughout Poland. Here’s how ICM works:


Supercomputers push record revenues in HPC server market

The high performance computing (HPC) technical server market saw revenues climb in the first quarter of 2012, with even bigger growth expected for the year as supercomputer spending booms.

Revenues reached $2.4 billion in the first quarter of 2012, up 3.1 percent from $2.3 billion in the first quarter of 2011.

According to IDC, the HPC server market should also beat the yearly revenue record it set last year, growing 7.1 percent to reach $11 billion in total.
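The projected full-year figure can be sanity-checked with simple arithmetic: $11 billion after 7.1 percent growth implies last year's record was about $11B / 1.071. Both inputs come from the article; the computed base year is just derived from them:

```python
# Sanity-check the projection: $11B at 7.1% growth over last year's record
# implies that record was roughly 11 / 1.071 billion dollars.
projected_2012 = 11.0      # billions of dollars (from the article)
growth_rate = 0.071        # 7.1 percent (from the article)

implied_2011 = projected_2012 / (1 + growth_rate)
print(f"Implied 2011 revenue record: ${implied_2011:.2f}B")  # ≈ $10.27B
```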

