High Performance Computing

“The most constant difficulty in contriving the engine has arisen from the desire to reduce the time in which the calculations were executed to the shortest which is possible.” - Charles Babbage (1834)


I find this quote from Babbage, the 19th-century polymath, at once endearing, encouraging and amusing. Babbage developed the first mechanical device for performing calculations, i.e. a mechanical computer, which he named the Difference Engine. I like the quote and use it in talks because it illustrates that from the moment people created machines to calculate, they wanted those machines to calculate faster. Babbage’s Difference Engine was capable of one calculation every six seconds. Today, more than 180 years later, when our electronic machines can perform trillions of operations per second, we are preoccupied with the same endeavor: speed. How to coax it from our machines, how to use it and how to value it.

There is something intoxicating about speed. Fast cars, fast planes and fast computers have something in common: they are intricate, powerful and beautifully engineered devices, and there is a deep satisfaction in controlling them to harness their capability. It’s gratifying to make a computer perform to its specifications, and it’s getting more and more difficult to do so. Some of us have made a career of it in the discipline of high performance computing (HPC). When our applications run faster they produce information more quickly, allowing their human attendants to make more rapid and better-informed decisions.

There are several noteworthy trends in HPC that stand out in a review of the recent past. I recently gave a keynote talk on HPC in the upstream at the EAGE meeting in Dubai, where I made observations about HPC and discussed its impact on applications in the energy industry. Among the topics I covered were the continuing trend of on-processor parallelism, the emergence of new architectures, the growing difficulty of effective parallel computing, the lagging performance of legacy codes and the emergence of computational science as a discipline. Over the next few weeks I will present my thoughts on these points here as LinkedIn posts.

Dear Vincent, maybe we could have a talk about BeeGFS and the potential speed-up from this file system? It would be a pleasure to dive into this topic. All the best, Marco


From Fernanda Foertter’s comment: Speedup really means “Now we can do more.” To which I add: and we always want more, don’t we? Looking forward to the next posts.


Fast is almost never the end product. In fact it runs in tandem with resolution: as soon as you get something working, your next move is to increase resolution again. I’ve yet to meet someone who rested on the HPC sabbath, pleased with what they’ve achieved in terms of speedup. Speedup really means “Now we can do more.”

Let me add another example. I had the technology to do optimisation under uncertainty ten years ago, but hesitated. Why? Because I wasn’t sure how to present the results. You have to distinguish between the effect of subsurface uncertainties and the effect of the control parameters you are optimising. The compromise in the past has been to run independent optimisations on perhaps three different realisations; that could be understood. Recently I have done a full optimisation under uncertainty, with no limit on the number of subsurface models (paper coming soon at the Middle East meeting in Abu Dhabi, work done with Baker). The main result shows how the uncertainty S-curve shifts as you optimise. But if managers struggle to understand uncertainty, how are they going to understand a delta in uncertainty? The oil and gas industry is a very long way from a culture of risk analysis and decision-making tools. Yet an intelligent well design that accounts for water-breakthrough uncertainty is very valuable, especially in the current climate.
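The idea of optimising over all subsurface realisations at once, rather than optimising each realisation independently, can be sketched in a few lines. This is a toy illustration, not the actual workflow described above: the `npv` model, its parameters and the grid search are all hypothetical stand-ins for a reservoir simulator and a real optimiser. It shows the key point, that optimising the expected value across realisations shifts the whole distribution (the "S-curve") of outcomes.

```python
import random

random.seed(0)

# Hypothetical toy objective: value of a well design (a single scalar
# "control") under one subsurface realisation. Illustrative only.
def npv(control, realisation):
    return -(control - realisation) ** 2 + 10 * realisation

# An ensemble of subsurface realisations (here just perturbed scalars).
realisations = [random.gauss(5.0, 1.0) for _ in range(200)]

# Optimisation under uncertainty: maximise the *expected* value over the
# whole ensemble, not the value on any single realisation.
def expected_npv(control):
    return sum(npv(control, r) for r in realisations) / len(realisations)

controls = [i / 10 for i in range(0, 101)]   # grid search for clarity
best = max(controls, key=expected_npv)

# The "S-curve": the empirical distribution of outcomes across realisations,
# for a baseline design versus the robust-optimised design.
baseline = sorted(npv(2.0, r) for r in realisations)
optimised = sorted(npv(best, r) for r in realisations)
```

Plotting `baseline` and `optimised` as cumulative curves would show the optimised distribution shifted to the right, which is the "delta uncertainty" that is hard to communicate: the answer is not a single number but a shift of an entire curve.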


This is a multi-faceted and deep topic. Nigel Goodwin's point is well taken: understanding how to properly generate the data must come before understanding how to perform any kind of analysis. Disregard this and you have the classic garbage-in/garbage-out scenario. Michele (Mik) Isernia's comments are also insightful and certainly point in the direction HPC solutions should be headed. To me, this principle translates in our work into enabling workflows that happen during the day, with an engineer at their desk. The performance and technology stack of the current industry legacy simulator prevents any kind of in-situ visualization/workflow scenario. Some competitors are faster but require big iron (hundreds of CPU nodes in clusters) to get the work done in any reasonable amount of time. If you combine @Nigel's and @Mik's views, you suddenly need a simulator fast enough to run interactively in real engineering workflows. Neither the legacy industry simulator nor its competitors are anywhere close to what's required. ECHELON, however, has proven speed and accuracy capable of enabling such workflows on the desktop. With respect to tools: we are working on some transformational data analysis and visualization capabilities in-house as we speak. We demonstrated some of this capability at ATCE a few weeks ago in Houston, and there is a lot more coming in the future.


