Updated simulation tool benchmarking study

I presented a paper at the eSim conference in Hamilton, Ontario in May 2016 looking at simulation times across a range of computer types and model types, going back as far as kit from 2000. It identified a number of patterns users can exploit to increase the efficiency of their work. I have updated that study with fresh versions of ESP-r and EnergyPlus in a new matrix which includes (a scripting sketch follows the list):

a) nine computer variants (including virtual computers)

b) two models (a 3-zone, 40-surface lightweight model and a 13-zone, 435-surface high-mass model)

c) four and twenty timesteps per hour (i.e. 15-minute and 3-minute steps)

d) one-week, two-month, four-month and annual assessments

e) saving performance data hourly and at each timestep

f) saving a large set of performance data (e.g. 21 GB) or a subset

g) un-optimized vs optimized versions of the software

h) the impact of different solvers (in EnergyPlus)

i) pre-simulation tasks such as calculating view factors

j) post-processing tasks
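
As a rough illustration of how a matrix like this might be driven, here is a minimal Python sketch. It is not the study's actual tooling: the model names, timestep counts, period labels and the `run_sim` command are placeholders standing in for the real ESP-r (bps) or EnergyPlus command lines, each of which takes its own arguments.

```python
import itertools
import subprocess
import time

# Hypothetical benchmark matrix mirroring the dimensions above; values
# and command names are illustrative, not the study's actual set-up.
MODELS = ["3_zone_lightweight", "13_zone_high_mass"]
TIMESTEPS_PER_HOUR = [4, 20]
PERIODS = ["one_week", "two_month", "four_month", "annual"]

def run_case(model, steps, period):
    """Run one simulation and return its wall-clock time in seconds.
    'run_sim' is a placeholder for the tool's real command line."""
    start = time.perf_counter()
    subprocess.run(["run_sim", model, str(steps), period], check=True)
    return time.perf_counter() - start

# Walk every combination in the matrix and record the timing.
for model, steps, period in itertools.product(MODELS, TIMESTEPS_PER_HOUR, PERIODS):
    elapsed = run_case(model, steps, period)
    print(f"{model} {steps}/hr {period}: {elapsed:.1f} s")
```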

It also examines the impact of different working practices, such as the order of simulation assessments and data extraction, sequential vs parallel tasks (as sketched below), and the risk of tasks becoming disk-bound.
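
For the sequential-vs-parallel question, a minimal sketch of dispatching runs in parallel follows. Again, `run_sim` and the case list are hypothetical placeholders; the point of interest is the worker cap, since runs that save results at every timestep can saturate a shared disk well before they saturate the CPUs.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

# Illustrative cases only; each tuple stands in for one assessment.
CASES = [("3_zone_lightweight", "annual"), ("13_zone_high_mass", "annual")]

def run_case(case):
    model, period = case
    # 'run_sim' is a placeholder for the real ESP-r/EnergyPlus invocation.
    subprocess.run(["run_sim", model, period], check=True)

if __name__ == "__main__":
    # Capping max_workers below the machine's core count is one way to
    # leave headroom for the disk and reduce the risk of the whole batch
    # becoming disk-bound rather than CPU-bound.
    with ProcessPoolExecutor(max_workers=2) as pool:
        list(pool.map(run_case, CASES))  # blocks until all runs finish
```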

Check it out at: <http://contrasting.no-ip.org/ESP-r_tour/timings.html>. I will also be updating the simulation tool comparison website (which just passed 400 visitors!)

<http://contrasting.no-ip.org/Contrast/Index.html>

with these benchmark tables in the near future.

