Benchmark Trilinear interpolation

I ran a performance benchmark on a function that takes a lot of CPU time in my game: a trilinear interpolation function used while generating the voxel world.
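The post links to the source rather than showing it inline, but for context, a typical trilinear interpolation over a flat voxel array looks like this C++ sketch (the names `trilinear` and `lerp`, and the flat `z*n*n + y*n + x` indexing, are assumptions for illustration, not the post's actual code):

```cpp
#include <cstddef>

// Illustrative sketch: trilinear interpolation over a dense n*n*n voxel
// grid stored as a flat array indexed [z*n*n + y*n + x]. Not the post's
// actual implementation.
inline float lerp(float a, float b, float t) { return a + (b - a) * t; }

float trilinear(const float* voxels, int n, float x, float y, float z) {
    // Integer corner of the containing cell and fractional offsets inside it.
    int x0 = (int)x, y0 = (int)y, z0 = (int)z;
    int x1 = x0 + 1, y1 = y0 + 1, z1 = z0 + 1;
    float tx = x - x0, ty = y - y0, tz = z - z0;

    auto at = [&](int xi, int yi, int zi) {
        return voxels[(std::size_t)zi * n * n + (std::size_t)yi * n + xi];
    };

    // Collapse the 8 corner samples: first along x, then y, then z.
    float c00 = lerp(at(x0, y0, z0), at(x1, y0, z0), tx);
    float c10 = lerp(at(x0, y1, z0), at(x1, y1, z0), tx);
    float c01 = lerp(at(x0, y0, z1), at(x1, y0, z1), tx);
    float c11 = lerp(at(x0, y1, z1), at(x1, y1, z1), tx);
    return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
}
```

Running this over every cell of a 256x256x256 grid is what makes the inner loop so compiler-sensitive: it is pure float arithmetic plus array indexing, which is exactly where -O2 vs -O3 differences show up.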

Results (x86) for a 256x256x256 voxel array on a Core i7 at 3.60 GHz:

  • BlitzMax (GCC -O3 optimization) -> 112 ms
  • C++ (VS2015) -> 203 ms
  • C# .Net Native  -> 203 ms
  • BlitzMax (GCC -O2 optimization) -> 215 ms
  • C# .NetCore v2.0 -> 281 ms
  • C# .NetFramework v4.6.1 -> 422 ms
  • C# Mono 4.4 (-O=All) -> 625 ms

Results (ARM) for a 256x256x256 voxel array on a Raspberry Pi 2:

  • C++ (GCC -O3 optimization) -> 2762 ms
  • C# Mono 4.6.1 (-O=all -O=float32) -> 10702 ms

.NET Native for Windows Universal applications is as fast as C++, and the open-source .NET Core is very promising…

GCC is faster than the Microsoft compiler in my benchmark, which is good news for BlitzMax.

Source Code Here


2 thoughts on “Benchmark Trilinear interpolation”

  1. Hi 🙂
    The default optimization for BlitzMax NG is -O2 (yes, it probably needs changing).
    You may find you get a somewhat improved benchmark if you use -O3 instead (and I’d expect the same for your C++ benchmark).

    You can change the default optimization level by editing make.bmk in the bin directory (search for -O2).
    Or you can override the optimization per application by creating a .bmk file in your app directory and adding the line “setccopt optimization -O3” (without quotes).

