Challenging OroCommerce: A performance test

I'm usually not a fan of "unbox & test" content. You know, when somebody buys a shiny new toy and posts a video or an article about it on the net to tell the world of its greatness. But that's not true… when it comes to new software reviews 😃

So you can imagine that for a Magento enthusiast like me, when I first heard of OroCommerce, I was kind of curious. I went straight to Google to find testing videos or feedback about this new B2B platform that's been all the rage lately. Unfortunately, I couldn't find any performance benchmarks on OroCommerce, so I decided to take some time to perform extensive load testing and performance tuning with the provided demo store, and to publish the results in an objective way so others could benefit from them.

I personally had a lot of fun doing this (did I ever mention that load testing is one of my favourite hobbies?), and the tests yielded really interesting results and insights about OroCommerce's performance and scalability.


OroCommerce's performance straight out of the box

I started by spinning up an AWS instance (more info about my setup in the Appendix below), and installed all the required web server components and OroCommerce from source (again, all the details are in the appendix at the end of this blog post). A few minutes later, just like that, I had a running website usable for load testing. Pretty neat.

I was curious about what the performance would be with the basic system configuration, so I performed a first load test to see how it went.

  Load test OroCommerce in basic configuration

The website handled a maximum of 384 hits/min (around 6.4 hits/sec), and we can clearly see in the illustration above that the loading times aren't impacted much until we reach the concurrency limit (around T+7’). After that point, the load times start to increase linearly with the number of concurrent requests, which would translate into a degraded user experience on a real-world ecommerce store.

At T+21’, the website wasn't able to serve the 4 configured pages in less than 60 seconds, which explains the red zone at the end of the test.

Those results were interesting, but I mostly wanted to know what the bottleneck was. I noticed that when we reached the concurrency limit, the CPU usage was nearing 100% and the load average per core was around 2 (which is logical, because I configured PHP-FPM to spawn twice as many workers as the number of CPU cores available on the virtual machine).
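As a rough sketch, that worker count can be derived from the core count (the 2× factor is my own choice for this test, not a PHP-FPM default; on a live server you would read the core count from `nproc`):

```shell
# Sketch: derive the PHP-FPM worker count used in this test
# (two workers per core; the test instance has 4 vCPUs).
CORES=4                        # on a live server: CORES=$(nproc)
WORKERS=$(( CORES * 2 ))
printf 'pm = static\npm.max_children = %d\n' "$WORKERS"
```

With a static pool, all workers are spawned up front, which makes the concurrency ceiling easy to reason about during a load test.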

But now comes the good part. I took a look at the application chart of my homepage to see where the time was spent while the website was handling high loads.

  PHP time usage

As you can see above, this is very interesting: the PHP processing time did not change much as the load increased (it rose a bit once we reached the concurrency limit, because the CPU was overloaded). Only the time spent outside PHP was impacted, in a fairly linear fashion: in this case, it represents the time spent in the queue, waiting for a PHP-FPM worker to become available.

This means the application itself was not the bottleneck, and it would have been able to handle a lot more hits/sec without a performance impact if I had used a more powerful server. Truly great news for OroCommerce's scalability!

Before going any further on scalability testing, I decided to optimize my system configuration to get the most I could from my setup.


Optimizing Opcache configuration

In many PHP applications, the biggest quick win for performance optimization is to tweak the Opcache configuration. Opcache is a PHP extension that keeps compiled PHP scripts in memory as opcodes (a machine-optimized representation of PHP instructions) between requests, which greatly reduces the time spent reading the files from disk and parsing them.

Tuning the Opcache configuration often yields a performance boost of 20 to 30%, but here it reduced the average load time from ~400 ms to ~230 ms per page, a ~40% performance gain! Please see the appendix if you want to look at the settings I changed to get that boost.

  Performance results after Opcache configuration
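For reference, the four overrides I used (listed in full in the appendix) can be dropped into a dedicated ini file loaded by PHP-FPM. The target path below is an assumption; adjust it to your installation:

```shell
# Sketch: write the opcache overrides used in these tests to a conf.d file.
# The path here is hypothetical; on Debian with dotdeb PHP 7.0 it would be
# something like /etc/php/7.0/fpm/conf.d/99-opcache-tuning.ini.
INI=99-opcache-tuning.ini
cat > "$INI" <<'EOF'
opcache.memory_consumption=512
opcache.validate_timestamps=0
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=30000
EOF
grep -c '^opcache\.' "$INI"    # should print 4
```

One caveat worth knowing: with `opcache.validate_timestamps=0`, PHP no longer checks files for changes, so PHP-FPM must be reloaded after every deployment for code changes to take effect.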

I didn't expect to gain so much performance from these settings. So it was the perfect opportunity for me to profile the application and do my homework on the Symfony2 framework (OroCommerce is built on top of Symfony2).

It turns out Symfony uses the Twig templating engine to render pages. Behind the scenes, Twig compiles the template files into intermediate PHP files, which are cached and executed to render the page. Those intermediate cache files benefit hugely from opcode caching. Also, the OroCommerce application is quite large and composed of many PHP files, so its performance benefits greatly from this caching system.

And then it was time to see how well it performed, under higher load, by running a new load test (yay!)!

  Load test OroCommerce after optimization

This time I reached 717 hits/min (that’s ~12 hits/sec), which means I nearly doubled my hosting capacity!

The other metrics were very similar to the first load test and my conclusion remains the same: the application still handles the load very well, and the server compute capacity is still the bottleneck.

After that, I tried further system configuration optimizations which didn't prove to grant better performance, but are still interesting.


Other optimizations I tried


I continued the system configuration optimizations by tweaking the MySQL settings a bit, since the defaults are quite low for production setups on today’s hardware. I basically increased the amount of memory MySQL was allowed to use (again, see details in the appendix).

It appeared that those modifications had no impact at all, but since the bottleneck was the CPU, I’m pretty sure I wasn’t generating enough load on MySQL to see the benefits (MySQL had to handle around ~500 queries/sec at most, which is pretty low). I’ll definitely work on this in my next blog post about OroCommerce scalability. ;)
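For completeness, the overrides I tried (the exact values are in the appendix) can be written as a drop-in cnf fragment. The path is an assumption; on Debian it would typically live under /etc/mysql/conf.d/:

```shell
# Sketch: the MySQL overrides tried during the tests, as a drop-in fragment.
# The filename/location is hypothetical; adjust to your MySQL installation.
CNF=oro-tuning.cnf
cat > "$CNF" <<'EOF'
[mysqld]
skip-name-resolve
query_cache_size=256M
innodb_buffer_pool_size=8G
innodb_log_file_size=2G
innodb_flush_log_at_trx_commit=2
EOF
grep -c '^' "$CNF"             # should print 6 (section header + 5 settings)
```

Note that on MySQL 5.6, changing `innodb_log_file_size` requires a clean shutdown and removal of the old ib_logfile* files before restarting, otherwise the server will refuse to start.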


I also tried using Redis as a cache backend for OroCommerce instead of the standard filesystem cache. I configured the application to use the phpredis extension and a Unix socket to communicate with Redis, which are known to be the best-performing options available.

I was quite surprised by the results, but measurements don’t lie! The setup with Redis as the cache backend happened to perform slightly worse than the standard filesystem cache (which is quite unusual compared to most setups I’ve encountered on other ecommerce applications).

I didn't have the time to investigate the cause, but I came up with several guesses:

  1. The server has plenty of free memory, so it’s possible the operating system kept the whole set of cache files in RAM for fast access, which is definitely faster than Redis because it avoids the overhead of the socket connection.
  2. By profiling the application and analyzing the requests received by the Redis server, I noticed many “EXISTS” queries which might not be needed, since “GET” will return a nil value if the key doesn’t exist (there are valid use cases, though). So it's possible this part could be improved by modifying the code directly.

If I find anything interesting, be sure that I’ll write a blog post on this subject. ;)

In the meantime, I’ll be happy to hear about your experience with Redis caching and Symfony. :)


There is another architecture optimization that often results in overwhelming performance boosts: full page caching.

The state-of-the-art middleware for this is Varnish. It stands in front of the application and caches responses when possible. However, such a setup seems impossible (or at least totally ineffective) without making changes to the application code.

Unfortunately, it seems that OroCommerce doesn’t provide support for Varnish right now, but I do hope such a setup will be made available in the future, and I’ll be happy to conduct performance testing on it. :)

Conclusion

OroCommerce does provide decent performance from the get-go, but still highly benefits from a good Opcache configuration. With that configuration, the average page response time falls below 300 ms, which is the sweet spot that maximizes user experience and, thus, conversion.

During the tests, the application stayed stable (no errors were returned at all) and the performance degradation was linear and directly coupled to the number of concurrent simulated users, which is good news for OroCommerce's stability.

Finally, I noticed that the application didn’t suffer from scalability issues with this setup, since the bottleneck was the server's CPU capacity and the PHP processing time stayed stable during the load tests.

This is definitely something I will explore in a later blog post about OroCommerce scalability. So stay tuned!

If you've conducted similar performance analysis on OroCommerce that you'd like to share, or if you want to discuss the results, feel free to leave me a comment!

Appendix

For those who want to reproduce the performance tests described here, here is the exact setup I used.

For starters, I conducted all my load tests using QUANTA's monitoring tool. It's a SaaS app designed to monitor and manage the performance of ecommerce websites by simulating the behavior of users going through the classic sales funnel. The load testing feature of QUANTA adds more and more virtual users until the breaking point is reached.

The whole infrastructure consisted of a single AWS m4.xlarge instance (4 vCPUs / 16 GB RAM / standard EBS SSD storage).

I was running the demo website on Debian Jessie with:

  • PHP 7 (7.0.18-1~dotdeb+8.1) installed from DotDeb repositories
  • PHP-FPM, configured with pm = static and pm.max_children = 8
  • MySQL 5.6 from official MySQL repository
  • Nginx 1.10.3 from the standard debian repositories

I followed installation guidelines from

I ran the command "composer dump-autoload -oa" to optimize class autoloading. Both the OroCommerce crontab and the message queue consumer were running during the tests.

When I optimized opcache configuration I used the following config parameters:

  • opcache.memory_consumption=512
  • opcache.validate_timestamps=0
  • opcache.interned_strings_buffer=16
  • opcache.max_accelerated_files=30000

(The max accelerated files setting was chosen roughly, by counting the PHP files in the application root. I checked there was plenty of free space before running the tests.)

When I tweaked the MySQL config, I used the following config parameters:

  • skip-name-resolve
  • query_cache_size=256M
  • innodb_buffer_pool_size=8G
  • innodb_log_file_size=2G
  • innodb_flush_log_at_trx_commit=2

Here are the URLs I configured for the load test scenario:

  • Home: /
  • Category: /new-arrivals
  • Sub Category: /new-arrivals/lighting-products
  • Product: /new-arrivals/lighting-products/_item/500-watt-work-light
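For a quick sanity check of the scenario outside of QUANTA, the four URLs above can be assembled and fetched with curl. The BASE host below is a placeholder; point it at your own demo store:

```shell
# Sketch: the four scenario URLs, assembled against a placeholder base host.
BASE="http://localhost"        # hypothetical; replace with your demo store host
for path in "/" "/new-arrivals" "/new-arrivals/lighting-products" \
            "/new-arrivals/lighting-products/_item/500-watt-work-light"; do
  echo "${BASE}${path}"
  # to actually time a page, one could run something like:
  # curl -o /dev/null -s -w '%{http_code} %{time_total}s\n' "${BASE}${path}"
done
```

This obviously doesn't replace a real load test with ramping virtual users, but it is a cheap way to verify that all four pages respond before launching a longer run.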