Part 1 Recap
The previous post can be read here, but to summarize: we ran what is basically the same code through both frameworks, out of the box, which I think is a pretty good representation of how a lot of people deploy their projects (to be perfectly clear, I didn't say it was right!). Slap some code together, do some things and bam! The code I am using is a simple controller method that queries a MySQL database to return all posts from the table, sorts those results, then passes them to the view. In the view, I use each framework's native template engine to create the partials and master layout pages. We make a single method call in the view, which we could all agree is bad practice, but it really has no impact (I don't think, as of yet) on performance. So, bad practice... slap on the wrist! Basically, I am loading the index page of this site.
So, that was the test. The results of said test were:
- Laravel: 961 function calls, performed in 2050ms
- Phalcon: 57 function calls, performed in 70ms
To be honest, I really felt bad about the performance of Laravel, so I reran the tests to make sure that something wasn't running on my laptop at the time. Upon re-running the tests, performance times did drop to 961 function calls performed in a range of 680ms to 850ms. So, clearly something was going on with my laptop during the first test. I reloaded the page a few more times, each time reloading WebGrind. From this point forward, at least three page loads will be performed after each change.
Part 2 - The Real Work Begins
Now, to get to the reason for Part 2 of this series. I know I said that we would be running the next tests with Laravel on top of HHVM in a Vagrant box, but Maks Surguy contacted me on Twitter and asked a very good question:
"@unisys12 cool but did you run 'php artisan optimize' before running these tests?" - Maks Surguy (@msurguy), April 14, 2014
Honestly, my answer was no. Why? Even better question. I had completely forgotten about that command. So I quickly checked the Laravel docs and found... nothing. WTF! So I turned to every hobbyist's best friend, Google, and quickly found this article on Optimizing for Production with Laravel, by Andreas. Checking the Composer website also yielded some good info on the topic.
For ‘php artisan optimize’ to work in your Laravel project:
- First, set debugging to false in app/config/app.php.
- Next, from the terminal, in the root of your project, run 'php artisan optimize'.
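For reference, the first step is a single entry in the configuration array. A minimal sketch of what app/config/app.php looks like in a Laravel 4 project, with everything but the relevant line omitted:

```php
<?php
// app/config/app.php (Laravel 4) - only the relevant entry shown

return array(

    // Turn debugging off before optimizing for production.
    'debug' => false,

    // ... the rest of the configuration array ...
);
```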
You should see a message saying that it is dumping the autoloader, followed by a message stating that it is generating an optimized class map. If you see these messages, you're all good. At this point, Laravel uses Composer to perform some voodoo behind the scenes. It takes the flexible and pretty PSR-0 autoloader generated by Composer and converts it to a 'classmap', which is faster. Andreas does a great job explaining how autoloading works in Laravel and why the above-mentioned command is so important, so I will just quote him.
Andreas (Optimizing for production with Laravel 4)
PSR-0 is nice for development because it lets you add a class and it will instantly be found by the autoloader - as long as you've namespaced it correctly and named the file correctly, of course. In production, you usually don't want to use PSR-0 autoloading as it has a bit of overhead. Use composer dump-autoload --optimize to re-compile all your PSR-0 autoloading rules into classmap rules, which are faster.
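To make the difference concrete, here is a rough sketch of the two lookup strategies - this is NOT Composer's actual code, just an illustration. PSR-0 has to transform the class name into a file path and then probe the filesystem, while a classmap is a single array lookup (the example class and paths are made up):

```php
<?php
// Rough illustration only - not Composer's real implementation.

// PSR-0 style: derive a path from the class name, then hit the disk.
function psr0Path($class, $baseDir)
{
    // Namespace separators become directory separators,
    // then '.php' is appended (simplified; real PSR-0 also
    // treats underscores in the class name specially).
    $path = str_replace('\\', '/', $class) . '.php';
    return $baseDir . '/' . $path; // still needs a file_exists() probe
}

// Classmap style: the class name maps straight to a file - one lookup.
$classMap = array(
    'JeremyKendall\\Password\\PasswordValidator'
        => '/vendor/jeremykendall/password-validator/src/JeremyKendall/Password/PasswordValidator.php',
);

echo psr0Path('JeremyKendall\\Password\\PasswordValidator', '/vendor/src'), "\n";
echo $classMap['JeremyKendall\\Password\\PasswordValidator'], "\n";
```

The classmap wins because it skips the string transformation and the filesystem probing entirely, at the cost of having to regenerate the map whenever classes are added - which is exactly why it suits production rather than development.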
Performing the above command shaved a whopping 200ms off the average page load times. Tonight, after running the base tests again (since this laptop seems to be a very volatile thing), base runs were in the 680ms - 850ms range. I don't care what framework you are working with, 200ms is a big deal. No question! That puts the project now running in the high 400ms to low 500ms range. Either way, awesome results. Combined with HHVM, she might just have a fighting chance!
Employing this in Phalcon
I mentioned in the previous post that I was manually loading my packages in the Phalcon autoloader, as shown here:
$loader->registerNamespaces(array(
    'JeremyKendall\Password' => __DIR__ . '/../../vendor/jeremykendall/password-validator/src/JeremyKendall/Password/',
    'JeremyKendall\Password\Decorator' => __DIR__ . '/../../vendor/jeremykendall/password-validator/src/JeremyKendall/Password/Decorator',
    'JeremyKendall\Password\Storage' => __DIR__ . '/../../vendor/jeremykendall/password-validator/src/JeremyKendall/Password/Storage'
))->register();

$loader->registerClasses(array(
    'ParseDown' => __DIR__ . '/../../vendor/erusev/parsedown/Parsedown.php'
))->register();
For large packages, or lots of packages in a project, this could turn into a pain fast! "How do we solve this?", you ask. Very valid question.
The easiest method is to require "vendor/autoload.php" in the project's index.php file and just comment out all of the above code. From my tests, this added about 21ms to overall execution and generated an additional 14 function calls out of the box. But what if we run the same --optimize option to ‘dump-autoload’ on our Phalcon app? Funny you should ask.
- Adds some 30-odd function calls (tonight's base was 59 function calls)
- Adds only 1ms to overall execution time (tonight's base was 73ms - 89ms)
Holy cow! That's not too darn bad! We just brought the simplicity and ease of use that Composer does so well to our Phalcon project, and did it in a way that did not hurt performance in the slightest. Heck, we increased the performance of both apps! Win! Win!
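The swap described above amounts to a one-liner in the Phalcon bootstrap file. A sketch, assuming a typical public/index.php layout (the exact path to vendor/ will depend on your project structure):

```php
<?php
// public/index.php - hypothetical Phalcon bootstrap excerpt

// Let Composer's generated autoloader handle the vendor packages...
require __DIR__ . '/../vendor/autoload.php';

// ...and comment out the manual registrations shown earlier:
// $loader->registerNamespaces(array( /* ... */ ))->register();
// $loader->registerClasses(array( /* ... */ ))->register();
```

Running composer dump-autoload --optimize afterwards regenerates that vendor/autoload.php with classmap rules, which is where the near-zero overhead above comes from.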
But wait, there’s more!
There is even more I could do to speed this up. Since working with the Model is the most intensive part in both instances, I could simply employ caching, which would probably cut the times even further. Awesome! That will be another post with another round of tests, so... this is turning into a "How do I incrementally improve the performance of my app?" sorta thing, so... we will run with it.
Yeah, it's coming. Believe it or not... I really hate to say this, but I am having a rough go of it. I have built a few Vagrant boxes, but server-adminy stuff is not my cup of tea. I am learning, though, so I will keep pushing forward. PuPHPet is having issues until v3.5.1RC1 is done, and ProtoBox just does not like me, I think. I managed to build a few things, but no matter what I changed in the hosts file, I could not access my vhosts. So I am looking into that. Think I am just gonna have to take a big bite, put my big boy britches on and learn how to kick some server butt. Until then though... cans of whoop-ass, served on a cold plate of humiliation, will be whipped up daily!
Thanks again to Maks Surguy for being a real stand-up guy and giving me some helpful advice. Y'all go check his site out; it is loaded with tons of info on web dev. Hell, I have been visiting his main blog for at least two years now. I cannot tell you how much I have learned from him. Oh, and he has a killer site called Laravel Tips & Tricks, where the Laravel community can share tricks and stuff. A great place to learn even more about Laravel.