Batch resolving of promises

2) Batch resolving of multiple promises

When I say "relationships", I'm trying to approach this from a data-driven standpoint, not from program flow per se. I don't want to use then() to trigger the resolving of promises, and I don't want to keep track of the promises in a sea of closures as they resolve. A Promise containing more Promises, which in turn contain more Promises, is a good way to specify the relationships between promises. It's not quite that simple: you still need some program flow when creating promises, but this is nothing that can't be solved with a getPromises() method and some recursion. A Promise defines a set of Promises to be resolved; a news item, for example, would define a comment count promise and return it here.

We stray from the traditional use of Promises here. Using Promise objects in this way gives us the ability to batch-resolve promises, an ability you don't get from the common Promise programming patterns, while still maintaining the data relationships between them.

All that is left to do is resolve the promises in the final data tree. My approach was to traverse the tree and collect each promise, keyed by class name, into a final list. This made it possible to resolve a list of promises using a resolveAll($promises) method defined on the specific promise class. This is the batching function: it takes all the promises of the same type and resolves them with one call, taking care of fetching the data and resolving the promises. In MySQL you would do this with a single query over the whole set of IDs (an IN() clause), or you could use memcache::get with an array of keys, or redis::mget.

You can check out my attempt at a solution here: https://github.com/titpetric/research-projects/tree/master/php-resolve

So, while the landing page would still need significant refactoring, this is a step in the right direction. The resulting data tree is resolved with no data duplication and the maximum amount of batching.
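To make the getPromises() / resolveAll() mechanics concrete, here is a minimal sketch of the idea. All names here (Promise, collectByClass, resolveTree) are hypothetical illustrations, not code from the linked repository:

```php
<?php
// A promise that can declare child promises, plus helpers that collect a
// promise tree into per-class batches so each class resolves in one call.

abstract class Promise
{
    /** @var mixed The resolved value, filled in by resolveAll(). */
    public $value = null;

    /** Child promises this promise depends on (none by default). */
    public function getPromises(): array
    {
        return [];
    }

    /** Batch-resolve all promises of this class with one data-source call. */
    abstract public static function resolveAll(array $promises): void;
}

// Walk the promise tree recursively and group every promise by class name.
function collectByClass(Promise $root, array &$batches = []): array
{
    $batches[get_class($root)][] = $root;
    foreach ($root->getPromises() as $child) {
        collectByClass($child, $batches);
    }
    return $batches;
}

// Resolve the whole tree: one resolveAll() call per promise class.
function resolveTree(Promise $root): void
{
    foreach (collectByClass($root) as $class => $promises) {
        $class::resolveAll($promises);
    }
}
```

However deep the tree is, each promise class gets exactly one resolveAll() call, which is where the batching happens.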
Whatever data source you use, chances are this approach only adds one SQL query to get all the results. And optimizing one SQL query is much easier than having to optimize 20 of them across your complete application stack. It is also nice to reduce the number of SQL queries you're working with in case you need to implement sharding, move the database, or make some other data management changes.

Additional thoughts: the approach is sequential, and you're given your data tree directly after execution of the resolve / resolveAll calls. There is an opportunity to fetch data asynchronously, depending on the source of your data. If you're consuming API responses over HTTP, SQL queries over MySQL, or any kind of data over a non-blocking connection, the resolving could be adapted to take advantage of this.

Fetching the data in such a way is a nice optimization, but it needs to be implemented across your complete MVC solution to really reap the benefits. The goal is to come as close as possible to complete coverage, so that none of your data calls get duplicated. Some thought needs to be put into how your MVC framework can live with this data model, and where it should be avoided. The thing to keep in mind is that this is basically an efficient model for fetching data while keeping the relationships between the data. It is somewhat of a superset of DAO / DAL logic, since it approaches the data from a global viewpoint, not from the viewpoint of a specific data structure.

P.S. A significant pitfall here is also the PHP engine. I'm sure the performance could increase dramatically if this was running on a JVM. While the benchmark is not bad, the 95th percentile shows significant overhead in the initial runs, before PHP does some of its pre-allocation magic to speed things up.
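The "one SQL query for all results" claim can be sketched with a concrete batched resolver. CommentCountPromise, the comments table, and the PDO handle are hypothetical here, but this is the shape of it: N promises, one IN() query:

```php
<?php
// Hypothetical batched resolver: all comment-count promises collected from
// the tree are satisfied with a single IN() query, instead of one query
// per news item.

class CommentCountPromise
{
    public $newsId;
    public $value = 0;

    public function __construct(int $newsId)
    {
        $this->newsId = $newsId;
    }

    /** @param CommentCountPromise[] $promises */
    public static function resolveAll(array $promises, PDO $db): void
    {
        // One placeholder per distinct news ID.
        $ids = array_values(array_unique(array_map(
            function ($p) { return $p->newsId; },
            $promises
        )));
        $in = implode(',', array_fill(0, count($ids), '?'));

        $stmt = $db->prepare(
            "SELECT news_id, COUNT(*) AS cnt
               FROM comments
              WHERE news_id IN ($in)
              GROUP BY news_id"
        );
        $stmt->execute($ids);

        // news_id => cnt map; promises with no comments resolve to 0.
        $counts = $stmt->fetchAll(PDO::FETCH_KEY_PAIR);
        foreach ($promises as $p) {
            $p->value = isset($counts[$p->newsId])
                ? (int) $counts[$p->newsId]
                : 0;
        }
    }
}
```

With memcache or Redis the body changes to a single multi-key get (memcache::get with an array of keys, or redis::mget), but the contract stays the same: the whole batch resolves in one round trip.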