Spark DataGroup Object Pooling
Hey,
I've created a Tree component extending the Spark List, and it was so easy! Love the SkinPart architecture.
Now I'm facing a big performance issue.
Every time I open a branch of the tree, the DataGroup has to create the item renderers from scratch, even though I have "useVirtualLayout = true".
This is because "useVirtualLayout" only works if:
a) all of the item renderers are the same type (i.e. there is no itemRendererFunction), and
b) there are more items than fit in the visible space.
My tree a) has multiple types of renderers, and b) has more than enough space to fit everything, so I get none of the benefits of the built-in renderer caching.
I would like renderers to be cached even when there is excess visible space. For example: if I open the root node and it creates 3 child renderers, and there is room for 20, then when I close it those 3 renderers should still be saved in the "freeRenderers" cache, so the next time I open it they can simply be pop()'d off the stack.
In addition, I would like to be able to cache node renderers that are of different types.
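To make the idea concrete, here is a minimal sketch (hypothetical class and method names, not part of the SDK) of a pool keyed by renderer class, so renderers of different types can be cached side by side instead of in a single freeRenderers stack:

```actionscript
package
{
    import flash.utils.Dictionary;
    import mx.core.IVisualElement;

    public class RendererPool
    {
        // Maps a renderer class to a stack (Array) of free instances.
        private var free:Dictionary = new Dictionary();

        // Return a cached renderer of the given class, or create a new one.
        public function checkOut(type:Class):IVisualElement
        {
            var stack:Array = free[type];
            if (stack && stack.length > 0)
                return stack.pop();
            return new type();
        }

        // Put a renderer back in the pool when its branch is closed.
        public function checkIn(renderer:IVisualElement):void
        {
            var type:Class = Object(renderer).constructor as Class;
            var stack:Array = free[type];
            if (!stack)
                free[type] = stack = [];
            stack.push(renderer);
        }
    }
}
```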
To accomplish this, I would need a more robust object-pool solution than the built-in DataGroup "freeRenderers" stack. I'm wondering: would Adobe like to create such a system? You could define something like a "cachePolicy" property on the DataGroup, where "cachePolicy = new CachePolicy()", and the CachePolicy class defines an array of CacheItems like:
- CacheItem
-- class factory name
-- cache number (how many to store)
That would make it very easy to customize how the DataGroup caches renderers. By default it would behave exactly as it does now; if you supplied your own CachePolicy object, you'd be responsible for clearing it manually, which wouldn't be a hassle at all.
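A rough sketch of what those classes might look like (CachePolicy, CacheItem, and their members are all hypothetical names from this proposal, not existing SDK API):

```actionscript
public class CacheItem
{
    public var rendererClass:Class;   // class factory for this renderer type
    public var maxCached:int;         // cache number: how many to store

    public function CacheItem(rendererClass:Class, maxCached:int)
    {
        this.rendererClass = rendererClass;
        this.maxCached = maxCached;
    }
}

public class CachePolicy
{
    public var items:Array = [];      // Array of CacheItem

    public function CachePolicy(items:Array = null)
    {
        this.items = items || [];
    }

    // Look up the cache limit for a given renderer class (0 = don't pool).
    public function limitFor(type:Class):int
    {
        for each (var item:CacheItem in items)
            if (item.rendererClass == type)
                return item.maxCached;
        return 0;
    }
}
```

Usage might then look like (LeafRenderer and BranchRenderer being whatever renderer classes the tree uses): dataGroup.cachePolicy = new CachePolicy([new CacheItem(LeafRenderer, 20), new CacheItem(BranchRenderer, 10)]).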
Any ideas or better solutions on this front?
Thanks,
Lance
