setVector and GPU render mode
Hello,
If I use the setVector method on a BitmapData in GPU render mode on a mobile device, it is very slow.
This changes if the render mode is CPU: the method is fast enough.
The problem is that in CPU render mode the app I'm developing becomes very slow overall, while on GPU it works great.
setPixel32 is even slower; even lock/unlock doesn't help.
Where does the problem lie? How can I bypass this, since setPixel32 or setVector is a must-have when working with bitmaps?
UPDATE:
I've tested a lot, and it seems the problem lies not with the setVector method but with copying transparent pixels.
You can find the code here, since I've asked the question on Stack Overflow too: http://stackoverflow.com/questions/21827049/gpu-vs-cpu-render-mode-adobe-air
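For reference, here is a minimal sketch of roughly the kind of erase I'm doing (the real code is in the linked question; canvas, brushX/brushY/brushW/brushH and brushPixels are just placeholder names):

// canvas is the large BitmapData being edited.
var rect:Rectangle = new Rectangle(brushX, brushY, brushW, brushH);
canvas.lock();
// Write a block of transparent ARGB values in one call instead of many setPixel32 calls.
canvas.setVector(rect, brushPixels); // brushPixels: Vector.<uint> of length brushW * brushH
canvas.unlock(rect);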
The difference may be that when you upload new data to a BitmapData in CPU mode, it doesn't need to be uploaded to the GPU. In GPU mode, the data first goes through the CPU and is then uploaded to the GPU.
But CPU rendering is much slower than GPU.
That may be so, but I'm stuck. I need GPU rendering, but I can't erase a brush stroke from a BitmapData with it because it's so slow, while on CPU there's no lag.
Maybe there are some steps I'm missing or a better workaround; that's why I'm curious.
First, let me know your AIR version. Is it the latest one from Jan 28 on labs.adobe.com?
I ask everyone, because only 2 out of 5 use the updated SDK.
The AIR version is 4.
The problem shows up with bigger base bitmaps, like 1400x1400.
If you change a single pixel in a large bitmap using GPU mode, the whole image has to be re-cached. If you’re continually doing that sort of thing, you can run out of GPU memory quite quickly.
If you're using CPU, and lots of vectors, the shape you are drawing will take a long time to draw. That can be solved by setting cacheAsBitmap after you have finished making changes.
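For example (a minimal sketch, assuming vectorShape is the Shape or Sprite that holds the vector artwork):

// Once the vector drawing is finished, cache it as a bitmap so it
// isn't re-rasterized on every frame.
vectorShape.cacheAsBitmap = true;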
One other thing to try is Direct render mode. You probably know that it’s used when doing Stage3D or StageVideo, but it can also be used with just regular Flash displaylist objects. It ends up being much the same as CPU, only the changes you make get sent to the video memory very quickly.
Now, even that won’t perform too well if you are changing large areas of the stage. If you don’t have a choice, and need to change large areas of the stage, you will find that iPad 3 Retina won’t perform well enough. That can be solved by excluding the iPad 3 in the XML, by including this in the requestedDisplayResolution section:
<requestedDisplayResolution excludeDevices="iPad3">high</requestedDisplayResolution>
Doing that will force the iPad to run at 1024x768.
I see.
Direct/CPU mode is out of the equation, as the game is very slow using it since I work with a lot of bitmaps.
Is there a way to force the GPU not to upload the full image, like in CPU mode? For example, only smaller power-of-2 areas of the image?
The GPU doesn't give you per-pixel control over the data by default. Change just 1 pixel and your texture must be re-uploaded again. In that case you will have a performance issue, and it doesn't matter how big the changed area is, 1x1 or 100x100. The main issue for AIR developers is that they don't understand the hardware limitations. It's like wanting to fill one cup of tea using two cups and avoid spilling anything; it's impossible.
You need to make the app simpler.
Actually, you're not right, since the same code works fine in CPU mode. The main problem is the lack of serious documentation on the render modes, since http://help.adobe.com/en_US/as3/mobile/WS901d38e593cd1bac-3d719af412b2b394529-8000.html is quite pathetic.
Also, once you know the issue, you know the solution: I can split the big bitmap into smaller power-of-2 ones, like 128x128, and reunite them at the end of the edit.
So basically I need to make the app more complex.
Can you explain a bit about your app? What exactly is it doing that requires the bitmaps to change? You hadn't previously mentioned trying Direct mode; have you tried it, or are you ruling it out just because CPU mode is slow?
One thing you could try with GPU is to take your large bitmap and make it out of lots of smaller bitmaps. Then when the modified bitmap has to be uploaded it’s a smaller area.
Yes, I will try that solution.
Thanks for the answers.
Hi Lucian,
The small block idea is probably your best bet.
Here is another idea, though I am not sure if it applies to your use-case or not.
Roughly:
Instead of altering your final ( large ) bitmapData by drawing directly into it, you could maybe draw only the areas that have changed or that need additional processing into a render target ( whose resolution could be lower than the final bitmapData ). And then combine the render target and the original bitmapData via a shader. Occasionally, you could also 'bake' the results into your source BitmapData.
Also, there might be a way to use the AGAL kil command, or Context3D.setColorMask(), to process only the pixels that need to be taken into account, either for the drawing into the render target or for the blending with the source BitmapData.
Sorry, I can't be more specific.
In any case, that would require setting up a stage3D render pipeline ( incl. shaders ).
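As a very rough sketch of the render-target part (assuming you already have a Context3D called context3D, with the shader/program setup omitted; all the names here are placeholders):

// Off-screen render target, smaller than the final bitmapData.
var renderTarget:Texture = context3D.createTexture(512, 512, Context3DTextureFormat.BGRA, true); // optimizeForRenderToTexture

// Redirect drawing into the render target and clear it to transparent.
context3D.setRenderToTexture(renderTarget);
context3D.clear(0, 0, 0, 0);

// Optionally write only the channel(s) you need, e.g. paint a mask into red only.
context3D.setColorMask(true, false, false, false);

// ... setProgram / setVertexBufferAt / drawTriangles for the brush geometry ...

// Restore normal output; a final pass would then sample both the source
// texture and renderTarget and blend them, and occasionally 'bake' the result.
context3D.setColorMask(true, true, true, true);
context3D.setRenderToBackBuffer();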
Well, to be honest what you are saying sounds very interesting. I will look into it.
Do you know some good tutorials on the subject?
Hi Lucian,
Unfortunately, I am unaware of any tutorials on the net I could recommend, though I am sure there are a few -- most likely not on the subject of setting up render targets specifically, but on setting up Stage3D + shaders.
One source I would recommend, though, if this is your first time setting up a Stage3D render pipeline, besides the code examples in the Stage3D Adobe reference, is this great book:
It's a quick read. Nicely presented and explained. I think in 3-5 days you could be up and running.
I would also browse this site. You might get some ideas:
Regarding using render targets to identify areas that have changed and need further processing:
- This is a fairly common practice when interactivity is key, and works well in some use-cases.
- For example, in Photoshop, there is a filter called "Liquify", with which you can stretch and compress an image in real-time. There are 2 ways to do this effect, live: one is to map the source -- undistorted-- image onto a grid of polygons, and start pushing and pulling the vertices. The other method -- same principle, but finer -- is to paint the compression field into a render target ( a texture ), temporarily, while you work in real time by 'painting' the effect ( note: the displacement vectors are based off of the gradient of the compression field ). Conceivably, you could also paint the vectors directly, as if painting a normal map. Once the user is happy with the look, then the user clicks 'apply', and the render target is used one last time to bake the displacements into the source image.
- Other example: I once worked on a game where the player could use a laser beam to scar all the buildings of a 3D city. The way this was done was that each building had its own render target ( usually a 256x256 ), into which the user could 'draw' ( even though the player never saw this render target, which was just a grayscale mask ). The render target was then fed into the building's exterior shader, which would use it to identify which areas to show the destruction effect into.
- Overall, the idea is to use a render target ( usually smaller than the source texture it will end up modifying ) to paint either a mask, or vectors ( like a normal map ), which is then used along with the source texture in the final shader to create the look you want.
Although I haven't had to use render targets for this particular situation lately, I do use them frequently otherwise, to draw my assets into and apply shaders, and then draw the render target into the backbuffer at the very end.
The reasons for this are explained here ( along with other tips ):
Thanks a lot.
But right now I came up with a solution.
You said your image is 1400x1400, so you need to divide it into 128x128 blocks and lay them out in an 11x11 grid. Then, when you change a small block, you don't need to re-upload the whole 1400x1400 image; you just update one small 128x128 block. This may improve upload speed by 5-8x.
Just try it and get back to us with the results.
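A rough sketch of what I mean (sourceData and container are placeholder names for your 1400x1400 BitmapData and the Sprite that will hold the grid):

var tileSize:int = 128;
var cols:int = int(Math.ceil(1400 / tileSize)); // 11
var rows:int = int(Math.ceil(1400 / tileSize)); // 11
var tiles:Vector.<Bitmap> = new Vector.<Bitmap>();

for (var ty:int = 0; ty < rows; ty++) {
    for (var tx:int = 0; tx < cols; tx++) {
        // Copy one 128x128 region of the original image into its own BitmapData.
        var tileData:BitmapData = new BitmapData(tileSize, tileSize, true, 0x00000000);
        tileData.copyPixels(sourceData,
            new Rectangle(tx * tileSize, ty * tileSize, tileSize, tileSize),
            new Point(0, 0));
        var tile:Bitmap = new Bitmap(tileData);
        tile.x = tx * tileSize;
        tile.y = ty * tileSize;
        container.addChild(tile);
        tiles.push(tile);
    }
}

// Erasing now only dirties one 128x128 texture instead of the whole 1400x1400 image.
function eraseAt(px:Number, py:Number):void {
    var tileX:int = int(px / tileSize);
    var tileY:int = int(py / tileSize);
    var tileData:BitmapData = tiles[tileY * cols + tileX].bitmapData;
    tileData.setPixel32(int(px - tileX * tileSize), int(py - tileY * tileSize), 0x00000000);
}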
Lol, we came up with the same idea at the same time :) Bingo!
