
Discussion Board

  1. #1
    Registered User
    Join Date
    Apr 2006
    Location
    Oulu, Finland
    Posts
    75

    Render OpenGL to a pbuffer and then to a bitmap

    I have an application that uses the OpenGL ES API to produce some renders that I need as bitmaps.

    I am using a HW-accelerated graphics device (N93, N95), so rendering directly to a bitmap is not an option.

    I have been rendering to a pbuffer successfully and then converting this pbuffer into a CWsBitmap.

    To do that I was generating the CWsBitmap like this:

    iPixmap = new (ELeave) CWsBitmap( iEikonEnv->WsSession() );
    User::LeaveIfError( iPixmap->Create( Window().Size(), DMode ) );
    Then I copy the pbuffer (previously and successfully created) with:

    // Copy contents of Pbuffer surface to a native Pixmap
    eglCopyBuffers( EglDisplay, EglSurface, iPixmap );
    (I chose a CWsBitmap because that is what works in the Nokia example SimpleCubepbuffer.)

    My question is:
    The EGL API says that I have to use a "native bitmap" as the third argument for eglCopyBuffers. Can I use a CFbsBitmap, or is CWsBitmap my only option? Can I access the pixels of the pbuffer in any way other than through eglCopyBuffers?

    The ideal solution would be to access the pixels of the rendered image directly through a pointer instead of copying them to a bitmap and then calling DataAddress().

    If that is not possible, rendering to a CFbsBitmap would be easier for me than a CWsBitmap. Is it possible, and would it be much slower?

    Thanks in advance for the answers.

  2. #2
    Nokia Developer Moderator
    Join Date
    Feb 2006
    Location
    Oslo, Norway
    Posts
    28,689

    Re: Render OpenGL to a pbuffer and then to a bitmap

    You might want to check the posts of Flame (http://discussion.forum.nokia.com/fo...er.php?u=49373); he is a big OpenGL guru here. You can narrow the search to pbuffer: http://discussion.forum.nokia.com/fo...archuser=Flame

  3. #3
    Nokia Developer Expert
    Join Date
    Mar 2003
    Posts
    68

    Re: Render OpenGL to a pbuffer and then to a bitmap

    Hi.

    >My question is:
    >The openGL API says that I have to use a "native bitmap" as the third
    >argument for eglCopyBuffers. Can I use a CFbsBitmap or CWsBitmap is my only
    >option? Can I access the pixels on the pbuffer in any other way than the
    >interface eglCopyBuffers?

    AFAIK, CFbsBitmap should work as well (should be easy to test).
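
    A rough (untested) sketch of that, assuming EglDisplay/EglSurface are the handles you already have and DMode is your display mode:

    // Untested sketch: same flow as the CWsBitmap version, but with a plain CFbsBitmap.
    CFbsBitmap* bitmap = new (ELeave) CFbsBitmap();
    User::LeaveIfError( bitmap->Create( Window().Size(), DMode ) );

    // On Symbian the "native pixmap" argument is (AFAIK) just a CFbsBitmap pointer;
    // CWsBitmap derives from CFbsBitmap, which is why both should be accepted.
    if ( eglCopyBuffers( EglDisplay, EglSurface, bitmap ) == EGL_FALSE )
        {
        // check eglGetError() here
        }

    // The pixels are then reachable through bitmap->DataAddress(), as you planned.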

    A pbuffer is a hardware-accelerated render target, which means that it can be stored in graphics hardware memory (e.g., behind an external bus) and it may use a GFX-HW-dependent format/memory layout.

    For this reason there is no way to access the buffer directly: the layout might change from one implementation to another, and it might even be that the host CPU cannot physically access the buffer.

    There are two ways of getting that pixel data: eglCopyBuffers(), which copies the pixels to an OS bitmap (which may be HW accelerated, though that is not the case here), and glReadPixels(), which writes the pixel data to a pointer provided by the application. Both of these force the HW to work in single-buffered mode if invoked on a per-frame basis (as opposed to rendering to a window surface and not using glReadPixels()), so over half of the GFX HW performance can be lost (as is the case with the N93 and N95, for example).

    > The best situation will be accessing straight to the pixels of the rendered
    > image with a pointer instead of copying it to a bitmap and then calling
    > DataAddress()

    If you really need access to the pixel data, this is the fastest way
    currently.
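
    For reference, the glReadPixels() route looks roughly like this (sketch only; width and height are assumed to be your pbuffer dimensions, and GL_RGBA/GL_UNSIGNED_BYTE is the combination OpenGL ES always accepts for reads):

    // Sketch: read the rendered pixels straight into an application-owned buffer.
    GLubyte* pixels = new (ELeave) GLubyte[ width * height * 4 ];

    glPixelStorei( GL_PACK_ALIGNMENT, 1 );             // tightly packed rows
    glReadPixels( 0, 0, width, height,
                  GL_RGBA, GL_UNSIGNED_BYTE, pixels ); // note: bottom-up row order

    // ... process pixels ...
    delete[] pixels;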

    Jani Vaarala
    Graphics Architect
    Nokia
    Last edited by Flame; 2007-11-14 at 17:15.

  4. #4
    Registered User
    Join Date
    Apr 2006
    Location
    Oulu, Finland
    Posts
    75

    Re: Render OpenGL to a pbuffer and then to a bitmap

    Thank you for the explanation.

    I am using OpenGL ES to process some images, feeding it the images coming from the camera as textures, so I need the resulting image rather than a render to the screen.

    This helps me a lot; things are much clearer in my mind now.

  5. #5
    Super Contributor
    Join Date
    Mar 2004
    Location
    Czech Republic
    Posts
    2,037

    Re: Render OpenGL to a pbuffer and then to a bitmap

    Hi Flame,

    It seems that you have great knowledge of OpenGL. We have had a discussion about the VFP and how to use it; the conclusion was that it is probably accessible only via the OpenGL API (or assembler).

    The thread is here:
    http://discussion.forum.nokia.com/fo...ad.php?t=97263

    1/ I was wondering whether using OpenGL might be useful even for 2D games, in terms of faster math operations, as we use the TReal type in many places where we do vector or polygon operations.

    2/ My second question is whether OpenGL could also make the redrawing faster - at the moment we are using DSA and access the LCD framebuffer directly.

    I'm asking because I have no experience with OpenGL.

    Thanks for the responses.
    Kind regards,
    STeN

  6. #6
    Registered User
    Join Date
    Dec 2006
    Posts
    2,280

    Re: Render OpenGL to a pbuffer and then to a bitmap

    STeN,

    Have you checked the (rather excellent) last post in the thread you linked? Someone has kindly shown how to use the VFP unit.

    Sorcery

  7. #7
    Nokia Developer Expert
    Join Date
    Mar 2003
    Posts
    68

    Re: Render OpenGL to a pbuffer and then to a bitmap

    VFP does not have anything to do with OpenGL ES as such; it is a feature of the ARM processor. The 3D core is separate and accelerates the whole 3D pipeline in HW (T&L and rasterizer).

    It is capable of something like 2 Mtriangles/s and around 250 Mpixels/s fill in a theoretical benchmark application (which nevertheless really runs the driver, swaps the results to the display, etc. -> not just theoretical HW numbers).

    So, if you use a fullscreen EGL window surface with OpenGL ES and can port your 2D render engine to render _efficiently_ on top of the 3D core, then you can do a lot of 2D stuff.

    Efficiently here means at least the following:

    - collecting your 2D sprite images into a single (or few) big texture(s)

    - minimizing 3D draw operations by submitting all 2D ops in a single go
    (you can do this if you don't need to change OpenGL ES state.. e.g.,
    enable/disable blending or change active texture.. this is the reason
    for collecting the sprites into a single big texture "atlas".. also
    changing e.g. matrix per sprite would kill the performance totally)

    - using window surfaces

    - not reading back ANY of the render results from 3D pipe (they just go
    to screen and that's it)

    - not doing texture uploads _per frame_ (e.g., for trying to mix 2D
    rendered stuff with 3D)

    If you can do all of that, you could in theory submit the rendering of thousands of sprites in a single draw call. This would make the GFX HW process all of that work and free the CPU almost totally for other tasks.

    If you calculate from the theoretical fill rate and assume, e.g., a 64x64 sprite size, you could in theory blit something like 2000 sprites per frame at 30 fps with the graphics accelerator (now assuming that blending is not enabled). With blending enabled it would be something around 330 sprites/frame.
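
    (The rough arithmetic behind the non-blended figure, using the 250 Mpixels/s fill mentioned above: 250 Mpix/s / 30 fps is about 8.3 Mpix per frame, and 8.3 Mpix / (64 x 64 pixels per sprite) is about 2000 sprites. With blending the effective fill per pixel costs more, hence the smaller figure.)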

    So... it is very capable if you access it properly. If designed from scratch, a 2D-on-top-of-3D engine could look something like:

    1. initialize all sprites to textures
    2. setup modelview+projection+viewport so that 1:1 pixel mapping can
    be achieved and that vertex coordinates map to screen coordinates
    3. initialize "active sprites" list (separate list)
    4. initialize texture coordinate array (but, put no data here yet)
    5. initialize vertex coordinate array (no data here yet)
    6. add initial sprites to active sprites list AND the texture coord and
    vertex coord arrays

    The vertex coord array keeps the coordinates of the corner points of the 2 triangles used to render each sprite quad. The texture coord array keeps the texture coordinates that select that sprite's region of the texture atlas.

    in the render loop:

    7. for each sprite S in the sprite list:
    8. update screen position (& size?) to the vertex coord array
    9. glDrawElements() or glDrawArrays() (render all sprites at once)

    10. if sprites added or deleted
    11. make changes to the sprite list
    12. generate new texture coord and vertex coord arrays
    13. loop -> 7

    (note: you can just modify the vertex arrays directly without reinitializing them.. when you have changes like sprites being removed or added you may need to regenerate.. alternatively you can always just generate the arrays fully from the sprite list..)
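
    To make steps 7-9 concrete, a stripped-down OpenGL ES 1.x sketch of the per-frame part could look like this (TSprite and its fields are made-up names for illustration, not from any real engine; the atlas texture and the 1:1 projection are assumed to have been set up in steps 1-2):

    // Sketch only: update the arrays and render all sprites with one draw call.
    struct TSprite
        {
        GLfloat iX, iY, iW, iH;        // screen position and size (pixels)
        GLfloat iU0, iV0, iU1, iV1;    // sprite's region inside the texture atlas
        };

    static void RenderSprites( const TSprite* aSprites, GLint aCount,
                               GLfloat* aVertices, GLfloat* aTexCoords )
        {
        // Step 8: refresh the arrays (2 triangles = 6 vertices per sprite).
        for ( GLint i = 0; i < aCount; ++i )
            {
            const TSprite& s = aSprites[ i ];
            GLfloat* v = aVertices  + i * 12;   // 6 vertices * 2 components
            GLfloat* t = aTexCoords + i * 12;

            const GLfloat x0 = s.iX,         y0 = s.iY;
            const GLfloat x1 = s.iX + s.iW,  y1 = s.iY + s.iH;

            v[0] = x0; v[1] = y0;   v[2] = x1; v[3] = y0;   v[4]  = x1; v[5]  = y1;
            v[6] = x0; v[7] = y0;   v[8] = x1; v[9] = y1;   v[10] = x0; v[11] = y1;

            t[0] = s.iU0; t[1] = s.iV0;  t[2] = s.iU1; t[3] = s.iV0;  t[4]  = s.iU1; t[5]  = s.iV1;
            t[6] = s.iU0; t[7] = s.iV0;  t[8] = s.iU1; t[9] = s.iV1;  t[10] = s.iU0; t[11] = s.iV1;
            }

        // Step 9: one draw call for everything (GL_FIXED could be used instead
        // of GL_FLOAT on fixed-point-only profiles).
        glEnableClientState( GL_VERTEX_ARRAY );
        glEnableClientState( GL_TEXTURE_COORD_ARRAY );
        glVertexPointer( 2, GL_FLOAT, 0, aVertices );
        glTexCoordPointer( 2, GL_FLOAT, 0, aTexCoords );
        glDrawArrays( GL_TRIANGLES, 0, aCount * 6 );
        }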

    Note that the performance figures here make sense for a HW-based 3D accelerator. With SW-based 3D you can reach better performance with your own custom blitters or by just using BITGDI/bitmaps (they run pretty fast nowadays).

    A HW accelerator is currently present in: N93, N93i, N95 variants (normal, 8GB, etc.), N82, E90.

    If you are not familiar with OpenGL / OpenGL ES, check the resources on the Forum Nokia and Khronos (www.khronos.org) sites. Generic OpenGL books/tutorials may also help. There is also a new book on the subject, "Mobile 3D Graphics with OpenGL ES and M3G", that might be useful:

    http://www.elsevier.com/wps/find/boo...62/description

    Sorry for the shameless advertisement (I'm one of the authors of that book), but I think it has some good insights into mobile 3D, and it also provides a reference to the APIs (OpenGL ES and the Mobile 3D Java API called M3G) with some example code.

    Jani Vaarala
    Graphics Architect
    Nokia
