OpenGL - GLSL Render to Texture not working
I'm trying to have a compute pass render to a texture that is then used in a draw pass later on. My initial implementation was based on shader storage buffer objects and was working nicely. Now I want to apply a computation method that takes advantage of the blend hardware of the GPU, so I started porting the SSBO implementation to a render-to-texture one. Unfortunately, the code has stopped working: when I read the texture back I'm getting the wrong values.
Here is the texture and framebuffer setup code:
```cpp
glGenFramebuffers(1, &m_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);

// create render textures
glGenTextures(NUM_TEX_OUTPUTS, m_renderTexs);
m_texSize = square_approximation(m_numVertices);
cout << "textures size: " << glm::to_string(m_texSize) << endl;

GLenum drawBuffers[NUM_TEX_OUTPUTS];
for (int i = 0; i < NUM_TEX_OUTPUTS; ++i) {
    glBindTexture(GL_TEXTURE_2D, m_renderTexs[i]);
    // 1st 0: level, 2nd 0: no border, 3rd 0: no initial data
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_texSize.x, m_texSize.y, 0,
                 GL_RGBA, GL_FLOAT, 0);
    // XXX: do I need this? Poor filtering, but needed!
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glBindTexture(GL_TEXTURE_2D, 0);
    // last 0: mipmap level
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                           GL_TEXTURE_2D, m_renderTexs[i], 0);
    drawBuffers[i] = GL_COLOR_ATTACHMENT0 + i;
}
glDrawBuffers(NUM_TEX_OUTPUTS, drawBuffers);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    cout << "error when setting up the framebuffer" << endl;
    // throw exception?
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```
And the code that starts the compute pass:
```cpp
m_shaderProgram.use();

// set up OpenGL state
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
glDisable(GL_CULL_FACE);
glDisable(GL_DEPTH_TEST);
glViewport(0, 0, m_texSize.x, m_texSize.y); // viewport equal to the textures' size

// make a single patch have the vertex, the bases and the neighbours
glPatchParameteri(GL_PATCH_VERTICES, m_maxNeighbours + 5);

// wait for writes to the shader storage to finish
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);

glUniform1i(m_shaderProgram.getUniformLocation("curvTex"), m_renderTexs[2]);
glUniform2i(m_shaderProgram.getUniformLocation("size"), m_texSize.x, m_texSize.y);
glUniform2f(m_shaderProgram.getUniformLocation("vertexStep"),
            (umax - umin) / divisoes, (vmax - vmin) / divisoes);

// bind buffers
glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_ibo);
glBindBufferBase(GL_UNIFORM_BUFFER, m_mvp_location, m_mvp_ubo);

// make the textures active
for (int i = 0; i < NUM_TEX_OUTPUTS; ++i) {
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_2D, m_renderTexs[i]);
}

// no need to pass an index array because the IBO is bound
glDrawElements(GL_PATCHES, m_numElements, GL_UNSIGNED_INT, 0);
```
I read the textures back using the following:
```cpp
bool readTex(GLuint tex, void *dest)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, dest);
    glBindTexture(GL_TEXTURE_2D, 0);
    // TODO: check glGetError for read errors
    return true;
}

for (int i = 0; i < NUM_TEX_OUTPUTS; ++i) {
    if (m_tensors[i] == NULL) {
        m_tensors[i] = new glm::vec4[m_texSize.x * m_texSize.y];
    }
    memset(m_tensors[i], 0, m_texSize.x * m_texSize.y * sizeof(glm::vec4));
    readTex(m_renderTexs[i], m_tensors[i]);
}
```
Finally, the fragment shader code is:
```glsl
#version 430
#extension GL_ARB_shader_storage_buffer_object : require

layout(pixel_center_integer) in vec4 gl_FragCoord;

layout(std140, binding = 6) buffer EvalBuffer {
    vec4 evalDebug[];
};

uniform ivec2 size;

in TEData {
    vec4 _a;
    vec4 _b;
    vec4 _c;
    vec4 _d;
    vec4 _e;
};

layout(location = 0) out vec4 a;
layout(location = 1) out vec4 b;
layout(location = 2) out vec4 c;
layout(location = 3) out vec4 d;
layout(location = 4) out vec4 e;

void main()
{
    a = _a;
    b = _b;
    c = _c;
    d = _d;
    e = _e;
    evalDebug[gl_PrimitiveID] = gl_FragCoord;
}
```
The fragment coordinates are correct (each fragment points at its x, y coordinate in the texture), but the input values (_a to _e) do not come out correctly in the textures when I read them back. I tried accessing the texture in the shader to see whether it was a read-back error, but the debug SSBO returned all zeroes.
Am I missing a setup step? I've tested this on both Linux and Windows (a GeForce Titan and a GeForce 540M) and I'm using OpenGL 4.3.
As derhass pointed out in the comments above, the problem was the texture format. I had assumed that passing GL_FLOAT as the data type would make the texture use a 32-bit float for each of the RGBA channels, but that is not so. As derhass said, the data type parameter does not change the texture format; I had to change the internalFormat parameter to what I actually wanted (GL_RGBA32F) for it to work as expected. So, after changing the glTexImage2D call to:
```cpp
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, m_texSize.x, m_texSize.y, 0,
             GL_RGBA, GL_FLOAT, 0);
```
I was able to correctly render the results to the texture and read them back. :)