Wednesday, April 25, 2012
Video: http://www.youtube.com/watch?v=hkw51ObXnM8&feature=youtu.be
Paper: https://docs.google.com/file/d/0B6UWwRq95mireGltUXdyTHBFRUk/edit
Repos: https://bitbucket.org/sebby_man/gameengine2011 and https://github.com/sebby64/Mass-Occlusion-Culling
Monday, April 23, 2012
Mass Instancing + OIT
A couple of weeks ago I set out to build a project where a scene of millions of objects could be rendered quickly using OpenGL, with OpenCL performing the frustum culling. This is a screenshot of a fraction of the world: only 169,112 out of 20,000,000 instances are drawn.
Tonight I decided to expand this project to include Order Independent Transparency. The program is the same as before, but every object now has an alpha of 0.5. Click the image to see it in full detail.
As I expected, the frame rate went WAY down. Before transparency I was getting roughly 118 fps; with transparency it's at 33 fps. I may never reach 60 fps, but I believe it's possible and I'm going to try.
(Edit - added a better picture)
Sunday, April 22, 2012
Long time, no blog
I've been a combination of very busy and very lazy lately, but now I'm back in work mode with some new results:
Finally, something a little more interesting than the cube. This alien model is 14,192 triangles and the frame rate is still well above 60 fps (it hovers around 650, though I shouldn't fool myself yet; there are lots of improvements I can still make).
I have fixed some of the artifacts in my previous posts. The depth issue where the transparent objects always appeared on top of the rest of the scene is gone now that I render the opaque scene before I render the transparent objects. Along with this, I use a very handy GLSL layout qualifier, layout(early_fragment_tests) in;, to discard transparent fragments that fail the depth test before they ever reach the fragment shader. This ensures that transparent fragments behind opaque fragments are never processed. Finally, I disallowed the transparent pass from writing to the depth buffer, because otherwise transparent fragments would fail the depth test against other transparent fragments.
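In terms of GL state, the pass ordering looks roughly like this (a simplified sketch; renderOpaqueScene() and renderTransparentObjects() are just placeholders for my actual draw calls):
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE); // opaque pass writes depth as usual
renderOpaqueScene();
glDepthMask(GL_FALSE); // transparent pass reads depth but never writes it
renderTransparentObjects(); // its fragment shader declares: layout(early_fragment_tests) in;
glDepthMask(GL_TRUE); // restore depth writes for the next frame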
There are still a couple of things I would like to do in the next two days. First, I still have not coded the linearized approach to OIT (I'm doing the linked-list version). Second, I want to add transparency to the mass-instancing program I wrote a couple of weeks ago. That is where OIT would really shine.
Monday, April 9, 2012
Fixed Blending
After a bit of a hiatus, I'm back to working on transparency.
The first thing I did was fix the blending issues I was having. You can see the difference below:
No blending:
Blending:
Before resolving the transparency with the full-screen quad, I enable blending. My blending function and equation are:
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This achieves the usual blend: final color = alpha * source + (1 - alpha) * destination.
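Putting it together, the resolve step looks roughly like this (a sketch; drawFullScreenQuad() is a placeholder for however the quad actually gets drawn):
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFullScreenQuad(); // runs the transparency resolver shader on every pixel
glDisable(GL_BLEND);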
Sunday, April 1, 2012
Midpoint Presentation
Here are the slides for my midpoint presentation:
https://docs.google.com/presentation/d/18dxFixDaYxpC9z8emYd5iGgoB1dbVPkVi8sDhs6AOcs/edit
Monday, March 26, 2012
Details of my first result post
So I think it's time to explain the process that went into the image below:
First Pass: "render" the transparent triangles. The word render is in quotation marks because nothing is actually drawn to the screen in this step; the fragments are only stored in a global array to be processed later. I use the keyword discard to prevent any writes to the framebuffer.
Initially I made the mistake of backing the global data with a 1D texture. Bad idea. I found out that my graphics card only supports 1D textures up to 16,384 texels, which is not enough when potentially millions of fragments need to be stored - and yes, even with the simple purple cube. For example, when the front face of the cube covers the entire 800x800 screen, you need to store 640,000 fragments. Then add the back face. Now it's 1,280,000 fragments. So I switched to a buffer texture instead. These are a nice hybrid between buffer objects and textures: I can still use imageLoad and imageStore, but I have 134,217,728 texels of space.
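For reference, setting up a buffer texture looks something like this (a sketch; the binding unit, usage flags, and the maxFragments name are illustrative):
// Back the global data with a big buffer, then expose it as a buffer texture.
GLuint globalDataBuffer, globalDataTex;
glGenBuffers(1, &globalDataBuffer);
glBindBuffer(GL_TEXTURE_BUFFER, globalDataBuffer);
glBufferData(GL_TEXTURE_BUFFER, maxFragments * 4 * sizeof(GLuint), NULL, GL_DYNAMIC_DRAW);
glGenTextures(1, &globalDataTex);
glBindTexture(GL_TEXTURE_BUFFER, globalDataTex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32UI, globalDataBuffer); // GL_RGBA32UI matches the uvec4 layout below
// Bind it as an image so the fragment shader can use imageLoad/imageStore.
glBindImageTexture(1, globalDataTex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32UI);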
As an implementation detail, my global data array is composed of uvec4's and packs data accordingly:
uvec4 globalData;
globalData.x = prevHeadIndex;
globalData.y = floatBitsToUint(depth);
globalData.z = packUnorm2x16(finalColor.xy);
globalData.w = packUnorm2x16(finalColor.zw);
If anything, prevHeadIndex is the field most deserving of its full 32 bits because of how large the global data array is (remember, millions of entries; a short would not cut it).
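Putting the first pass together, the fragment shader does something like this (a sketch; the binding points, the nodeCount name, and the incoming finalColor varying are illustrative, and heads is assumed to be cleared to 0xFFFFFFFF each frame so empty lists can be detected):
#version 420 core
layout(binding = 0) uniform atomic_uint nodeCount; // global fragment counter
layout(binding = 0, r32ui) coherent uniform uimage2D heads; // per-pixel list heads
layout(binding = 1, rgba32ui) coherent uniform uimageBuffer globalData; // the big uvec4 array
in vec4 finalColor; // shaded color computed earlier in the pipeline
void main()
{
    // Reserve a fresh slot in the global array.
    uint index = atomicCounterIncrement(nodeCount);
    // Swap ourselves in as the new head; the old head becomes our next pointer.
    uint prevHeadIndex = imageAtomicExchange(heads, ivec2(gl_FragCoord.xy), index);
    // Pack next pointer, depth, and color into one uvec4 texel.
    uvec4 node = uvec4(prevHeadIndex,
                       floatBitsToUint(gl_FragCoord.z),
                       packUnorm2x16(finalColor.xy),
                       packUnorm2x16(finalColor.zw));
    imageStore(globalData, int(index), node);
    discard; // nothing is written to the framebuffer in this pass
}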
Second Pass: resolve the transparency data. At this point there is a filled global array and a 2D screen-sized array of linked-list heads. The goal is for each pixel on the screen to get its linked-list head and then backtrack through the global data to form a list of all the fragments that were rasterized to that pixel, i.e. all of its transparent fragments. Once you have the list of fragments that occupy the pixel cell, you sort them (I used a bubble sort, but I will try others) from farthest depth to closest depth. Then you loop over the sorted array and blend each fragment with the ones before it to produce a final color.
So if the screen is 800x800, there are 640,000 pixels that need to resolve their transparency (well, in the naive approach at least). This is done by rendering two triangles that cover the entire screen. Because they cover the entire screen, one fragment is rasterized per pixel - exactly enough. This is just a means to an end of running the transparency resolver shader program on every pixel.
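For illustration, the two screen-covering triangles are just six hard-coded vertices (a sketch, assuming the vec4 position attribute used in the shader below):
// Two triangles covering the whole screen, defined directly in clip space.
const GLfloat fullScreenQuad[] = {
    -1.0f, -1.0f, 0.0f, 1.0f,
     1.0f, -1.0f, 0.0f, 1.0f,
     1.0f,  1.0f, 0.0f, 1.0f,
    -1.0f, -1.0f, 0.0f, 1.0f,
     1.0f,  1.0f, 0.0f, 1.0f,
    -1.0f,  1.0f, 0.0f, 1.0f,
};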
Yes, the approach is a bit hacky, but it's relatively simple when these triangles are defined straight in clip space coordinates, making the Vertex Shader purely passthrough:
#version 420 core
layout(location = 0) in vec4 position;
void main()
{
    gl_Position = position;
}
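And a rough sketch of the corresponding resolve fragment shader (the bindings, the MAX_FRAGMENTS cap, and the unpacking are illustrative; heads is again assumed to be cleared to 0xFFFFFFFF so the end of a list can be detected):
#version 420 core
layout(binding = 0, r32ui) coherent uniform uimage2D heads;
layout(binding = 1, rgba32ui) coherent uniform uimageBuffer globalData;
out vec4 fragColor;
#define MAX_FRAGMENTS 32 // illustrative cap on list length
void main()
{
    uvec4 frags[MAX_FRAGMENTS];
    int count = 0;
    // Walk the linked list for this pixel, starting at its head index.
    uint index = imageLoad(heads, ivec2(gl_FragCoord.xy)).x;
    while (index != 0xFFFFFFFFu && count < MAX_FRAGMENTS)
    {
        frags[count] = imageLoad(globalData, int(index));
        index = frags[count].x; // .x holds the next pointer
        count++;
    }
    // Bubble sort by depth (.y), farthest first.
    for (int i = 0; i < count - 1; i++)
        for (int j = 0; j < count - 1 - i; j++)
            if (uintBitsToFloat(frags[j].y) < uintBitsToFloat(frags[j + 1].y))
            {
                uvec4 tmp = frags[j];
                frags[j] = frags[j + 1];
                frags[j + 1] = tmp;
            }
    // Blend back to front.
    vec4 color = vec4(0.0);
    for (int i = 0; i < count; i++)
    {
        vec4 c = vec4(unpackUnorm2x16(frags[i].z), unpackUnorm2x16(frags[i].w));
        color.rgb = mix(color.rgb, c.rgb, c.a);
        color.a = c.a + color.a * (1.0 - c.a);
    }
    fragColor = color;
}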
Known bugs: the transparent shapes are not blending with the background and always appear in front of it. I figure these are pretty simple fixes.
First Results!
It's 7:39 in the morning... and no, I did not wake up early. But I finally have results for order independent transparency, even if it's just a simple purple cube.
I will sum up what I did once I am less tired.
Thursday, March 22, 2012
Background Info
After reading two good papers on this subject (the previously mentioned OpenGL Insights chapter and Order-Independent Transparency in DirectX 11), I am here to summarize what I learned.
The core idea behind order independent transparency is that we need to store all the transparent fragments somewhere. ALL the transparent fragments. This means turning off the depth test to allow every fragment to be processed. With all this data, the first approach is to keep a per-pixel linked list. The second approach is to pack the data sequentially and store offset pointers for each pixel.
I decided that I would implement the linked list approach first, as it seems like a nicer algorithm. Essentially each pixel cell needs to store a pointer to the head of its linked list. This indexes into a global memory array which stores all the fragments that have been processed. Besides storing color and depth, each member of the global memory array also stores an index to the next node in the linked list.
By the end of processing, each pixel on the screen has a corresponding linked list. I drew this diagram to explain how the algorithm works. It requires an atomic counter for node_count (to avoid a slew of memory-conflict issues), a 2D uint texture for heads, a 1D vec4 buffer texture for color data, and a 1D uint buffer texture for next. The missing value is depth, which may need its own similar texture. The image load, store, and atomic exchange functions of OpenGL 4 are suitable for writing, reading, and exchanging data in these textures.
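In GLSL terms, those resources might be declared something like this (a sketch; the bindings, formats, and names are just working guesses at this stage):
layout(binding = 0) uniform atomic_uint node_count; // total nodes allocated so far
layout(binding = 0, r32ui) coherent uniform uimage2D heads; // per-pixel head index
layout(binding = 1, rgba32f) coherent uniform imageBuffer color_data; // color of each node
layout(binding = 2, r32ui) coherent uniform uimageBuffer next; // next-node index of each node
layout(binding = 3, r32f) coherent uniform imageBuffer depth; // per-node depth, if stored separately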
When this procedure is over, each pixel loops over its linked list, sorts the fragments in back-to-front order, and blends them with the current pixel color (which may contain colors from the opaque objects in the scene).
As for sorting, I plan on using a bubble sort. This is a very easy algorithm to write for a fragment shader and is suitable for the smallish input size (maybe maximum of 30 overlapping transparent fragments?).
There are tons of optimization details that I haven't mentioned, and probably won't implement until I get the basic thing working.
I'm hoping that I'll have something more to show by tomorrow.
Monday, March 12, 2012
Single Pass Order Independent Transparency - the beginning
Transparency is an important visual technique in real time rendering - think water simulations or tinted car windows in games. Unfortunately, one of the main limitations of the Z-buffer is its inability to handle transparency. The Z-buffer stores just the closest fragment for each pixel cell. This approach works fine for opaque scenes, as there is no need to remember farther-away fragments if we cannot see through the closest fragment. But when transparent objects are present in the scene, the final color of a pixel depends on all of the transparent fragments that cover it. The Z-buffer does not give us the storage space for that.
There are several approaches to working around the transparency problem. One method is to sort the transparent surfaces from back to front and render them after the opaque pass. This can be both slow and error-prone (per-object sorting breaks down for intersecting or mutually overlapping geometry). Another method, called depth peeling, avoids the sorting problem but requires multiple rendering passes. If there are many transparent layers, this can become quite slow.
With advances in DirectX 11 and OpenGL 4, single-pass OIT is now possible. ATI has presented a solution involving atomic operations and append buffers where each pixel stores a linked list of its transparent fragments. The image below is a screenshot from a real-time demo using DirectX 11.
I hope to implement single-pass order independent transparency in OpenGL and C++, specifically using the atomic operations introduced in OpenGL 4.2. One of my key resources will be a chapter of Patrick's book OpenGL Insights called Efficient Layered Fragment Buffer Techniques, written by Pyarelal Knowles, Geoff Leach, and Fabio Zambetta. One nice thing is that they tested their implementation on a GTX 460, which is the same card I have.
Their explanation is concise and detailed. While their primary target is transparency, their approach is general enough that it may be applied to other rendering techniques involving multi-fragment storage per pixel. I feel like there is a LOT on the horizon for atomic buffer reading and writing.