diff --git a/README.md b/README.md
index bec6ca4..59cb492 100644
--- a/README.md
+++ b/README.md
@@ -3,13 +3,44 @@ Vulkan Flocking: compute and shading in one pipeline!
**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 6**
-* (TODO) YOUR NAME HERE
- Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
+* Michael Willett
+* Tested on: Windows 10, i5-4690K @ 3.50GHz 16GB, GTX 750 Ti 2GB (Personal Computer)
- ### (TODO: Your README)
- Include screenshots, analysis, etc. (Remember, this is public, so don't put
- anything here that you don't want to share with the world.)
+
+
+
+### Discussion
+
+*Why do you think Vulkan expects explicit descriptors for things like generating pipelines and commands? HINT: this may relate to something in the comments about some components using pre-allocated GPU memory.*
+
+Vulkan actively tries to minimize the amount of memory required for command buffers. Since it hands out command buffers from a preallocated memory pool, it needs to know up front the maximum
+size a single command buffer could be; otherwise it risks overwriting data past the end of the memory referenced by the pointer.
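+
+As a rough illustration (a minimal sketch, not code from this project; `device` and `computeQueueFamilyIndex` are assumed to already exist), the allocate-info struct tells the driver exactly how many command buffers of which level it must hand out of the pre-created pool:
+
+```cpp
+// Create a pool tied to one queue family, then allocate command buffers out of it.
+VkCommandPoolCreateInfo poolInfo = {};
+poolInfo.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
+poolInfo.queueFamilyIndex = computeQueueFamilyIndex;
+VkCommandPool pool;
+vkCreateCommandPool(device, &poolInfo, nullptr, &pool);
+
+VkCommandBufferAllocateInfo allocInfo = {};
+allocInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
+allocInfo.commandPool = pool;
+allocInfo.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
+allocInfo.commandBufferCount = 1; // declared up front, so the pool knows exactly what to reserve
+VkCommandBuffer cmdBuffer;
+vkAllocateCommandBuffers(device, &allocInfo, &cmdBuffer);
+```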
+
+
+*Describe a situation besides flip-flop buffers in which you may need multiple descriptor sets to fit one descriptor layout.*
+
+In the context of a compute shader, the easiest example is a shader that can be generalized across multiple object types. Suppose the compute shader performs a single algorithmic update on each object, such as
+sorting each object into a bin. The developer could create one descriptor set that sorts one group of objects into 10 bins, and a second descriptor set that sorts another collection of objects into 100 bins, both fitting the same layout.
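+
+A minimal sketch of that idea (hypothetical names; `device`, `descriptorPool`, `objectSetLayout`, and the two object buffers are assumptions, not code from this project): two descriptor sets are allocated from the same layout, and each is pointed at a different storage buffer.
+
+```cpp
+// One layout (a single storage buffer at binding 0), two sets allocated from it.
+VkDescriptorSetLayout layouts[2] = { objectSetLayout, objectSetLayout };
+VkDescriptorSetAllocateInfo allocInfo = {};
+allocInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
+allocInfo.descriptorPool = descriptorPool;
+allocInfo.descriptorSetCount = 2;
+allocInfo.pSetLayouts = layouts;
+VkDescriptorSet sets[2];
+vkAllocateDescriptorSets(device, &allocInfo, sets);
+
+// Set 0 sees the objects sorted into 10 bins, set 1 the objects sorted into 100 bins.
+VkWriteDescriptorSet writes[2] = {};
+writes[0].sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
+writes[0].dstSet = sets[0];
+writes[0].dstBinding = 0;
+writes[0].descriptorCount = 1;
+writes[0].descriptorType = VK_DESCRIPTOR_TYPE_STORAGE_BUFFER;
+writes[0].pBufferInfo = &smallBinObjects.descriptor; // hypothetical vk::Buffer
+writes[1] = writes[0];
+writes[1].dstSet = sets[1];
+writes[1].pBufferInfo = &largeBinObjects.descriptor; // hypothetical vk::Buffer
+vkUpdateDescriptorSets(device, 2, writes, 0, nullptr);
+```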
+
+
+*What are some problems to keep in mind when using multiple Vulkan queues?*
+
+* different queues may be backed by different hardware
+* the same buffer may be used across multiple queues
+
+Read/write conflicts on shared memory are clearly among the most important. Within a single queue, memory barriers and fences can ensure that data has been fully processed and updated by a shader before another
+piece of work runs. Between multiple queues, however, that ordering is not implicit; the developer must synchronize explicitly (for example with semaphores, as in the sketch below) whenever the two queues share data.
+
+Additionally, if two queues are backed by different hardware (for example, identical GPUs in the same machine), they could reference the same addresses even though the required data may only be resident
+on one of the GPUs. Keeping track of where the data actually lives is yet another chore for the developer.
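+
+A sketch of the cross-queue case (handles such as `computeQueue`, `graphicsQueue`, and the command buffers are assumed to exist; this is not the exact code used here): the compute submission signals a semaphore and the graphics submission waits on it before touching the shared buffer.
+
+```cpp
+VkSemaphore computeDone; // created elsewhere with vkCreateSemaphore
+
+// Compute queue: run the simulation step, signal the semaphore when finished.
+VkSubmitInfo computeSubmit = {};
+computeSubmit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
+computeSubmit.commandBufferCount = 1;
+computeSubmit.pCommandBuffers = &computeCmdBuffer;
+computeSubmit.signalSemaphoreCount = 1;
+computeSubmit.pSignalSemaphores = &computeDone;
+vkQueueSubmit(computeQueue, 1, &computeSubmit, VK_NULL_HANDLE);
+
+// Graphics queue: do not consume the shared buffer until the semaphore is signaled.
+VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_VERTEX_INPUT_BIT;
+VkSubmitInfo graphicsSubmit = {};
+graphicsSubmit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
+graphicsSubmit.commandBufferCount = 1;
+graphicsSubmit.pCommandBuffers = &graphicsCmdBuffer;
+graphicsSubmit.waitSemaphoreCount = 1;
+graphicsSubmit.pWaitSemaphores = &computeDone;
+graphicsSubmit.pWaitDstStageMask = &waitStage;
+vkQueueSubmit(graphicsQueue, 1, &graphicsSubmit, VK_NULL_HANDLE);
+```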
+
+
+*What is one advantage of using compute commands that can share data with a rendering pipeline?*
+
+The biggest advantage is that no additional memory copies or management are needed: the rendering pipeline can read the object state produced by the compute pass directly, without staging it through
+extra buffers to avoid read/write conflicts while the compute shader is running.
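+
+The way this shows up in a setup like this one is a single particle buffer created with both usage bits, so the compute pass writes it as an SSBO and the graphics pass binds the very same buffer as its vertex buffer (a sketch of just the flags; memory allocation and binding omitted):
+
+```cpp
+VkBufferCreateInfo bufferInfo = {};
+bufferInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
+bufferInfo.size = PARTICLE_COUNT * sizeof(Particle);
+bufferInfo.usage = VK_BUFFER_USAGE_STORAGE_BUFFER_BIT   // written by the compute shader
+                 | VK_BUFFER_USAGE_VERTEX_BUFFER_BIT;   // read directly as vertex data when drawing
+bufferInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
+VkBuffer particleBuffer;
+vkCreateBuffer(device, &bufferInfo, nullptr, &particleBuffer);
+```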
+
### Credits
diff --git a/base/vulkanexamplebase.h b/base/vulkanexamplebase.h
index a30387e..fc59fa8 100644
--- a/base/vulkanexamplebase.h
+++ b/base/vulkanexamplebase.h
@@ -50,7 +50,7 @@ class VulkanExampleBase
bool enableVSync = false;
// Device features enabled by the example
// If not set, no additional features are enabled (may result in validation layer errors)
- VkPhysicalDeviceFeatures enabledFeatures = {};
+ VkPhysicalDeviceFeatures enabledFeatures;
// fps timer (one second interval)
float fpsTimer = 0.0f;
// Create application wide Vulkan instance
diff --git a/data/shaders/computeparticles/particle.comp b/data/shaders/computeparticles/particle.comp
index b7dc2f7..b95590e 100644
--- a/data/shaders/computeparticles/particle.comp
+++ b/data/shaders/computeparticles/particle.comp
@@ -41,12 +41,69 @@ layout (binding = 2) uniform UBO
int particleCount;
} ubo;
+
+layout(std140, binding = 3) buffer GridIdx
+{
+ uint gridIdx[ ];
+};
+
+vec2 center, separate, cohesion;   // accumulators: rule 1 center of mass, rule 2 separation, rule 3 neighbor velocities
+
+// Accumulate the influence of boid idx2 on boid idx1; returns 1 if idx2 is a rule-1 neighbor, 0 otherwise.
+float computeVelocityChangePair(uint idx1, uint idx2)
+{
+ Particle p1 = particlesA[idx1];
+ Particle p2 = particlesA[idx2];
+ float n = 0;
+
+ // distance between the two boids
+ vec2 delta = p2.pos - p1.pos;
+ float dist = length(delta);
+
+ // RULE 1: Move to center of mass
+ if (dist < ubo.rule1Distance) {
+ center += p2.pos.xy;
+ n++;
+ }
+
+ // RULE 2: Maintain minimum distance from neighbors
+ if (dist < ubo.rule2Distance) {
+ separate -= delta.xy;
+ }
+
+ // RULE 3: Align Velocities
+ if (dist < ubo.rule3Distance) {
+ cohesion += p2.vel.xy;
+ }
+
+ return n;
+}
+
+vec2 computeVelocityChange(uint iSelf) {
+
+ vec2 dv = vec2(0.0);
+ float nBoids = 0;
+
+ for (uint i = 0; i < ubo.particleCount; i++) {
+ if (i != iSelf) {
+ nBoids += computeVelocityChangePair(iSelf, i);
+ }
+ }
+
+ if (nBoids > 0) {
+ center /= nBoids;
+ dv = (center - particlesA[iSelf].pos) * ubo.rule1Scale + cohesion * ubo.rule3Scale + separate * ubo.rule2Scale;
+ }
+
+ return dv;
+}
+
void main()
{
- // LOOK: This is very similar to a CUDA kernel.
- // Right now, the compute shader only advects the particles with their
- // velocity and handles wrap-around.
- // TODO: implement flocking behavior.
+ // LOOK: This is very similar to a CUDA kernel.
+ // Right now, the compute shader only advects the particles with their
+ // velocity and handles wrap-around.
+ // TODO: implement flocking behavior.
// Current SSBO index
uint index = gl_GlobalInvocationID.x;
@@ -54,21 +111,26 @@ void main()
if (index >= ubo.particleCount)
return;
+ // reset params
+ center = vec2(0.0f, 0.0f);
+ separate = vec2(0.0f, 0.0f);
+ cohesion = vec2(0.0f, 0.0f);
+
// Read position and velocity
- vec2 vPos = particlesA[index].pos.xy;
- vec2 vVel = particlesA[index].vel.xy;
+ vec2 vPos = particlesA[index].pos.xy;
+ vec2 vVel = particlesA[index].vel.xy + computeVelocityChange(index);
- // clamp velocity for a more pleasing simulation.
- vVel = normalize(vVel) * clamp(length(vVel), 0.0, 0.1);
+ // clamp velocity for a more pleasing simulation.
+ vVel = normalize(vVel) * clamp(length(vVel), 0.0, 0.1);
- // kinematic update
- vPos += vVel * ubo.deltaT;
+ // kinematic update
+ vPos += vVel * ubo.deltaT;
// Wrap around boundary
- if (vPos.x < -1.0) vPos.x = 1.0;
- if (vPos.x > 1.0) vPos.x = -1.0;
- if (vPos.y < -1.0) vPos.y = 1.0;
- if (vPos.y > 1.0) vPos.y = -1.0;
+ if (vPos.x < -1.0) vPos.x = 1.0;
+ if (vPos.x > 1.0) vPos.x = -1.0;
+ if (vPos.y < -1.0) vPos.y = 1.0;
+ if (vPos.y > 1.0) vPos.y = -1.0;
particlesB[index].pos.xy = vPos;
diff --git a/data/shaders/computeparticles/particle.comp.spv b/data/shaders/computeparticles/particle.comp.spv
index 059ab59..1a0a05c 100644
Binary files a/data/shaders/computeparticles/particle.comp.spv and b/data/shaders/computeparticles/particle.comp.spv differ
diff --git a/images/running.gif b/images/running.gif
new file mode 100644
index 0000000..5ce3b9c
Binary files /dev/null and b/images/running.gif differ
diff --git a/vulkanBoids/vulkanBoids.cpp b/vulkanBoids/vulkanBoids.cpp
index 9b2f122..687a15e 100644
--- a/vulkanBoids/vulkanBoids.cpp
+++ b/vulkanBoids/vulkanBoids.cpp
@@ -33,11 +33,11 @@
// LOOK: constants for the boids algorithm. These will be passed to the GPU compute part of the assignment
// using a Uniform Buffer. These parameters should yield a stable and pleasing simulation for an
// implementation based off the code here: http://studio.sketchpad.cc/sp/pad/view/ro.9cbgCRcgbPOI6/rev.23
-#define RULE1DISTANCE 0.1f // cohesion
-#define RULE2DISTANCE 0.05f // separation
-#define RULE3DISTANCE 0.05f // alignment
-#define RULE1SCALE 0.02f
-#define RULE2SCALE 0.05f
+#define RULE1DISTANCE 0.08f // cohesion
+#define RULE2DISTANCE 0.03f // separation
+#define RULE3DISTANCE 0.08f // alignment
+#define RULE1SCALE 0.001f
+#define RULE2SCALE 0.01f
#define RULE3SCALE 0.01f
class VulkanExample : public VulkanExampleBase
@@ -73,6 +73,7 @@ class VulkanExample : public VulkanExampleBase
struct {
vk::Buffer storageBufferA; // (Shader) storage buffer object containing the particles
vk::Buffer storageBufferB; // (Shader) storage buffer object containing the particles
+ vk::Buffer gridIdxBuffer; // (Shader) contains grid index of current particle state
vk::Buffer uniformBuffer; // Uniform buffer object containing particle system parameters
VkQueue queue; // Separate queue for compute commands (queue family may differ from the one used for graphics)
@@ -121,6 +122,7 @@ class VulkanExample : public VulkanExampleBase
// Compute
compute.storageBufferA.destroy();
compute.storageBufferB.destroy();
+ compute.gridIdxBuffer.destroy();
compute.uniformBuffer.destroy();
vkDestroyPipelineLayout(device, compute.pipelineLayout, nullptr);
@@ -151,13 +153,13 @@ class VulkanExample : public VulkanExampleBase
std::mt19937 rGenerator;
std::uniform_real_distribution<float> rDistribution(-1.0f, 1.0f);
-
// Initial particle positions
std::vector<Particle> particleBuffer(PARTICLE_COUNT);
for (auto& particle : particleBuffer)
{
particle.pos = glm::vec2(rDistribution(rGenerator), rDistribution(rGenerator));
// TODO: add randomized velocities with a slight scale here, something like 0.1f.
+ particle.vel = glm::vec2(rDistribution(rGenerator), rDistribution(rGenerator)) * 0.1f;
}
VkDeviceSize storageBufferSize = particleBuffer.size() * sizeof(Particle);
@@ -244,7 +246,7 @@ class VulkanExample : public VulkanExampleBase
VERTEX_BUFFER_BIND_ID,
1,
VK_FORMAT_R32G32_SFLOAT,
- offsetof(Particle, pos)); // TODO: change this so that we can color the particles based on velocity.
+ offsetof(Particle, vel)); // changed from pos so that we can color the particles based on velocity.
// vertices.inputState encapsulates everything we need for these particular buffers to
// interface with the graphics pipeline.
@@ -540,13 +542,34 @@ class VulkanExample : public VulkanExampleBase
compute.descriptorSets[0],
VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
2,
- &compute.uniformBuffer.descriptor)
+ &compute.uniformBuffer.descriptor),
// TODO: write the second descriptorSet, using the top for reference.
// We want the descriptorSets to be used for flip-flopping:
// on one frame, we use one descriptorSet with the compute pass,
// on the next frame, we use the other.
// What has to be different about how the second descriptorSet is written here?
+ // Binding 0 : Particle position storage buffer
+ vkTools::initializers::writeDescriptorSet(
+ compute.descriptorSets[1], // LOOK: which descriptor set to write to?
+ VK_DESCRIPTOR_TYPE_STORAGE_BUFFER,
+ 0, // LOOK: which binding in the descriptor set Layout?
+ &compute.storageBufferB.descriptor), // LOOK: which SSBO?
+
+ // Binding 1 : Particle position storage buffer
+ vkTools::initializers::writeDescriptorSet(
+ compute.descriptorSets[1],
+ VK_DESCRIPTOR_TYPE_STORAGE_BUFFER,
+ 1,
+ &compute.storageBufferA.descriptor),
+
+ // Binding 2 : Uniform buffer
+ vkTools::initializers::writeDescriptorSet(
+ compute.descriptorSets[1],
+ VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER,
+ 2,
+ &compute.uniformBuffer.descriptor)
};
vkUpdateDescriptorSets(device, static_cast<uint32_t>(computeWriteDescriptorSets.size()), computeWriteDescriptorSets.data(), 0, NULL);
@@ -590,6 +613,7 @@ class VulkanExample : public VulkanExampleBase
// We also want to flip what SSBO we draw with in the next
// pass through the graphics pipeline.
// Feel free to use std::swap here. You should need it twice.
+ std::swap(compute.descriptorSets[0], compute.descriptorSets[1]);
}
// Record command buffers for drawing using the graphics pipeline