We have drawn triangles. But triangles are hardly 3D objects. The simplest truly 3D thing might be a cube. (Okay, a tetrahedron may be simpler in some sense. But let’s not go for tetrahedra.) Actually, I’m looking forward to spheres even more.
Drawing a box should not be too difficult. We know how to add additional points to our vertex buffer, so we can easily put in a few more triangles and be done.
I’d like to use this as an opportunity to begin organizing how cubes or, more generally, 3D models are represented in our code. (A bit less graphics and Vulkan, this time.)
And I certainly do not want to fill in vertex buffers by hand every time I want a new cube in the scene.
When I think of models, I imagine a Something that can itself be somewhat complex (definitely more than a box), of which there are possibly very many (think trees or the basic units of an RTS army rather than unique monsters), and not all of which are visible at the same time. We don’t want to waste time drawing what’s far behind ourselves/the camera; or better: we not only want to “not draw” it, we don’t even want to tell the GPU that it’s there. (It is probably too early to think about this, with as few models as we actually have, but hey, it’s not a problem, is it?)
The separation into vertex and instance data (like before; we had [f32;4] positions per instance and [f32;5] for size and colour per vertex) already does a lot for this. We only have to split the instances into a visible and an invisible part.
Perhaps this way?
struct Model {
vertexdata: Vec<[f32; 5]>,
visible_instances: Vec<[f32; 4]>,
invisible_instances: Vec<[f32; 4]>,
vertexbuffer: Buffer,
instancebuffer: Buffer,
}
(I have also included the buffers, which are certainly closely connected to the models.)
Before we go on: I am not very confident that we will keep those [f32;4] and [f32;5]. In fact, the variable for point size is already rather pointless. Let us make this struct more general.
struct Model<V, I> {
vertexdata: Vec<V>,
visible_instances: Vec<I>,
invisible_instances: Vec<I>,
vertexbuffer: Buffer,
instancebuffer: Buffer,
}
This structure makes it difficult to reference the instances from outside. And at some point we probably want to be able to do this (“This unit has died, we must remove its model.”). Using labels like “visible, number 5” breaks as soon as we switch “visible, number 3” to “invisible” and the elements of visible_instances are moved.
New idea: We save some external “handles” that we hand out and keep track of, and that make referencing possible. Then we can also keep visible and invisible instances in the same Vec. (Say, the visible ones first, the invisible ones after.)
I call “handle” the externally accessible number and “index” the number needed internally to access the instances vector; the handle_to_index map in the struct below translates handles into indices.
struct Model<V, I> {
vertexdata: Vec<V>,
handle_to_index: std::collections::HashMap<usize, usize>,
instances: Vec<I>,
first_invisible: usize,
vertexbuffer: Buffer,
instancebuffer: Buffer,
}
Plus some keeping track of handles to use next and of those that have been used already:
struct Model<V, I> {
vertexdata: Vec<V>,
handle_to_index: std::collections::HashMap<usize, usize>,
handles: Vec<usize>,
instances: Vec<I>,
first_invisible: usize,
next_handle: usize,
vertexbuffer: Buffer,
instancebuffer: Buffer,
}
We access the instances by their handles. This indirection comes with some overhead when accessing single elements, but having all (relevant) data right next to each other will be worth it (I hope; I haven’t profiled).
So, accessing elements:
impl<V, I> Model<V, I> {
fn get(&self, handle: usize) -> Option<&I> {
if let Some(&index) = self.handle_to_index.get(&handle) {
self.instances.get(index)
} else {
None
}
}
fn get_mut(&mut self, handle: usize) -> Option<&mut I> {
if let Some(&index) = self.handle_to_index.get(&handle) {
self.instances.get_mut(index)
} else {
None
}
}
}
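For example, moving an instance around could look like this (a small sketch; it assumes the instance type [f32; 6] we will settle on later in this chapter, and a handle we received when the instance was added):
// Sketch: mutate an instance through its handle (assumes I = [f32; 6]).
fn nudge(model: &mut Model<[f32; 3], [f32; 6]>, handle: usize) {
    if let Some(instance) = model.get_mut(handle) {
        instance[1] -= 0.1; // e.g. shift the y-component of the position offset
    }
}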
Let’s make it possible to swap the positions of two elements in the vector (for example, for switching something between the “visible” and the “invisible” part). This requires some bookkeeping. (We’ll introduce two functions for this purpose, one taking handles, one taking indices; the latter is more of an internal helper function.)
fn swap_by_handle(&mut self, handle1: usize, handle2: usize) -> Result<(), InvalidHandle> {
if handle1 == handle2 {
return Ok(());
}
if let (Some(&index1), Some(&index2)) = (
self.handle_to_index.get(&handle1),
self.handle_to_index.get(&handle2),
) {
self.handles.swap(index1, index2);
self.instances.swap(index1, index2);
self.handle_to_index.insert(handle1, index2);
self.handle_to_index.insert(handle2, index1);
Ok(())
} else {
Err(InvalidHandle)
}
}
fn swap_by_index(&mut self, index1: usize, index2: usize) {
if index1 == index2 {
return;
}
let handle1 = self.handles[index1];
let handle2 = self.handles[index2];
self.handles.swap(index1, index2);
self.instances.swap(index1, index2);
self.handle_to_index.insert(handle1, index2);
self.handle_to_index.insert(handle2, index1);
}
Things that could go wrong: Requesting elements for handles that don’t exist. I’ve chosen to represent this by an InvalidHandle
error.
#[derive(Debug, Clone)]
struct InvalidHandle;
impl std::fmt::Display for InvalidHandle {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(f, "invalid handle")
}
}
impl std::error::Error for InvalidHandle {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
None
}
}
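Since InvalidHandle implements std::error::Error, it can be propagated with ? like any other error, also into a Box&lt;dyn std::error::Error&gt;. A small sketch (the function name and the handle values are made up; the concrete type parameters are the ones we will settle on later):
// Sketch: propagating InvalidHandle with `?` (handles 0 and 1 are arbitrary).
fn rearrange(model: &mut Model<[f32; 3], [f32; 6]>) -> Result<(), Box<dyn std::error::Error>> {
    model.swap_by_handle(0, 1)?;
    Ok(())
}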
Visibility is determined by checking whether the element lies in the first part of the vector. Making something visible or invisible means moving it into the corresponding part of the vector.
fn is_visible(&self, handle: usize) -> Result<bool, InvalidHandle> {
if let Some(index) = self.handle_to_index.get(&handle) {
Ok(index < &self.first_invisible)
} else {
Err(InvalidHandle)
}
}
fn make_visible(&mut self, handle: usize) -> Result<(), InvalidHandle> {
//if already visible: do nothing
if let Some(&index) = self.handle_to_index.get(&handle) {
if index < self.first_invisible {
return Ok(());
}
//else: move to position first_invisible and increase value of first_invisible
self.swap_by_index(index, self.first_invisible);
self.first_invisible += 1;
Ok(())
} else {
Err(InvalidHandle)
}
}
fn make_invisible(&mut self, handle: usize) -> Result<(), InvalidHandle> {
//if already invisible: do nothing
if let Some(&index) = self.handle_to_index.get(&handle) {
if index >= self.first_invisible {
return Ok(());
}
//else: move to position before first_invisible and decrease value of first_invisible
self.swap_by_index(index, self.first_invisible - 1);
self.first_invisible -= 1;
Ok(())
} else {
Err(InvalidHandle)
}
}
Insertion (which returns the new handle) and removal (returning the element) have some bookkeeping to do.
fn insert(&mut self, element: I) -> usize {
let handle = self.next_handle;
self.next_handle += 1;
let index = self.instances.len();
self.instances.push(element);
self.handles.push(handle);
self.handle_to_index.insert(handle, index);
handle
}
fn insert_visibly(&mut self, element: I) -> usize {
let new_handle = self.insert(element);
self.make_visible(new_handle).ok();//can't go wrong, see previous line
new_handle
}
fn remove(&mut self, handle: usize) -> Result<I, InvalidHandle> {
if let Some(&index) = self.handle_to_index.get(&handle) {
//if the element is visible: first move it to the end of the visible part
let index = if index < self.first_invisible {
self.swap_by_index(index, self.first_invisible - 1);
self.first_invisible -= 1;
self.first_invisible
} else {
index
};
//then move it to the very end of the vector (so that we can pop it off)
self.swap_by_index(index, self.instances.len() - 1);
self.handles.pop();
self.handle_to_index.remove(&handle);
//must be Some(), otherwise we couldn't have found an index
Ok(self.instances.pop().unwrap())
} else {
Err(InvalidHandle)
}
}
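A quick sketch of how these functions play together (bookkeeping_example is made up, and it again assumes the instance type [f32; 6] from later in this chapter; the values are arbitrary):
// Sketch: insertion, visibility and removal, all by handle.
fn bookkeeping_example(model: &mut Model<[f32; 3], [f32; 6]>) -> Result<(), InvalidHandle> {
    let shown = model.insert_visibly([0.0; 6]);
    let hidden = model.insert([0.5; 6]); // inserted at the end, i.e. invisible
    assert!(model.is_visible(shown)?);
    assert!(!model.is_visible(hidden)?);
    model.make_visible(hidden)?;
    let _removed = model.remove(shown)?; // hands the instance data back
    Ok(())
}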
Okay. The buffers next. Maybe it’s a good idea not to have them present from the beginning, but to create them from the data on demand. Let them be Options.
struct Model<V, I> {
vertexdata: Vec<V>,
handle_to_index: std::collections::HashMap<usize, usize>,
handles: Vec<usize>,
instances: Vec<I>,
first_invisible: usize,
next_handle: usize,
vertexbuffer: Option<Buffer>,
instancebuffer: Option<Buffer>,
}
and the creation of a vertexbuffer (no more counting bytes by hand, yay!):
fn update_vertexbuffer(
&mut self,
allocator: &vk_mem::Allocator,
) -> Result<(), vk_mem::error::Error> {
if let Some(buffer) = &self.vertexbuffer {
buffer.fill(allocator, &self.vertexdata)?;
Ok(())
} else {
let bytes = (self.vertexdata.len() * std::mem::size_of::<V>()) as u64;
let buffer = Buffer::new(
&allocator,
bytes,
vk::BufferUsageFlags::VERTEX_BUFFER,
vk_mem::MemoryUsage::CpuToGpu,
)?;
buffer.fill(allocator, &self.vertexdata)?;
self.vertexbuffer = Some(buffer);
Ok(())
}
}
If there is no buffer yet, we create one of the right size. If the buffer already exists, we fill in the new data.
Speaking of “filling in data”: This is the first time we stuff data into a buffer that has not been created immediately previously. Let us have
another look at Buffer::fill
:
fn fill<T: Sized>(
&self,
allocator: &vk_mem::Allocator,
data: &[T],
) -> Result<(), vk_mem::error::Error> {
let data_ptr = allocator.map_memory(&self.allocation)? as *mut T;
unsafe { data_ptr.copy_from_nonoverlapping(data.as_ptr(), data.len()) };
allocator.unmap_memory(&self.allocation)?;
Ok(())
}
Do you see the same problem I see? There is no check at all whether the data we are sending is of the right size or too large. And writing more data than there is space for? That sounds like a recipe for trouble. Let’s not do that.
First question: How do we get the current size of the buffer? Answer: We know it at creation of the buffer, so let’s just save it at that time as another member of the Buffer struct. (Actually, there is also allocation_info.get_size(), which gives the size of the underlying memory allocation. But that may differ from the size we requested in the BufferCreateInfo, and I’m not entirely sure what effects that difference could have, so let’s play it safe.)
Second question: How do we get the size of data in bytes? Answer: We can compute it easily (actually, I used essentially the same computation in the function creating the vertexbuffer, too):
let bytes_to_write = data.len() * std::mem::size_of::<T>();
Third question: What do we do if there’s too much new data? Ignore the additional data? (Possible, but better not.) Let us instead create a new buffer which is large enough. And in turn destroy the old one. We wouldn’t want to neglect our cleaning duties, would we? (Even if I expect that the following change will probably lead to problems down the line.)
fn fill<T: Sized>(
&mut self,
allocator: &vk_mem::Allocator,
data: &[T],
) -> Result<(), vk_mem::error::Error> {
let bytes_to_write = (data.len() * std::mem::size_of::<T>()) as u64;
if bytes_to_write > self.size_in_bytes {
allocator.destroy_buffer(self.buffer, &self.allocation)?;
let newbuffer = Buffer::new(
allocator,
bytes_to_write,
self.buffer_usage,
self.memory_usage,
)?;
*self = newbuffer;
}
let data_ptr = allocator.map_memory(&self.allocation)? as *mut T;
unsafe { data_ptr.copy_from_nonoverlapping(data.as_ptr(), data.len()) };
allocator.unmap_memory(&self.allocation)?;
Ok(())
}
Creating a new buffer means we have to know about buffer and memory usage; I’ll add them as fields to Buffer. The changes in this function also mean that it now takes &mut self instead of &self. (The compiler tells us where to change the other code as a result.)
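For reference, the Buffer struct with its new fields then looks like this (exactly as it appears in the full code listing at the end of this chapter):
struct Buffer {
    buffer: vk::Buffer,
    allocation: vk_mem::Allocation,
    allocation_info: vk_mem::AllocationInfo,
    size_in_bytes: u64,
    buffer_usage: vk::BufferUsageFlags,
    memory_usage: vk_mem::MemoryUsage,
}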
Okay, back to the Model. Let us also add an update_instancebuffer function.
fn update_instancebuffer(
&mut self,
allocator: &vk_mem::Allocator,
) -> Result<(), vk_mem::error::Error> {
if let Some(buffer) = &mut self.instancebuffer {
buffer.fill(allocator, &self.instances[0..self.first_invisible])?;
Ok(())
} else {
let bytes = (self.first_invisible * std::mem::size_of::<I>()) as u64;
let mut buffer = Buffer::new(
&allocator,
bytes,
vk::BufferUsageFlags::VERTEX_BUFFER,
vk_mem::MemoryUsage::CpuToGpu,
)?;
buffer.fill(allocator, &self.instances[0..self.first_invisible])?;
self.instancebuffer = Some(buffer);
Ok(())
}
}
We shall also turn the use of these buffers, that is, the drawing, into a function of Model (which then, of course, needs the logical device and a commandbuffer):
fn draw(&self, logical_device: &ash::Device, commandbuffer: vk::CommandBuffer) {
if let Some(vertexbuffer) = &self.vertexbuffer {
if let Some(instancebuffer) = &self.instancebuffer {
if self.first_invisible > 0 {
unsafe {
logical_device.cmd_bind_vertex_buffers(
commandbuffer,
0,
&[vertexbuffer.buffer],
&[0],
);
logical_device.cmd_bind_vertex_buffers(
commandbuffer,
1,
&[instancebuffer.buffer],
&[0],
);
logical_device.cmd_draw(
commandbuffer,
self.vertexdata.len() as u32,
self.first_invisible as u32,
0,
0,
);
}
}
}
}
}
In fill_commandbuffers, we can replace the bindings of vertex buffers and cmd_draw by calls to this function. Let us assume we’re passing in a slice of Models:
for m in models {
m.draw(logical_device, commandbuffer);
}
We should also decide what data to use for the vertices and instances.
Per vertex: position, definitely.
Per instance: a global position offset. Also the colour? (That would mean single-coloured cubes; sounds okay to me.) The point size? Neither. Let’s remove it.
How do we store the position? Still as [f32; 4]? I mean, we’re not using the fourth component for anything. Let’s go with [f32; 3] and fill in an automatic 1 as the last component (in the shader).
For the colour, I think, we can also ignore the last component. We will have to talk about transparency and alpha on a separate occasion.
In the shader:
#version 450
layout (location=0) in vec3 position;
layout (location=1) in vec3 position_offset;
layout (location=2) in vec3 colour;
layout (location=0) out vec4 colourdata_for_the_fragmentshader;
void main() {
gl_Position = vec4(position+position_offset,1.0);
colourdata_for_the_fragmentshader=vec4(colour,1.0);
}
(Also note how we are using vec4
to make a vector with four components out of a vec3
and another float
, and how we can already perform
computations with the vec3
s before we do so.)
In terms of attribute and vertex binding descriptions that makes the following:
let vertex_attrib_descs = [
vk::VertexInputAttributeDescription {
binding: 0,
location: 0,
offset: 0,
format: vk::Format::R32G32B32_SFLOAT,
},
vk::VertexInputAttributeDescription {
binding: 1,
location: 1,
offset: 0,
format: vk::Format::R32G32B32_SFLOAT,
},
vk::VertexInputAttributeDescription {
binding: 1,
location: 2,
offset: 12,
format: vk::Format::R32G32B32_SFLOAT,
},
];
let vertex_binding_descs = [
vk::VertexInputBindingDescription {
binding: 0,
stride: 12,
input_rate: vk::VertexInputRate::VERTEX,
},
vk::VertexInputBindingDescription {
binding: 1,
stride: 24,
input_rate: vk::VertexInputRate::INSTANCE,
},
];
And for the types we have left general above: V = [f32; 3], I = [f32; 6].
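The strides and offsets above come directly from these sizes: a [f32; 3] vertex occupies 12 bytes, a [f32; 6] instance occupies 24 bytes, and the colour starts after the three position floats, at byte offset 12. If you want, a quick sanity check (just a sketch):
// Sketch: the numbers used in the binding and attribute descriptions.
assert_eq!(std::mem::size_of::<[f32; 3]>(), 12); // per-vertex stride
assert_eq!(std::mem::size_of::<[f32; 6]>(), 24); // per-instance stride
assert_eq!(3 * std::mem::size_of::<f32>(), 12); // offset of the colour within the instance data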
Let us create a cube:
impl Model<[f32; 3], [f32; 6]> {
fn cube() -> Model<[f32; 3], [f32; 6]> {
Model {
vertexdata: vec![],
handle_to_index: std::collections::HashMap::new(),
handles: Vec::new(),
instances: Vec::new(),
first_invisible: 0,
next_handle: 0,
vertexbuffer: None,
instancebuffer: None,
}
}
}
This is, obviously, still lacking the points. Let’s include them. First define the points:
let lbf = [-0.1,0.1,0.0]; //lbf: left-bottom-front
let lbb = [-0.1,0.1,0.1];
let ltf = [-0.1,-0.1,0.0];
let ltb = [-0.1,-0.1,0.1];
let rbf = [0.1,0.1,0.0];
let rbb = [0.1,0.1,0.1];
let rtf = [0.1,-0.1,0.0];
let rtb = [0.1,-0.1,0.1];
Then we insert them into vertexdata. The bottom face of the cube, for example, consists of the two triangles “lbf-lbb-rbb” and “lbf-rbb-rbf”; therefore, the vertexdata line will begin with vertexdata: vec![lbf, lbb, rbb, lbf, rbb, rbf.
impl Model<[f32; 3], [f32; 6]> {
fn cube() -> Model<[f32; 3], [f32; 6]> {
let lbf = [-0.1, 0.1, 0.0]; //lbf: left-bottom-front
let lbb = [-0.1, 0.1, 0.1];
let ltf = [-0.1, -0.1, 0.0];
let ltb = [-0.1, -0.1, 0.1];
let rbf = [0.1, 0.1, 0.0];
let rbb = [0.1, 0.1, 0.1];
let rtf = [0.1, -0.1, 0.0];
let rtb = [0.1, -0.1, 0.1];
Model {
vertexdata: vec![
lbf, lbb, rbb, lbf, rbb, rbf, //bottom
ltf, rtb, ltb, ltf, rtf, rtb, //top
lbf, rtf, ltf, lbf, rbf, rtf, //front
lbb, ltb, rtb, lbb, rtb, rbb, //back
lbf, ltf, lbb, lbb, ltf, ltb, //left
rbf, rbb, rtf, rbb, rtb, rtf, //right
],
handle_to_index: std::collections::HashMap::new(),
handles: Vec::new(),
instances: Vec::new(),
first_invisible: 0,
next_handle: 0,
vertexbuffer: None,
instancebuffer: None,
}
}
}
A box!
Let us actually create one (a red one at the origin):
let mut cube = Model::cube();
cube.insert_visibly([0.0, 0.0, 0.0, 1.0, 0.0, 0.0]);
cube.update_vertexbuffer(&allocator)?;
cube.update_instancebuffer(&allocator)?;
let models = vec![cube];
We can replace the buffers in Aetna
by this Vec
of models. The drop
function should also be adjusted:
for m in &self.models {
if let Some(vb) = &m.vertexbuffer {
self.allocator
.destroy_buffer(vb.buffer, &vb.allocation)
.expect("problem with buffer destruction");
}
if let Some(ib) = &m.instancebuffer {
self.allocator
.destroy_buffer(ib.buffer, &ib.allocation)
.expect("problem with buffer destruction");
}
}
Those were a lot of code changes; better to include all of the source code again. Shaders first:
#version 450
layout (location=0) in vec3 position;
layout (location=1) in vec3 position_offset;
layout (location=2) in vec3 colour;
layout (location=0) out vec4 colourdata_for_the_fragmentshader;
void main() {
gl_Position = vec4(position+position_offset,1.0);
colourdata_for_the_fragmentshader=vec4(colour,1.0);
}
and
#version 450
layout (location=0) out vec4 theColour;
layout (location=0) in vec4 data_from_the_vertexshader;
void main(){
theColour= data_from_the_vertexshader;
}
and main.rs
:
use ash::version::DeviceV1_0;
use ash::version::EntryV1_0;
use ash::version::InstanceV1_0;
use ash::vk;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let eventloop = winit::event_loop::EventLoop::new();
let window = winit::window::Window::new(&eventloop)?;
let mut aetna = Aetna::init(window)?;
use winit::event::{Event, WindowEvent};
eventloop.run(move |event, _, controlflow| match event {
Event::WindowEvent {
event: WindowEvent::CloseRequested,
..
} => {
*controlflow = winit::event_loop::ControlFlow::Exit;
}
Event::MainEventsCleared => {
// doing the work here (later)
aetna.window.request_redraw();
}
Event::RedrawRequested(_) => {
let (image_index, _) = unsafe {
aetna
.swapchain
.swapchain_loader
.acquire_next_image(
aetna.swapchain.swapchain,
std::u64::MAX,
aetna.swapchain.image_available[aetna.swapchain.current_image],
vk::Fence::null(),
)
.expect("image acquisition trouble")
};
unsafe {
aetna
.device
.wait_for_fences(
&[aetna.swapchain.may_begin_drawing[aetna.swapchain.current_image]],
true,
std::u64::MAX,
)
.expect("fence-waiting");
aetna
.device
.reset_fences(&[
aetna.swapchain.may_begin_drawing[aetna.swapchain.current_image]
])
.expect("resetting fences");
}
let semaphores_available =
[aetna.swapchain.image_available[aetna.swapchain.current_image]];
let waiting_stages = [vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT];
let semaphores_finished =
[aetna.swapchain.rendering_finished[aetna.swapchain.current_image]];
let commandbuffers = [aetna.commandbuffers[image_index as usize]];
let submit_info = [vk::SubmitInfo::builder()
.wait_semaphores(&semaphores_available)
.wait_dst_stage_mask(&waiting_stages)
.command_buffers(&commandbuffers)
.signal_semaphores(&semaphores_finished)
.build()];
unsafe {
aetna
.device
.queue_submit(
aetna.queues.graphics_queue,
&submit_info,
aetna.swapchain.may_begin_drawing[aetna.swapchain.current_image],
)
.expect("queue submission");
};
let swapchains = [aetna.swapchain.swapchain];
let indices = [image_index];
let present_info = vk::PresentInfoKHR::builder()
.wait_semaphores(&semaphores_finished)
.swapchains(&swapchains)
.image_indices(&indices);
unsafe {
aetna
.swapchain
.swapchain_loader
.queue_present(aetna.queues.graphics_queue, &present_info)
.expect("queue presentation");
};
aetna.swapchain.current_image =
(aetna.swapchain.current_image + 1) % aetna.swapchain.amount_of_images as usize;
}
_ => {}
});
}
unsafe extern "system" fn vulkan_debug_utils_callback(
message_severity: vk::DebugUtilsMessageSeverityFlagsEXT,
message_type: vk::DebugUtilsMessageTypeFlagsEXT,
p_callback_data: *const vk::DebugUtilsMessengerCallbackDataEXT,
_p_user_data: *mut std::ffi::c_void,
) -> vk::Bool32 {
let message = std::ffi::CStr::from_ptr((*p_callback_data).p_message);
let severity = format!("{:?}", message_severity).to_lowercase();
let ty = format!("{:?}", message_type).to_lowercase();
println!("[Debug][{}][{}] {:?}", severity, ty, message);
vk::FALSE
}
fn init_instance(
entry: &ash::Entry,
layer_names: &[&str],
) -> Result<ash::Instance, ash::InstanceError> {
let enginename = std::ffi::CString::new("UnknownGameEngine").unwrap();
let appname = std::ffi::CString::new("The Black Window").unwrap();
let app_info = vk::ApplicationInfo::builder()
.application_name(&appname)
.application_version(vk::make_version(0, 0, 1))
.engine_name(&enginename)
.engine_version(vk::make_version(0, 42, 0))
.api_version(vk::make_version(1, 0, 106));
let layer_names_c: Vec<std::ffi::CString> = layer_names
.iter()
.map(|&ln| std::ffi::CString::new(ln).unwrap())
.collect();
let layer_name_pointers: Vec<*const i8> = layer_names_c
.iter()
.map(|layer_name| layer_name.as_ptr())
.collect();
let extension_name_pointers: Vec<*const i8> = vec![
ash::extensions::ext::DebugUtils::name().as_ptr(),
ash::extensions::khr::Surface::name().as_ptr(),
ash::extensions::khr::XlibSurface::name().as_ptr(),
];
let mut debugcreateinfo = vk::DebugUtilsMessengerCreateInfoEXT::builder()
.message_severity(
vk::DebugUtilsMessageSeverityFlagsEXT::WARNING
| vk::DebugUtilsMessageSeverityFlagsEXT::VERBOSE
| vk::DebugUtilsMessageSeverityFlagsEXT::ERROR,
)
.message_type(
vk::DebugUtilsMessageTypeFlagsEXT::GENERAL
| vk::DebugUtilsMessageTypeFlagsEXT::PERFORMANCE
| vk::DebugUtilsMessageTypeFlagsEXT::VALIDATION,
)
.pfn_user_callback(Some(vulkan_debug_utils_callback));
let instance_create_info = vk::InstanceCreateInfo::builder()
.push_next(&mut debugcreateinfo)
.application_info(&app_info)
.enabled_layer_names(&layer_name_pointers)
.enabled_extension_names(&extension_name_pointers);
unsafe { entry.create_instance(&instance_create_info, None) }
}
struct DebugDongXi {
loader: ash::extensions::ext::DebugUtils,
messenger: vk::DebugUtilsMessengerEXT,
}
impl DebugDongXi {
fn init(entry: &ash::Entry, instance: &ash::Instance) -> Result<DebugDongXi, vk::Result> {
let debugcreateinfo = vk::DebugUtilsMessengerCreateInfoEXT::builder()
.message_severity(
vk::DebugUtilsMessageSeverityFlagsEXT::WARNING
| vk::DebugUtilsMessageSeverityFlagsEXT::VERBOSE
| vk::DebugUtilsMessageSeverityFlagsEXT::INFO
| vk::DebugUtilsMessageSeverityFlagsEXT::ERROR,
)
.message_type(
vk::DebugUtilsMessageTypeFlagsEXT::GENERAL
| vk::DebugUtilsMessageTypeFlagsEXT::PERFORMANCE
| vk::DebugUtilsMessageTypeFlagsEXT::VALIDATION,
)
.pfn_user_callback(Some(vulkan_debug_utils_callback));
let loader = ash::extensions::ext::DebugUtils::new(entry, instance);
let messenger = unsafe { loader.create_debug_utils_messenger(&debugcreateinfo, None)? };
Ok(DebugDongXi { loader, messenger })
}
}
impl Drop for DebugDongXi {
fn drop(&mut self) {
unsafe {
self.loader
.destroy_debug_utils_messenger(self.messenger, None)
};
}
}
struct SurfaceDongXi {
xlib_surface_loader: ash::extensions::khr::XlibSurface,
surface: vk::SurfaceKHR,
surface_loader: ash::extensions::khr::Surface,
}
impl SurfaceDongXi {
fn init(
window: &winit::window::Window,
entry: &ash::Entry,
instance: &ash::Instance,
) -> Result<SurfaceDongXi, vk::Result> {
use winit::platform::unix::WindowExtUnix;
let x11_display = window.xlib_display().unwrap();
let x11_window = window.xlib_window().unwrap();
let x11_create_info = vk::XlibSurfaceCreateInfoKHR::builder()
.window(x11_window)
.dpy(x11_display as *mut vk::Display);
let xlib_surface_loader = ash::extensions::khr::XlibSurface::new(entry, instance);
let surface = unsafe { xlib_surface_loader.create_xlib_surface(&x11_create_info, None) }?;
let surface_loader = ash::extensions::khr::Surface::new(entry, instance);
Ok(SurfaceDongXi {
xlib_surface_loader,
surface,
surface_loader,
})
}
fn get_capabilities(
&self,
physical_device: vk::PhysicalDevice,
) -> Result<vk::SurfaceCapabilitiesKHR, vk::Result> {
unsafe {
self.surface_loader
.get_physical_device_surface_capabilities(physical_device, self.surface)
}
}
fn get_present_modes(
&self,
physical_device: vk::PhysicalDevice,
) -> Result<Vec<vk::PresentModeKHR>, vk::Result> {
unsafe {
self.surface_loader
.get_physical_device_surface_present_modes(physical_device, self.surface)
}
}
fn get_formats(
&self,
physical_device: vk::PhysicalDevice,
) -> Result<Vec<vk::SurfaceFormatKHR>, vk::Result> {
unsafe {
self.surface_loader
.get_physical_device_surface_formats(physical_device, self.surface)
}
}
fn get_physical_device_surface_support(
&self,
physical_device: vk::PhysicalDevice,
queuefamilyindex: usize,
) -> Result<bool, vk::Result> {
unsafe {
self.surface_loader.get_physical_device_surface_support(
physical_device,
queuefamilyindex as u32,
self.surface,
)
}
}
}
impl Drop for SurfaceDongXi {
fn drop(&mut self) {
unsafe {
self.surface_loader.destroy_surface(self.surface, None);
}
}
}
fn init_physical_device_and_properties(
instance: &ash::Instance,
) -> Result<
(
vk::PhysicalDevice,
vk::PhysicalDeviceProperties,
vk::PhysicalDeviceFeatures,
),
vk::Result,
> {
let phys_devs = unsafe { instance.enumerate_physical_devices()? };
let mut chosen = None;
for p in phys_devs {
let properties = unsafe { instance.get_physical_device_properties(p) };
let features = unsafe { instance.get_physical_device_features(p) };
if properties.device_type == vk::PhysicalDeviceType::DISCRETE_GPU {
chosen = Some((p, properties, features));
}
}
Ok(chosen.unwrap())
}
struct QueueFamilies {
graphics_q_index: Option<u32>,
transfer_q_index: Option<u32>,
}
impl QueueFamilies {
fn init(
instance: &ash::Instance,
physical_device: vk::PhysicalDevice,
surfaces: &SurfaceDongXi,
) -> Result<QueueFamilies, vk::Result> {
let queuefamilyproperties =
unsafe { instance.get_physical_device_queue_family_properties(physical_device) };
let mut found_graphics_q_index = None;
let mut found_transfer_q_index = None;
for (index, qfam) in queuefamilyproperties.iter().enumerate() {
if qfam.queue_count > 0
&& qfam.queue_flags.contains(vk::QueueFlags::GRAPHICS)
&& surfaces.get_physical_device_surface_support(physical_device, index)?
{
found_graphics_q_index = Some(index as u32);
}
if qfam.queue_count > 0 && qfam.queue_flags.contains(vk::QueueFlags::TRANSFER) {
if found_transfer_q_index.is_none()
|| !qfam.queue_flags.contains(vk::QueueFlags::GRAPHICS)
{
found_transfer_q_index = Some(index as u32);
}
}
}
Ok(QueueFamilies {
graphics_q_index: found_graphics_q_index,
transfer_q_index: found_transfer_q_index,
})
}
}
struct Queues {
graphics_queue: vk::Queue,
transfer_queue: vk::Queue,
}
fn init_device_and_queues(
instance: &ash::Instance,
physical_device: vk::PhysicalDevice,
queue_families: &QueueFamilies,
layer_names: &[&str],
) -> Result<(ash::Device, Queues), vk::Result> {
let layer_names_c: Vec<std::ffi::CString> = layer_names
.iter()
.map(|&ln| std::ffi::CString::new(ln).unwrap())
.collect();
let layer_name_pointers: Vec<*const i8> = layer_names_c
.iter()
.map(|layer_name| layer_name.as_ptr())
.collect();
let priorities = [1.0f32];
let queue_infos = [
vk::DeviceQueueCreateInfo::builder()
.queue_family_index(queue_families.graphics_q_index.unwrap())
.queue_priorities(&priorities)
.build(),
vk::DeviceQueueCreateInfo::builder()
.queue_family_index(queue_families.transfer_q_index.unwrap())
.queue_priorities(&priorities)
.build(),
];
let device_extension_name_pointers: Vec<*const i8> =
vec![ash::extensions::khr::Swapchain::name().as_ptr()];
let features = vk::PhysicalDeviceFeatures::builder().fill_mode_non_solid(true);
let device_create_info = vk::DeviceCreateInfo::builder()
.queue_create_infos(&queue_infos)
.enabled_extension_names(&device_extension_name_pointers)
.enabled_layer_names(&layer_name_pointers)
.enabled_features(&features);
let logical_device =
unsafe { instance.create_device(physical_device, &device_create_info, None)? };
let graphics_queue =
unsafe { logical_device.get_device_queue(queue_families.graphics_q_index.unwrap(), 0) };
let transfer_queue =
unsafe { logical_device.get_device_queue(queue_families.transfer_q_index.unwrap(), 0) };
Ok((
logical_device,
Queues {
graphics_queue,
transfer_queue,
},
))
}
struct SwapchainDongXi {
swapchain_loader: ash::extensions::khr::Swapchain,
swapchain: vk::SwapchainKHR,
images: Vec<vk::Image>,
imageviews: Vec<vk::ImageView>,
framebuffers: Vec<vk::Framebuffer>,
surface_format: vk::SurfaceFormatKHR,
extent: vk::Extent2D,
image_available: Vec<vk::Semaphore>,
rendering_finished: Vec<vk::Semaphore>,
may_begin_drawing: Vec<vk::Fence>,
amount_of_images: u32,
current_image: usize,
}
impl SwapchainDongXi {
fn init(
instance: &ash::Instance,
physical_device: vk::PhysicalDevice,
logical_device: &ash::Device,
surfaces: &SurfaceDongXi,
queue_families: &QueueFamilies,
queues: &Queues,
) -> Result<SwapchainDongXi, vk::Result> {
let surface_capabilities = surfaces.get_capabilities(physical_device)?;
let extent = surface_capabilities.current_extent;
let surface_present_modes = surfaces.get_present_modes(physical_device)?;
let surface_format = *surfaces.get_formats(physical_device)?.first().unwrap();
let queuefamilies = [queue_families.graphics_q_index.unwrap()];
let swapchain_create_info = vk::SwapchainCreateInfoKHR::builder()
.surface(surfaces.surface)
.min_image_count(
3.max(surface_capabilities.min_image_count)
.min(surface_capabilities.max_image_count),
)
.image_format(surface_format.format)
.image_color_space(surface_format.color_space)
.image_extent(extent)
.image_array_layers(1)
.image_usage(vk::ImageUsageFlags::COLOR_ATTACHMENT)
.image_sharing_mode(vk::SharingMode::EXCLUSIVE)
.queue_family_indices(&queuefamilies)
.pre_transform(surface_capabilities.current_transform)
.composite_alpha(vk::CompositeAlphaFlagsKHR::OPAQUE)
.present_mode(vk::PresentModeKHR::FIFO);
let swapchain_loader = ash::extensions::khr::Swapchain::new(instance, logical_device);
let swapchain = unsafe { swapchain_loader.create_swapchain(&swapchain_create_info, None)? };
let swapchain_images = unsafe { swapchain_loader.get_swapchain_images(swapchain)? };
let amount_of_images = swapchain_images.len() as u32;
let mut swapchain_imageviews = Vec::with_capacity(swapchain_images.len());
for image in &swapchain_images {
let subresource_range = vk::ImageSubresourceRange::builder()
.aspect_mask(vk::ImageAspectFlags::COLOR)
.base_mip_level(0)
.level_count(1)
.base_array_layer(0)
.layer_count(1);
let imageview_create_info = vk::ImageViewCreateInfo::builder()
.image(*image)
.view_type(vk::ImageViewType::TYPE_2D)
.format(vk::Format::B8G8R8A8_UNORM)
.subresource_range(*subresource_range);
let imageview =
unsafe { logical_device.create_image_view(&imageview_create_info, None) }?;
swapchain_imageviews.push(imageview);
}
let mut image_available = vec![];
let mut rendering_finished = vec![];
let mut may_begin_drawing = vec![];
let semaphoreinfo = vk::SemaphoreCreateInfo::builder();
let fenceinfo = vk::FenceCreateInfo::builder().flags(vk::FenceCreateFlags::SIGNALED);
for _ in 0..amount_of_images {
let semaphore_available =
unsafe { logical_device.create_semaphore(&semaphoreinfo, None) }?;
let semaphore_finished =
unsafe { logical_device.create_semaphore(&semaphoreinfo, None) }?;
image_available.push(semaphore_available);
rendering_finished.push(semaphore_finished);
let fence = unsafe { logical_device.create_fence(&fenceinfo, None) }?;
may_begin_drawing.push(fence);
}
Ok(SwapchainDongXi {
swapchain_loader,
swapchain,
images: swapchain_images,
imageviews: swapchain_imageviews,
framebuffers: vec![],
surface_format,
extent,
amount_of_images,
current_image: 0,
image_available,
rendering_finished,
may_begin_drawing,
})
}
fn create_framebuffers(
&mut self,
logical_device: &ash::Device,
renderpass: vk::RenderPass,
) -> Result<(), vk::Result> {
for iv in &self.imageviews {
let iview = [*iv];
let framebuffer_info = vk::FramebufferCreateInfo::builder()
.render_pass(renderpass)
.attachments(&iview)
.width(self.extent.width)
.height(self.extent.height)
.layers(1);
let fb = unsafe { logical_device.create_framebuffer(&framebuffer_info, None) }?;
self.framebuffers.push(fb);
}
Ok(())
}
unsafe fn cleanup(&mut self, logical_device: &ash::Device) {
for fence in &self.may_begin_drawing {
logical_device.destroy_fence(*fence, None);
}
for semaphore in &self.image_available {
logical_device.destroy_semaphore(*semaphore, None);
}
for semaphore in &self.rendering_finished {
logical_device.destroy_semaphore(*semaphore, None);
}
for fb in &self.framebuffers {
logical_device.destroy_framebuffer(*fb, None);
}
for iv in &self.imageviews {
logical_device.destroy_image_view(*iv, None);
}
self.swapchain_loader
.destroy_swapchain(self.swapchain, None)
}
}
fn init_renderpass(
logical_device: &ash::Device,
physical_device: vk::PhysicalDevice,
format: vk::Format,
) -> Result<vk::RenderPass, vk::Result> {
let attachments = [vk::AttachmentDescription::builder()
.format(format)
.load_op(vk::AttachmentLoadOp::CLEAR)
.store_op(vk::AttachmentStoreOp::STORE)
.stencil_load_op(vk::AttachmentLoadOp::DONT_CARE)
.stencil_store_op(vk::AttachmentStoreOp::DONT_CARE)
.initial_layout(vk::ImageLayout::UNDEFINED)
.final_layout(vk::ImageLayout::PRESENT_SRC_KHR)
.samples(vk::SampleCountFlags::TYPE_1)
.build()];
let color_attachment_references = [vk::AttachmentReference {
attachment: 0,
layout: vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL,
}];
let subpasses = [vk::SubpassDescription::builder()
.color_attachments(&color_attachment_references)
.pipeline_bind_point(vk::PipelineBindPoint::GRAPHICS)
.build()];
let subpass_dependencies = [vk::SubpassDependency::builder()
.src_subpass(vk::SUBPASS_EXTERNAL)
.src_stage_mask(vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT)
.dst_subpass(0)
.dst_stage_mask(vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT)
.dst_access_mask(
vk::AccessFlags::COLOR_ATTACHMENT_READ | vk::AccessFlags::COLOR_ATTACHMENT_WRITE,
)
.build()];
let renderpass_info = vk::RenderPassCreateInfo::builder()
.attachments(&attachments)
.subpasses(&subpasses)
.dependencies(&subpass_dependencies);
let renderpass = unsafe { logical_device.create_render_pass(&renderpass_info, None)? };
Ok(renderpass)
}
struct Pipeline {
pipeline: vk::Pipeline,
layout: vk::PipelineLayout,
}
impl Pipeline {
fn cleanup(&self, logical_device: &ash::Device) {
unsafe {
logical_device.destroy_pipeline(self.pipeline, None);
logical_device.destroy_pipeline_layout(self.layout, None);
}
}
fn init(
logical_device: &ash::Device,
swapchain: &SwapchainDongXi,
renderpass: &vk::RenderPass,
) -> Result<Pipeline, vk::Result> {
let vertexshader_createinfo = vk::ShaderModuleCreateInfo::builder().code(
vk_shader_macros::include_glsl!("./shaders/shader.vert", kind: vert),
);
let vertexshader_module =
unsafe { logical_device.create_shader_module(&vertexshader_createinfo, None)? };
let fragmentshader_createinfo = vk::ShaderModuleCreateInfo::builder()
.code(vk_shader_macros::include_glsl!("./shaders/shader.frag"));
let fragmentshader_module =
unsafe { logical_device.create_shader_module(&fragmentshader_createinfo, None)? };
let mainfunctionname = std::ffi::CString::new("main").unwrap();
let vertexshader_stage = vk::PipelineShaderStageCreateInfo::builder()
.stage(vk::ShaderStageFlags::VERTEX)
.module(vertexshader_module)
.name(&mainfunctionname);
let fragmentshader_stage = vk::PipelineShaderStageCreateInfo::builder()
.stage(vk::ShaderStageFlags::FRAGMENT)
.module(fragmentshader_module)
.name(&mainfunctionname);
let shader_stages = vec![vertexshader_stage.build(), fragmentshader_stage.build()];
let vertex_attrib_descs = [
vk::VertexInputAttributeDescription {
binding: 0,
location: 0,
offset: 0,
format: vk::Format::R32G32B32_SFLOAT,
},
vk::VertexInputAttributeDescription {
binding: 1,
location: 1,
offset: 0,
format: vk::Format::R32G32B32_SFLOAT,
},
vk::VertexInputAttributeDescription {
binding: 1,
location: 2,
offset: 12,
format: vk::Format::R32G32B32_SFLOAT,
},
];
let vertex_binding_descs = [
vk::VertexInputBindingDescription {
binding: 0,
stride: 12,
input_rate: vk::VertexInputRate::VERTEX,
},
vk::VertexInputBindingDescription {
binding: 1,
stride: 24,
input_rate: vk::VertexInputRate::INSTANCE,
},
];
let vertex_input_info = vk::PipelineVertexInputStateCreateInfo::builder()
.vertex_attribute_descriptions(&vertex_attrib_descs)
.vertex_binding_descriptions(&vertex_binding_descs);
let input_assembly_info = vk::PipelineInputAssemblyStateCreateInfo::builder()
.topology(vk::PrimitiveTopology::TRIANGLE_LIST);
let viewports = [vk::Viewport {
x: 0.,
y: 0.,
width: swapchain.extent.width as f32,
height: swapchain.extent.height as f32,
min_depth: 0.,
max_depth: 1.,
}];
let scissors = [vk::Rect2D {
offset: vk::Offset2D { x: 0, y: 0 },
extent: swapchain.extent,
}];
let viewport_info = vk::PipelineViewportStateCreateInfo::builder()
.viewports(&viewports)
.scissors(&scissors);
let rasterizer_info = vk::PipelineRasterizationStateCreateInfo::builder()
.line_width(1.0)
.front_face(vk::FrontFace::COUNTER_CLOCKWISE)
.cull_mode(vk::CullModeFlags::NONE)
.polygon_mode(vk::PolygonMode::FILL);
let multisampler_info = vk::PipelineMultisampleStateCreateInfo::builder()
.rasterization_samples(vk::SampleCountFlags::TYPE_1);
let colourblend_attachments = [vk::PipelineColorBlendAttachmentState::builder()
.blend_enable(true)
.src_color_blend_factor(vk::BlendFactor::SRC_ALPHA)
.dst_color_blend_factor(vk::BlendFactor::ONE_MINUS_SRC_ALPHA)
.color_blend_op(vk::BlendOp::ADD)
.src_alpha_blend_factor(vk::BlendFactor::SRC_ALPHA)
.dst_alpha_blend_factor(vk::BlendFactor::ONE_MINUS_SRC_ALPHA)
.alpha_blend_op(vk::BlendOp::ADD)
.color_write_mask(
vk::ColorComponentFlags::R
| vk::ColorComponentFlags::G
| vk::ColorComponentFlags::B
| vk::ColorComponentFlags::A,
)
.build()];
let colourblend_info =
vk::PipelineColorBlendStateCreateInfo::builder().attachments(&colourblend_attachments);
let pipelinelayout_info = vk::PipelineLayoutCreateInfo::builder();
let pipelinelayout =
unsafe { logical_device.create_pipeline_layout(&pipelinelayout_info, None) }?;
let pipeline_info = vk::GraphicsPipelineCreateInfo::builder()
.stages(&shader_stages)
.vertex_input_state(&vertex_input_info)
.input_assembly_state(&input_assembly_info)
.viewport_state(&viewport_info)
.rasterization_state(&rasterizer_info)
.multisample_state(&multisampler_info)
.color_blend_state(&colourblend_info)
.layout(pipelinelayout)
.render_pass(*renderpass)
.subpass(0);
let graphicspipeline = unsafe {
logical_device
.create_graphics_pipelines(
vk::PipelineCache::null(),
&[pipeline_info.build()],
None,
)
.expect("A problem with the pipeline creation")
}[0];
unsafe {
logical_device.destroy_shader_module(fragmentshader_module, None);
logical_device.destroy_shader_module(vertexshader_module, None);
}
Ok(Pipeline {
pipeline: graphicspipeline,
layout: pipelinelayout,
})
}
}
struct Pools {
commandpool_graphics: vk::CommandPool,
commandpool_transfer: vk::CommandPool,
}
impl Pools {
fn init(
logical_device: &ash::Device,
queue_families: &QueueFamilies,
) -> Result<Pools, vk::Result> {
let graphics_commandpool_info = vk::CommandPoolCreateInfo::builder()
.queue_family_index(queue_families.graphics_q_index.unwrap())
.flags(vk::CommandPoolCreateFlags::RESET_COMMAND_BUFFER);
let commandpool_graphics =
unsafe { logical_device.create_command_pool(&graphics_commandpool_info, None) }?;
let transfer_commandpool_info = vk::CommandPoolCreateInfo::builder()
.queue_family_index(queue_families.transfer_q_index.unwrap())
.flags(vk::CommandPoolCreateFlags::RESET_COMMAND_BUFFER);
let commandpool_transfer =
unsafe { logical_device.create_command_pool(&transfer_commandpool_info, None) }?;
Ok(Pools {
commandpool_graphics,
commandpool_transfer,
})
}
fn cleanup(&self, logical_device: &ash::Device) {
unsafe {
logical_device.destroy_command_pool(self.commandpool_graphics, None);
logical_device.destroy_command_pool(self.commandpool_transfer, None);
}
}
}
fn create_commandbuffers(
logical_device: &ash::Device,
pools: &Pools,
amount: u32,
) -> Result<Vec<vk::CommandBuffer>, vk::Result> {
let commandbuf_allocate_info = vk::CommandBufferAllocateInfo::builder()
.command_pool(pools.commandpool_graphics)
.command_buffer_count(amount);
unsafe { logical_device.allocate_command_buffers(&commandbuf_allocate_info) }
}
fn fill_commandbuffers(
commandbuffers: &[vk::CommandBuffer],
logical_device: &ash::Device,
renderpass: &vk::RenderPass,
swapchain: &SwapchainDongXi,
pipeline: &Pipeline,
models: &[Model<[f32; 3], [f32; 6]>],
) -> Result<(), vk::Result> {
for (i, &commandbuffer) in commandbuffers.iter().enumerate() {
let commandbuffer_begininfo = vk::CommandBufferBeginInfo::builder();
unsafe {
logical_device.begin_command_buffer(commandbuffer, &commandbuffer_begininfo)?;
}
let clearvalues = [vk::ClearValue {
color: vk::ClearColorValue {
float32: [0.0, 0.0, 0.08, 1.0],
},
}];
let renderpass_begininfo = vk::RenderPassBeginInfo::builder()
.render_pass(*renderpass)
.framebuffer(swapchain.framebuffers[i])
.render_area(vk::Rect2D {
offset: vk::Offset2D { x: 0, y: 0 },
extent: swapchain.extent,
})
.clear_values(&clearvalues);
unsafe {
logical_device.cmd_begin_render_pass(
commandbuffer,
&renderpass_begininfo,
vk::SubpassContents::INLINE,
);
logical_device.cmd_bind_pipeline(
commandbuffer,
vk::PipelineBindPoint::GRAPHICS,
pipeline.pipeline,
);
for m in models {
m.draw(logical_device, commandbuffer);
}
logical_device.cmd_end_render_pass(commandbuffer);
logical_device.end_command_buffer(commandbuffer)?;
}
}
Ok(())
}
struct Buffer {
buffer: vk::Buffer,
allocation: vk_mem::Allocation,
allocation_info: vk_mem::AllocationInfo,
size_in_bytes: u64,
buffer_usage: vk::BufferUsageFlags,
memory_usage: vk_mem::MemoryUsage,
}
impl Buffer {
fn new(
allocator: &vk_mem::Allocator,
size_in_bytes: u64,
buffer_usage: vk::BufferUsageFlags,
memory_usage: vk_mem::MemoryUsage,
) -> Result<Buffer, vk_mem::error::Error> {
let allocation_create_info = vk_mem::AllocationCreateInfo {
usage: memory_usage,
..Default::default()
};
let (buffer, allocation, allocation_info) = allocator.create_buffer(
&ash::vk::BufferCreateInfo::builder()
.size(size_in_bytes)
.usage(buffer_usage)
.build(),
&allocation_create_info,
)?;
Ok(Buffer {
buffer,
allocation,
allocation_info,
size_in_bytes,
buffer_usage,
memory_usage,
})
}
fn fill<T: Sized>(
&mut self,
allocator: &vk_mem::Allocator,
data: &[T],
) -> Result<(), vk_mem::error::Error> {
let bytes_to_write = (data.len() * std::mem::size_of::<T>()) as u64;
if bytes_to_write > self.size_in_bytes {
allocator.destroy_buffer(self.buffer, &self.allocation)?;
let newbuffer = Buffer::new(
allocator,
bytes_to_write,
self.buffer_usage,
self.memory_usage,
)?;
*self = newbuffer;
}
let data_ptr = allocator.map_memory(&self.allocation)? as *mut T;
unsafe { data_ptr.copy_from_nonoverlapping(data.as_ptr(), data.len()) };
allocator.unmap_memory(&self.allocation)?;
Ok(())
}
}
#[derive(Debug, Clone)]
struct InvalidHandle;
impl std::fmt::Display for InvalidHandle {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(f, "invalid handle")
}
}
impl std::error::Error for InvalidHandle {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
None
}
}
struct Model<V, I> {
vertexdata: Vec<V>,
handle_to_index: std::collections::HashMap<usize, usize>,
handles: Vec<usize>,
instances: Vec<I>,
first_invisible: usize,
next_handle: usize,
vertexbuffer: Option<Buffer>,
instancebuffer: Option<Buffer>,
}
impl<V, I> Model<V, I> {
fn get(&self, handle: usize) -> Option<&I> {
if let Some(&index) = self.handle_to_index.get(&handle) {
self.instances.get(index)
} else {
None
}
}
fn get_mut(&mut self, handle: usize) -> Option<&mut I> {
if let Some(&index) = self.handle_to_index.get(&handle) {
self.instances.get_mut(index)
} else {
None
}
}
fn is_visible(&self, handle: usize) -> Result<bool, InvalidHandle> {
if let Some(index) = self.handle_to_index.get(&handle) {
Ok(index < &self.first_invisible)
} else {
Err(InvalidHandle)
}
}
fn make_visible(&mut self, handle: usize) -> Result<(), InvalidHandle> {
//if already visible: do nothing
if let Some(&index) = self.handle_to_index.get(&handle) {
if index < self.first_invisible {
return Ok(());
}
//else: move to position first_invisible and increase value of first_invisible
self.swap_by_index(index, self.first_invisible);
self.first_invisible += 1;
Ok(())
} else {
Err(InvalidHandle)
}
}
fn make_invisible(&mut self, handle: usize) -> Result<(), InvalidHandle> {
//if already invisible: do nothing
if let Some(&index) = self.handle_to_index.get(&handle) {
if index >= self.first_invisible {
return Ok(());
}
//else: move to position before first_invisible and decrease value of first_invisible
self.swap_by_index(index, self.first_invisible - 1);
self.first_invisible -= 1;
Ok(())
} else {
Err(InvalidHandle)
}
}
fn insert(&mut self, element: I) -> usize {
let handle = self.next_handle;
self.next_handle += 1;
let index = self.instances.len();
self.instances.push(element);
self.handles.push(handle);
self.handle_to_index.insert(handle, index);
handle
}
fn insert_visibly(&mut self, element: I) -> usize {
let new_handle = self.insert(element);
self.make_visible(new_handle).ok();
new_handle
}
fn remove(&mut self, handle: usize) -> Result<I, InvalidHandle> {
if let Some(&index) = self.handle_to_index.get(&handle) {
//if the element is visible: first move it to the end of the visible part
let index = if index < self.first_invisible {
self.swap_by_index(index, self.first_invisible - 1);
self.first_invisible -= 1;
self.first_invisible
} else {
index
};
//then move it to the very end of the vector (so that we can pop it off)
self.swap_by_index(index, self.instances.len() - 1);
self.handles.pop();
self.handle_to_index.remove(&handle);
//must be Some(), otherwise we couldn't have found an index
Ok(self.instances.pop().unwrap())
} else {
Err(InvalidHandle)
}
}
fn swap_by_handle(&mut self, handle1: usize, handle2: usize) -> Result<(), InvalidHandle> {
if handle1 == handle2 {
return Ok(());
}
if let (Some(&index1), Some(&index2)) = (
self.handle_to_index.get(&handle1),
self.handle_to_index.get(&handle2),
) {
self.handles.swap(index1, index2);
self.instances.swap(index1, index2);
self.handle_to_index.insert(handle1, index2);
self.handle_to_index.insert(handle2, index1);
Ok(())
} else {
Err(InvalidHandle)
}
}
fn swap_by_index(&mut self, index1: usize, index2: usize) {
if index1 == index2 {
return;
}
let handle1 = self.handles[index1];
let handle2 = self.handles[index2];
self.handles.swap(index1, index2);
self.instances.swap(index1, index2);
self.handle_to_index.insert(handle1, index2);
self.handle_to_index.insert(handle2, index1);
}
fn update_vertexbuffer(
&mut self,
allocator: &vk_mem::Allocator,
) -> Result<(), vk_mem::error::Error> {
if let Some(buffer) = &mut self.vertexbuffer {
buffer.fill(allocator, &self.vertexdata)?;
Ok(())
} else {
let bytes = (self.vertexdata.len() * std::mem::size_of::<V>()) as u64;
let mut buffer = Buffer::new(
&allocator,
bytes,
vk::BufferUsageFlags::VERTEX_BUFFER,
vk_mem::MemoryUsage::CpuToGpu,
)?;
buffer.fill(allocator, &self.vertexdata)?;
self.vertexbuffer = Some(buffer);
Ok(())
}
}
fn update_instancebuffer(
&mut self,
allocator: &vk_mem::Allocator,
) -> Result<(), vk_mem::error::Error> {
if let Some(buffer) = &mut self.instancebuffer {
buffer.fill(allocator, &self.instances[0..self.first_invisible])?;
Ok(())
} else {
let bytes = (self.first_invisible * std::mem::size_of::<I>()) as u64;
let mut buffer = Buffer::new(
&allocator,
bytes,
vk::BufferUsageFlags::VERTEX_BUFFER,
vk_mem::MemoryUsage::CpuToGpu,
)?;
buffer.fill(allocator, &self.instances[0..self.first_invisible])?;
self.instancebuffer = Some(buffer);
Ok(())
}
}
fn draw(&self, logical_device: &ash::Device, commandbuffer: vk::CommandBuffer) {
if let Some(vertexbuffer) = &self.vertexbuffer {
if let Some(instancebuffer) = &self.instancebuffer {
if self.first_invisible > 0 {
unsafe {
logical_device.cmd_bind_vertex_buffers(
commandbuffer,
0,
&[vertexbuffer.buffer],
&[0],
);
logical_device.cmd_bind_vertex_buffers(
commandbuffer,
1,
&[instancebuffer.buffer],
&[0],
);
logical_device.cmd_draw(
commandbuffer,
self.vertexdata.len() as u32,
self.first_invisible as u32,
0,
0,
);
}
}
}
}
}
}
impl Model<[f32; 3], [f32; 6]> {
fn cube() -> Model<[f32; 3], [f32; 6]> {
let lbf = [-0.1, 0.1, 0.0]; //lbf: left-bottom-front
let lbb = [-0.1, 0.1, 0.1];
let ltf = [-0.1, -0.1, 0.0];
let ltb = [-0.1, -0.1, 0.1];
let rbf = [0.1, 0.1, 0.0];
let rbb = [0.1, 0.1, 0.1];
let rtf = [0.1, -0.1, 0.0];
let rtb = [0.1, -0.1, 0.1];
Model {
vertexdata: vec![
lbf, lbb, rbb, lbf, rbb, rbf, //bottom
ltf, rtb, ltb, ltf, rtf, rtb, //top
lbf, rtf, ltf, lbf, rbf, rtf, //front
lbb, ltb, rtb, lbb, rtb, rbb, //back
lbf, ltf, lbb, lbb, ltf, ltb, //left
rbf, rbb, rtf, rbb, rtb, rtf, //right
],
handle_to_index: std::collections::HashMap::new(),
handles: Vec::new(),
instances: Vec::new(),
first_invisible: 0,
next_handle: 0,
vertexbuffer: None,
instancebuffer: None,
}
}
}
struct Aetna {
window: winit::window::Window,
entry: ash::Entry,
instance: ash::Instance,
debug: std::mem::ManuallyDrop<DebugDongXi>,
surfaces: std::mem::ManuallyDrop<SurfaceDongXi>,
physical_device: vk::PhysicalDevice,
physical_device_properties: vk::PhysicalDeviceProperties,
physical_device_features: vk::PhysicalDeviceFeatures,
queue_families: QueueFamilies,
queues: Queues,
device: ash::Device,
swapchain: SwapchainDongXi,
renderpass: vk::RenderPass,
pipeline: Pipeline,
pools: Pools,
commandbuffers: Vec<vk::CommandBuffer>,
allocator: vk_mem::Allocator,
models: Vec<Model<[f32; 3], [f32; 6]>>,
}
impl Aetna {
fn init(window: winit::window::Window) -> Result<Aetna, Box<dyn std::error::Error>> {
let entry = ash::Entry::new()?;
let layer_names = vec!["VK_LAYER_KHRONOS_validation"];
let instance = init_instance(&entry, &layer_names)?;
let debug = DebugDongXi::init(&entry, &instance)?;
let surfaces = SurfaceDongXi::init(&window, &entry, &instance)?;
let (physical_device, physical_device_properties, physical_device_features) =
init_physical_device_and_properties(&instance)?;
let queue_families = QueueFamilies::init(&instance, physical_device, &surfaces)?;
let (logical_device, queues) =
init_device_and_queues(&instance, physical_device, &queue_families, &layer_names)?;
let mut swapchain = SwapchainDongXi::init(
&instance,
physical_device,
&logical_device,
&surfaces,
&queue_families,
&queues,
)?;
let renderpass = init_renderpass(
&logical_device,
physical_device,
swapchain.surface_format.format,
)?;
swapchain.create_framebuffers(&logical_device, renderpass)?;
let pipeline = Pipeline::init(&logical_device, &swapchain, &renderpass)?;
let pools = Pools::init(&logical_device, &queue_families)?;
let allocator_create_info = vk_mem::AllocatorCreateInfo {
physical_device,
device: logical_device.clone(),
instance: instance.clone(),
..Default::default()
};
let allocator = vk_mem::Allocator::new(&allocator_create_info)?;
let mut cube = Model::cube();
cube.insert_visibly([0.0, 0.0, 0.0, 1.0, 0.0, 0.0]);
cube.update_vertexbuffer(&allocator)?;
cube.update_instancebuffer(&allocator)?;
let models = vec![cube];
let commandbuffers =
create_commandbuffers(&logical_device, &pools, swapchain.amount_of_images)?;
fill_commandbuffers(
&commandbuffers,
&logical_device,
&renderpass,
&swapchain,
&pipeline,
&models,
)?;
Ok(Aetna {
window,
entry,
instance,
debug: std::mem::ManuallyDrop::new(debug),
surfaces: std::mem::ManuallyDrop::new(surfaces),
physical_device,
physical_device_properties,
physical_device_features,
queue_families,
queues,
device: logical_device,
swapchain,
renderpass,
pipeline,
pools,
commandbuffers,
allocator,
models,
})
}
}
impl Drop for Aetna {
fn drop(&mut self) {
unsafe {
self.device
.device_wait_idle()
.expect("something wrong while waiting");
for m in &self.models {
if let Some(vb) = &m.vertexbuffer {
self.allocator
.destroy_buffer(vb.buffer, &vb.allocation)
.expect("problem with buffer destruction");
}
if let Some(ib) = &m.instancebuffer {
self.allocator
.destroy_buffer(ib.buffer, &ib.allocation)
.expect("problem with buffer destruction");
}
}
self.allocator.destroy();
self.pools.cleanup(&self.device);
self.pipeline.cleanup(&self.device);
self.device.destroy_render_pass(self.renderpass, None);
self.swapchain.cleanup(&self.device);
self.device.destroy_device(None);
std::mem::ManuallyDrop::drop(&mut self.surfaces);
std::mem::ManuallyDrop::drop(&mut self.debug);
self.instance.destroy_instance(None)
};
}
}
Adding another two boxes is as simple as adding these two lines:
cube.insert_visibly([0.0, 0.25, 0.0, 0.6, 0.5, 0.0]);
cube.insert_visibly([0.0, 0.5, 0.0, 0.0, 0.5, 0.0]);
Do you also, like me, have difficulty remembering whether position or colour comes first? Wouldn’t it be nicer to have a slightly more expressive type than [f32; 6]?
Indeed, we can use a better type:
struct InstanceData {
position: [f32; 3],
colour: [f32; 3],
}
Then we create our boxes this way:
cube.insert_visibly(InstanceData {
position: [0.0, 0.0, 0.0],
colour: [1.0, 0.0, 0.0],
});
cube.insert_visibly(InstanceData {
position: [0.0, 0.25, 0.0],
colour: [0.6, 0.5, 0.0],
});
cube.insert_visibly(InstanceData {
position: [0.0, 0.5, 0.0],
colour: [0.0, 0.5, 0.0],
});
and, in order to make the types fit, we replace every [f32; 6]
by InstanceData
. And that is enough.
Here it is crucial that the data is laid out in memory in exactly the order that our shader expects. Without any additional padding between different fields, for example. Occasionally it can become necessary to ensure this in the struct declaration.
In such cases, it can be helpful to lay everything out in memory just like C would:
#[repr(C)]
struct InstanceData {
position: [f32; 3],
colour: [f32; 3],
}
For our two [f32; 3]s it’s probably unnecessary. But Rust gives no guarantees about the memory layout of structs (in order to allow for optimizations), so better to use #[repr(C)].
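If in doubt, we can let an assertion confirm that the struct has the size the pipeline expects (a small sketch; 24 is the per-instance stride from the binding description above):
// Sketch: InstanceData must occupy the 24 bytes the instance binding assumes.
assert_eq!(std::mem::size_of::<InstanceData>(), 24);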
We could similarly deal with the vertex data. But [f32; 3]
is convenient enough for me.
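If we wanted to, it could look like this (a sketch; VertexData is a name made up here and not used anywhere else in this chapter):
// Hypothetical per-vertex struct, analogous to InstanceData:
#[repr(C)]
struct VertexData {
    position: [f32; 3],
}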