Sunday, January 14, 2024

Month 2 - Week 1 - Fixing GController's Incorrect Macro Use

This week consisted of a lot of carryover work from the previous month. Some of it included more MSAA testing, finalizing changes, and last touches, but there were some new things as well. One in particular was a bug I had assigned myself last month and hadn't had the time to fix, at least until now. With MSAA implemented, I took a look at it again and decided I would get it fully fixed this week. The bug is directly related to the GController, specifically a macro called `G_MAX_CONTROLLER_INDEX`. The problem with this macro was that it was being used inconsistently. In some cases it was treated as the maximum count of controllers, but in others it was treated as the maximum index into the controllers array. The macro's name can cause confusion once you see how it's used in code, and I figured it would be a perfect bug to work on right after finishing my first major feature.


The issue had been sitting open for over a year. Admittedly, it is rather small and not a major problem. However, it has been a source of confusion: because of the name it carries, users of the library could mistake the macro for the maximum index into the array, when it's really the maximum count of controllers allowed. To approach the issue, I waited until the Wednesday meeting, where I could talk with Lari, my mentor, and Colby, who discovered and filed the issue. We discussed how the changes would have to be handled, the ways it could be fixed, and the possible problems that come with editing GController itself.

The route we chose for this fix was a rename. Simple enough. However, we also needed to correct the parts of the code that were using the macro as a max index. The rename covers three macros: `G_MAX_CONTROLLER_INDEX`, `G_MAX_XBOX_CONTROLLER_INDEX`, and `G_MAX_XBOX_CONTROLLER_INDEX_XBOXONE`. For each of them, `INDEX` is replaced with `COUNT`. We then have to carry these changes throughout the source code and ensure the macros are being used properly. There are some problems with making these changes, however.

Firstly, testing any changes to the code is going to be a process. Since I only have one controller, an Xbox One controller, I can't test the functionality of what's being edited. That testing has to be done by Lari, as he has four controllers that can be used. Secondly, we have to modify some logic in the code, primarily the array bounds checking done in `GetState` and `IsConnected`. These checks currently use the old macros, and they use them incorrectly, checking one past the array length (due to zero-based indexing). Lastly, we have to ensure the changes work cross-platform. We can't do this through traditional means, as we don't have the proper tooling for it. Our only workaround is the GitLab runners, but they can't test physical controllers, so they only cover the basics. This means every edit has to be thoroughly thought out and looked over before it is pushed, to ensure we don't indirectly introduce a new bug to the codebase, only to repeat the process all over again.

So with the possible issues known, we settled on multiple ways to address them. One has already been mentioned: Lari running the primary manual controller tests on Windows. That crosses Windows and UWP off the list of platforms needing testing. Mac and Linux will have to be handled differently. For those platforms, we will focus on keeping the code changes light and efficient; everything done for them has to be reviewed and confirmed to have no logical errors. It's not the best fix, but it's the best we have currently. Then, naturally, there are the runners, which will make sure the code compiles and that nothing slips through for the majority of the other tests.

The other fix covers the logic changes. I decided that after renaming the macros, I would introduce three new macros carrying the old names; these macros equal the max count minus one to account for zero-based indexing. This means the count macros exist for the physical controller count, and the index macros exist for array indexing. They are separated. Because of this, I can replace instances of index checking in the Win32, UWP, Mac, and Linux code with the new index macros. This keeps readability while also showing how both sets of macros are intended to be used.


After these changes are done, we can reflect them in the actual code itself.


Then we also update the needed logic checks to better fit the new index macros.


You can probably tell that since the index macros carry the same names as the old count macros, we don't have to replace the macro names in most places, just the logic itself, so it lines up with the expected functionality. If the controller index is equal to the max index macro now, it refers to the last controller in the array; before, the checks rejected anything greater than or equal to the macro. There was also worry with the old macros that the index checks could allow access one past the array's size, potentially causing out-of-bounds access in the library code itself, which is no good. The replacements and logic changes here feel more grounded and ensure everything is used for its proper purpose.

With all of the changes done, the main things to do now are await feedback, have the manual testing done, and ensure all of the code is correct through code review. Once those are handled, the bug fix can be merged, and I can move on to the next bug on the list. The general workflow for this was very easy, and I didn't find it all too hard to get into. After implementing my first feature, I understand much more about how this library functions, what each class does, and how the code is compiled and formed. With a grasp of everything needed, I find the bug fixing to be more of a cooldown, especially in comparison to the nearly 2,600 lines of code I added to the tests alone for the MSAA implementation.

Sunday, December 17, 2023

Month 1 - Week 4 - Adding MSAA to DX11

So, last week was spent primarily on fine-tuning the Windows setup process so users can contribute to the project more easily. That was a rather important task, and a lot of time was spent getting everything done correctly, but last week was also the week it was finally completed and merged. With that, this week was spent working on my first real feature: MSAA support for DX11. Implementing it needed the whole package: research, unit testing, and the actual implementation code. A lot of time was poured into this, and a lot of research into MSAA in general. Rather than walk through the full workload, I will explain two problems I faced with the implementation and what I did to figure them out and get everything working properly.

One of the first steps was adding the values for MSAA to the allowed mask variable, but I also needed the sample count for whatever value was passed. This process was fairly straightforward and posed little issue in the actual implementation.


The second step was going through and altering the swap chain description and all of the texture descriptions to reflect the multisampling. This was also fairly straightforward. However, it was the code after this that proved to be more problematic.


Getting to the fix for this was a little troublesome. To explain, I need to break down what I think was happening. When I was initially implementing MSAA, I ran into a problem where the depth stencil view was failing to create. This was a constant error with no obvious source other than the stencil view coming back null. Between searching online and discussing the problem with my mentor, I came to realize that the stencil view's buffer may not be large enough to hold all the pixels needed for multisampling to work correctly. I looked around and found a simple fix: by changing the view dimension from D3D11_DSV_DIMENSION_TEXTURE2D to D3D11_DSV_DIMENSION_TEXTURE2DMS, we create a buffer that can store the appropriate number of pixels. All we need to do is check whether the sample count is greater than one (one meaning no MSAA), and if it is, set the view dimension of the stencil view to the multisampled setting. With everything implemented here, we now have MSAA.

This, of course, glosses over the immense amount of work that was invested into the tests. There were nearly 3,000 lines of code added just for the tests to ensure MSAA was working properly, and many issues came along the way with them.


A good example of test code that needed fixing came up when I was writing the test for MSAA x16. Although x16 MSAA has been around for a long time, modern GPUs will sometimes still lack support for this level of MSAA, so the test would try creating an x16 surface for DX11 and immediately fail. A failing unit test is something we don't want, especially when the cause is a hardware limitation. So the way around it is to check for it. Normally, the REQUIRE statement would contain the create call and check the return immediately for success; that had to change. Instead, we call the surface's create function, then first check whether the result is a HARDWARE_UNAVAILABLE error. If that error is returned, we can simply skip, because it isn't the fault of the test itself, but rather the hardware. If the hardware is supported, that return will not occur, and we can REQUIRE the result to be a success and nothing else. This problem took me a long time to handle; I had to research, redesign, and rethink how to fix it. At a quick glance the fix is probably easy to spot, but for me, just learning how everything works, I spent much longer on it than I would like to admit.

With that, however, everything else flowed smoothly, and the implementation was rather easy, minus all of the writing time it required. It was interesting learning the internals of the library to get this working and ensuring both the desktop version and the UWP version worked. All there is to do now is wait and see how the code review goes, and whether any corrections need to be made.

Sunday, December 10, 2023

Month 1 - Week 3 - Finishing Touches with Setup Process

Last week I touched on the setup process and worked toward making it better. To do this, I made a bat script that ran a PowerShell script, which installed Chocolatey and then installed the NuGet package for CMake to work with. This was done to avoid the issue of NuGet not being installed when the UWP CMake needed it for proper project generation. There were two catches with this approach: 1) to install Chocolatey, the user needs to be an administrator, and 2) the user has to be an administrator to even install the package, as Chocolatey will otherwise force a prompt on the user. That prompt can be bypassed, but it is locked behind a thirty-second timer, even with arguments that attempt to auto-accept it.

Due to this issue in particular, I spent the early part of this week finding a solution. The reason this is a problem is the runners for UWP and Windows compilation. Neither runner can perform input to bypass these prompts, so the prompt either blocks the build completely or forces a thirty-second wait until it auto-accepts.

I don't have pictures for the iterations on the work put into this, but some of the iterations involved trying to bypass the prompts through PowerShell directly, and some involved completely remaking the scripts in an attempt to fix it, but eventually, I settled on a solution.

Chocolatey had to be replaced.

At first, this sounds like a large process. My requirements for a package manager are 1) it needs to allow no input, 2) it must allow running without an administrator, 3) must be able to install packages with no prompts, and 4) must be updated regularly so the version of the package isn't old. This search for the right package manager could've been long and difficult, but there is a package manager right now that is perfect for the task. It was a package manager I had been researching just a couple of days before encountering the issues I did with Chocolatey. This solution is WinGet.

WinGet is a command-line tool, tightly integrated into Windows 10 and 11, that lets users discover, install, upgrade, remove, and configure applications; it is the client that interfaces with the Windows Package Manager service. The big perk is that it is not only already installed with Windows, but it doesn't prompt for administrator access unless the application requires it, which is much preferable, since NuGet doesn't need elevation to install. With this, I went to work on the code immediately.

The first step was updating the bat file. Originally, the file would open PowerShell, and then open another PowerShell inside it with the argument to trigger a UAC prompt. It has been altered to simply open PowerShell and execute the script without any extra work, which also makes the script look much more concise.

The next part was updating the PowerShell script. The first thing needed is to test the `winget` command to see if it works. We can print the version number and then check the exit code, which shows whether or not the command is available. If it isn't, the user has to install the App Installer program from the Microsoft Store, which `winget` ships with. If the check passes, the script then uses WinGet to install NuGet with a few arguments. The first argument, `-e`, matches only packages with the exact name requested; the second and third auto-accept any package or source agreements; and the final one forces a direct run of the command, continuing past non-security-related issues. This ensures no prompts are shown and that the package installs smoothly and automatically for the user. However, there is a final step.
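Put together, the check-and-install step might look something like this (a sketch: the package id and exact argument set are my reconstruction, not necessarily what the script uses):

```powershell
# Probe winget by printing its version; a non-zero exit code means it's missing.
winget --version
if ($LASTEXITCODE -ne 0) {
    Write-Error "winget not found - install 'App Installer' from the Microsoft Store."
    exit 1
}

# -e matches the exact package id; the accept flags suppress agreement prompts;
# --force runs the command directly and continues past non-security issues.
winget install -e --id Microsoft.NuGet --accept-package-agreements --accept-source-agreements --force
```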

WinGet, upon installing the package, requires the shell to be completely restarted for the path environment variable to update. This is required for NuGet to be seen and used by the shell. But the runner can't restart its shell, and the user would otherwise need to run two scripts. To get around this, we can do some PowerShell magic: we set the environment variable `Path`, which is temporary to the current instance, to an updated value pulled from the system environment's variable of the same name. This allows us, without restarting, to use NuGet freely throughout the setup.
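That refresh amounts to a line or two along these lines (my sketch of the technique; the original script may handle the scopes differently):

```powershell
# Rebuild this session's Path from the machine and user registry values so the
# freshly installed nuget.exe is visible without restarting PowerShell.
$env:Path = [System.Environment]::GetEnvironmentVariable("Path", "Machine") + ";" +
            [System.Environment]::GetEnvironmentVariable("Path", "User")
```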

With that, the runner is now able to do the setup script without any issues. This pull request was merged, and will hopefully make setting up on Windows easier, especially for UWP where this specific NuGet requirement was needed.

Next week, I will be focusing heavily on getting work done with another issue involving DirectX11 MSAA support and will be making posts for that as well.

Sunday, December 3, 2023

Month 1 - Week 2 - Project Setup Updated

I've started working on updating the setup process for Windows users developing on Gateware. Due to recent UWP issues that we were experiencing the previous week, we found that NuGet was required for the project and wasn't being handled appropriately when it wasn't found. I was assigned to start working on updating the script used to set the project up for use. The previous version of the script was a single bat file that would create needed directories and then call CMake to generate the projects.


The code for it was simple, but what was needed for the change was to convert this bat file into a PowerShell script. Specifically, the new process needed to get Chocolatey, install NuGet with it, and then perform the same operations as the bat file.

There were a couple of issues with this, however. The first was the execution of the script. The script is stored in a ps1 file, the default for PowerShell scripts, and running such a script can be tricky. Traditionally, PowerShell wants the script to be signed to run without further scripting; however, the signing process is confusing and, to my knowledge, a little strenuous, so I looked at the other route: setting the execution policy of the PowerShell instance to bypass. Doing this allows the script to run without being signed, and this was the route I ended up taking. The second issue I ran into was the Chocolatey installation. Chocolatey requires that PowerShell be run as administrator to install itself correctly. This doesn't sound like an issue at first, but let's take a step back and visualize how a user would set up the project under the old process and the theoretical new one.

If I wanted to generate the project back then, I could just start the bat file, wait for CMake to generate the projects, and then I would be ready to work. That's if I have NuGet installed and configured in the path beforehand. Overall though, NuGet configuring would only need to be done once, and then the process from there is quick for each following setup on that system.

Now on the new system, I would have to open PowerShell as administrator, and then call the command `Set-ExecutionPolicy Bypass -Scope Process -Force` before finally calling our script. This isn't just something that has to be done once either. This process would have to be done every time the user needs to set up the project. Of course, we want this done easily, so this is where I started problem-solving.


Since commands in a bat file can't easily be broken across multiple lines, the replacement command I wrote ended up rather long. What the new bat file does is execute an instance of PowerShell that in turn executes another instance of PowerShell, except this second instance runs as administrator and is fed two commands: one sets the location to the directory the bat file is running from, and the other executes our script. Pretty straightforward, but there were problems with this.
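Reconstructed as a sketch (the script name is hypothetical and the quoting is simplified), the bat file's trick looks something like:

```bat
@echo off
REM Outer PowerShell launches an inner, elevated PowerShell (-Verb RunAs) that
REM first returns to the bat file's directory, then runs the setup script.
powershell -Command "Start-Process powershell -Verb RunAs -ArgumentList '-ExecutionPolicy Bypass -Command cd %~dp0; ./Setup.ps1'"
```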

You may have already noted that two PowerShell instances to run one as administrator has a bit of code smell, and you would be right. However, when I attempted to use the much more straightforward `runas` command that the bat script already has, it seemed to have trouble executing the PowerShell process with any arguments fed to it. This is problematic as I want the bat to handle everything for the user besides the UAC prompt to run as administrator. The only workaround I know of is this solution I made above. It's not the best by any means, but it gets the script executed.


Now, moving to the PowerShell script itself, a couple of things had to happen immediately. The first is ensuring the script is being run as administrator. The code for this is a little lengthy, but surprisingly readable, all things considered. After this check, we check an absolute path to see whether Chocolatey is installed. The reason for using an absolute path is that Chocolatey always installs to the same location. I don't think Chocolatey lets you modify where it installs; if it does, we can update this down the road to support other paths. For now, it uses the default location. If the directory where Chocolatey installs doesn't exist, we call the installer and install Chocolatey for the user, and immediately afterward install the required package, NuGet.
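In sketch form, that portion of the script might look like this (the admin check is the common WindowsPrincipal idiom; the Chocolatey path and the `nuget.commandline` package id are my assumptions about the original):

```powershell
# Bail out early unless the current user is an administrator.
$principal = New-Object Security.Principal.WindowsPrincipal(
    [Security.Principal.WindowsIdentity]::GetCurrent())
if (-not $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
    Write-Error "This script must be run as administrator."
    exit 1
}

# Chocolatey installs to its default location; install it if the folder is missing.
if (-not (Test-Path "$env:ProgramData\chocolatey")) {
    Invoke-Expression ((New-Object System.Net.WebClient).DownloadString(
        'https://community.chocolatey.org/install.ps1'))
}

# Then install the required package.
choco install nuget.commandline -y
```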


From here, the rest of the code is simply a translation of the old bat script to PowerShell. A lot of the code stayed the same, except for the use of `errorlevel`, which was swapped out for `$LASTEXITCODE`, since the former is specific to bat scripting and doesn't exist in PowerShell.

Overall, the process of implementing this new system took around 2-3 hours and was relatively error-free besides the issues I mentioned. I took a little extra time to design it so the developer running the bat file has to interact as little as possible, keeping the process mostly automatic. I feel the result is good and works well, but it still needs to pass code review, as well as testing on systems other than my own. Once those pass, the new Windows setup script should be merged and part of the project.



Sunday, November 26, 2023

Month 1 - Week 1 - Introduction

As per FSO, my first blog post will be an introduction to myself, my journey, and my goals here. This will help others get to know more about me, the type of person I am, some of my personal achievements, and what I want to do during my time here.

So, my name is Alexander Cusaac. A little bit of history about myself: I started programming when I was just nine years old, after finding an old book on Java my brother had--Java for Dummies, if I remember correctly. They had just started covering Java 8 at the time, and I was hooked from there. From 9 to 13 years old, I spent a lot of time just writing small programs: little IO programs, number-guessing games, and experiments with other languages like C#. It was around 14 that I started understanding the more complex ideas of programming and taking steps to really grasp them. By the time I went to FSU (Full Sail University), I had made games, GUI software, extensions and mods, a simple OS, and more. I even made a compiler at one point just to understand and learn Rust--a rather unique and different programming language. As of right now, I've been programming for over 13 years, and I plan to add many more as I make programming my career.

I do have a GitHub if you want to look through any projects I have publicly available. There's a decent bit to look through, but a lot of the projects are rather old and fall outside my quality standards today.

To wrap this up, my goals. My goals with working on Gateware are to, hopefully, fix many issues that are currently on the backlog and generally work on bringing more stability to the project. I may work on adding new features to the project, but that won't be known until later in the following weeks/months. For now, I'm looking forward to fixing a lot of issues and getting the ball rolling on making the project more stable and sound for those using the library.

With that, this is my simple introduction. Happy to be aboard the project.

Saturday, July 15, 2023

Month 4-Week 3 of Gateware: The Build Process is Finally Complete!

    Well, we’re finally here. The GSource UWP branch has been merged into main, and the GCompiler UWP branch will be merged in the next few days. Gateware’s UWP implementation is finally being released into the world, and while it may not be the prettiest, it still feels good.

    Last week, we had an issue where the Unit Tests weren't being waited on by the yml, and the yml couldn't tell whether or not the Unit Tests passed. Well, since then, the unit test yml has grown considerably.


    The first block of the yml will check to see if the application is already installed (from a previous test or elsewhere), and if it is, uninstall the app. It then goes through the normal steps we had from last time, except now, if a certain file exists (I'll talk about the importance of this file later), it gets deleted. Once we've got the application launched and running, we start up a while loop that looks for that file; if the loop can find the file in time, the test passes, and if not, it fails.

    Now, you may be asking, "What's up with this file?" Well, since UWP applications aren't able to write out to the console, we instead have the app write out to a file. If all of the Unit Tests pass, the file is created; if any test fails, it is not. Simple as that. For the time being, the output file, unfortunately, does not have any real information. One day we could redirect what would normally go to the console into the output file and have the yml print the file out, but that would take longer to do than the amount of time I have left here.
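    Sketched in PowerShell (the file path and timeout here are assumptions, not the yml's actual values), the wait loop amounts to:

```powershell
# Poll for the success file the UWP test app writes when every unit test passes.
$successFile = "C:\Temp\GatewareTestsPassed.txt"  # hypothetical path
$deadline = (Get-Date).AddMinutes(5)              # hypothetical timeout
while (-not (Test-Path $successFile)) {
    if ((Get-Date) -gt $deadline) {
        Write-Error "Unit tests failed or timed out."
        exit 1
    }
    Start-Sleep -Seconds 5
}
Write-Host "Unit tests passed."
```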
    
    And with that, the GSource UWP branch was pretty much ready to get merged into main. We still had a little bit of tidying to do, but once that was taken care of, we had the pipeline run one more time and then merged the branches, and the merge passed all the tests!

    Next, it was GCompiler's turn. GCompiler's whole process went much faster than GSource's, since the vast majority of the changes we had to make were things we had already done in GSource. For the build step, all we did was copy over the build step from Win32 and add a few lines so it would run CMake to build the UWP version of the SinglerHeaderTestMain project. Speaking of which, we needed an actual CMake file to create the UWP version of the project, but because of how similar it is to the GSource test project, I was pretty much able to just copy it over, get rid of the Dummy project, and change a few names.

    The test step is exactly like the test step from GSource, just with slightly different paths and a different project name. And since the test step also looks for a success file, we just needed to add code to the test project's main to create the file once all the Unit Tests pass. With that done, GCompiler is ready to be merged into main. The pull request has been submitted and passes all the tests, so it will be merged any time now.

    And with that, Gateware-UWP is gonna be launched to the public and will be ready to use at any time. There are still some improvements that can be made, but for now, it is working; it just could be working a little nicer. There's a tiny bit more cleaning I'm going to do, but that shouldn't involve any functionality changes and will be really quick. Other than that, I'm pretty much done with Gateware as far as my time as a student goes. For my final week, I'm going to try adding some more functionality to my showcase project, to hopefully make it more than just a spinning cube with some music.

Friday, July 7, 2023

Month 4-Week 2 of Gateware: The Build Process is (Almost) Done

     We are so close to having the build process finished. It has presented a fair share of issues and has forced me to learn a lot of new things that I didn't think I would have to when I first began this journey. While we do have the vast majority of the build and unit test process finished for the GSource branch, we're still missing what is arguably the most important part.

    We got the whole build-and-test process set up so that we now have the project being created, built, deployed, installed, and launched.


    The meat of what's going on happens in the testing portion. The creation of the project in the build phase is pretty standard compared to the other platforms; the main differences are in testing. Firstly, we navigate into the build folder so that we have access to the solution file. We then use Push-Location to save the directory we are currently in. We do this because the next line of code opens the Visual Studio Developer PowerShell, and doing so changes our working directory to some other, seemingly random directory; Pop-Location then gets us back into the directory we had saved. We then use the devenv command to deploy the project (it also rebuilds it), which generates the AppX folder. At this point, we thought we would have to take the AppX folder, compress it into a .appx package, and use that package to install the application onto the machine. But through some testing, I discovered that the deploy command actually installs the application onto the machine itself, so we were able to skip those two steps, which I'm very happy about, since that install step was giving me a lot of trouble. Finally, the bottom command launches the application.
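    The sequence described above might be sketched like this (the solution name, configuration, and launch method are placeholders and assumptions, not the actual yml's commands):

```powershell
Push-Location  # remember the build folder before the dev shell moves us
# ...enter the Visual Studio Developer PowerShell here (it changes the directory)...
Pop-Location   # jump back to the saved build folder

# Deploying also rebuilds and installs the UWP app package on this machine.
devenv .\GatewareTests.sln /Deploy "Debug|x64"

# Launch the installed UWP app by its application user model ID.
explorer.exe shell:AppsFolder\<PackageFamilyName>!App
```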

    The problem we are having is that after the application is launched, the yml just ends there. On the other platforms, the Unit Test is a console application, so when it launches the executable, it opens in the command line, and it can't continue until the unit test completes. It is also then able to see whether or not all the unit tests have passed. But UWP isn't able to run as a console application, so the yml doesn't wait for it to finish. So right now, the unit tests only pass and fail based on whether or not the application is able to be launched, not on the individual unit tests. 

    So now we have to find a way to not only get the yml to wait for the application to finish all the unit tests, but also to read the results and determine when, and which, unit tests fail. Start-Process has several parameters that would be perfect for what we need, but unfortunately, they all seem to be incompatible with UWP applications. We are so close to the end; I can taste it. But this is looking to be one of the more major issues and could potentially take some time to complete.