Monday, December 14, 2020

The Tales of Vulkan on iOS: Simple mistakes

    At the end of last week, I thought I would be done with Vulkan for good. It turns out a simple mistake I made back when I ported GWindow got me stuck on Vulkan again. To understand what got me stuck, we first need to learn about UIKit's UIViews and CALayers. An NSView, which macOS uses for content in windows, can optionally be "layer-backed", and CALayers can be added to the view to get special rendering effects like gradients, fades, and animation. Metal subclasses CALayer as CAMetalLayer to provide GPU-accelerated rendering, and this is true on both macOS and iOS. Where they diverge is how you actually get a CAMetalLayer onto a view. On macOS you can just add it at runtime, as long as you are on the UI thread. iOS has more restrictions, but we will get to that later. The critical part: to make a Vulkan surface for iOS, you need to pass either a UIView that has a CAMetalLayer attached or a CAMetalLayer instance itself.

    Unlike NSViews, UIViews are always layer-backed. This means a UIView needs to define its layer in the "layerClass" class method whenever you need something other than the default CALayer. When I first got Vulkan rendering, I did just this, but I soon found an issue with that approach. Like Ozzie, I ran into the duplicate-symbols issue when trying to link to the Vulkan surface file. This had not happened when the UIView subclass lived in a different file. Normally this wouldn't be a big deal: just use the Objective-C runtime library to get around the one-implementation-per-file rule. Although this could be done, I could not find a way for Vulkan to detect the Metal support I was adding at runtime. I even tried changing the methods on the meta-class of the subclassed UIView, with no luck. Vulkan just would not recognize the Metal support.

    This is where I started to panic. I had effectively 5 days until my presentation and I no longer had Vulkan rendering. I started looking for alternatives to passing the UIView to Vulkan and found that you can pass the CAMetalLayer instance itself. I tried casting the CALayer attached to the runtime-created view, but even that was rejected by Vulkan as not Metal-compatible. Out of ideas, I looked for an alternative to subclassing UIView altogether. It was lucky that I did, because I found a subclass of UIView that (presumably) was not created using the Objective-C runtime library: MTKView, which stands for Metal Kit View. Create an instance of this class and it will have a CAMetalLayer attached by default. This saved my skin, as it made the Vulkan surface-creation method happy. But there was one other problem: the view would only render to the top-left quadrant of the screen. So it was back to the lab.
    
    I tried adding a sublayer to the default UIView and rendering with that, but it only made the problem worse, even though it did render. Finally I had to call for some help. I first got Ozzie in a call to see if I was doing anything wrong with the Objective-C runtime, but it turns out I had made the classes perfectly (somehow) and it really was Vulkan not recognizing the view. Then I called Lari, the graphics man himself. He instantly recognized it as a viewport scaling issue. We got to work messing with the scaling options in the sublayer implementation. That did not work out well, so we went back to the MTKView implementation. Here is where we made a big discovery: iOS uses an odd method of reporting pixel data. UIKit gives you a width and height in points, and on retina-enabled devices those points are multiplied by a "Scale Factor" property to get physical pixels. This differs per device; usually the larger the device, the larger the scale factor. The iPhone 8 has a scale factor of 2, while an iPhone 6s Plus has a scale factor of 3. Older phones like the iPhone 3GS have a factor of 1. 

    With all of this information we finally concluded that the view and layers all had the correct scaling and resolution applied. But one object did not: the Vulkan swapchain. I did not mention it earlier, but neither Vulkan nor Metal reported any errors while rendering. That is because the Vulkan swapchain was simply half the resolution it should have been. By doubling the width and height of the swapchain, we fixed the top-left-corner rendering issue. The problem was in GWindow all along: it did not account for the scaling of retina devices. Once I resolved that mistake, the Vulkan surface worked like a charm. 
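The fix boils down to a points-to-pixels conversion. Here is a minimal sketch of the idea, with illustrative names (this is not the real GWindow API):

```cpp
#include <cstdint>

// A view's bounds come back from UIKit in points; retina devices map one
// point to scaleFactor x scaleFactor physical pixels. The swapchain must
// be created at the pixel size, not the point size.
struct Extent2D { uint32_t width; uint32_t height; };

Extent2D pixelExtent(double pointWidth, double pointHeight, double scaleFactor) {
    return Extent2D{
        static_cast<uint32_t>(pointWidth * scaleFactor),
        static_cast<uint32_t>(pointHeight * scaleFactor)
    };
}
```

For an iPhone 8, a 375x667-point screen at a scale factor of 2 yields the 750x1334-pixel extent the swapchain actually needs; creating it at 375x667 is exactly the "half resolution" bug described above.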

References:

UIKit

MetalKit

MoltenVK

UIView

MTKView

NSView

CALayer

CAMetalLayer

Objective-C Runtime Library

Monday, December 7, 2020

Vulkan on iOS part 2

 After being able to link the Vulkan libraries in our Xcode iOS project, we are finally able to write Vulkan code for iOS. To use Vulkan with iOS, we need to give Vulkan the pixel buffer area for the screen. Usually this is done through something like an HWND for DirectX, but on iOS we cannot get that low, to a raw window pointer. So Vulkan asks for a UIView pointer, but not just any UIView: the one you give Vulkan must have a backing layer of CAMetalLayer, or something that derives from CAMetalLayer. On macOS this is easy to achieve; just grab the NSWindow and say that its backing layer is Metal. On iOS, it gets a bit messy.

 iOS UIViews are layer-backed, meaning they are given a layer upon creation and you absolutely cannot change it afterward (unless you write a custom view controller). This creates a conflict for Gateware. We want the user to be able to create a UIView themselves if they wish and still be able to use Vulkan, but we cannot change what they have specified. It is possible to run 2 controllers at the same time, or even just 2 views on the same controller, but that solution has its drawbacks too. Since Gateware is usually used for games, the extra view or controller would be unnecessarily using system resources. There must be a solution where the user can create a window and have Gateware put Vulkan into it.

 This is where I had an idea: if we cannot change the current view, can we change the current view controller? The answer was simple. As long as you present the view once you are done swapping them, the newly created view is the one that is visible and on top. So I decided to create a Gateware view and view controller with support for Metal. Next was just to detect whether or not the user's view was Metal-capable and overwrite it if not. This gives easy access for people wanting to develop Vulkan for their games and a level of control for people that already have a game looking to use Gateware.

  I am glad to say the effort paid off. After fiddling around with some of the desktop unit tests (and some incorrect file IO pathing), I was able to run them on the iOS Simulator and an iPhone 7 running iOS 13. It feels nice to finally be done with Vulkan and to move on to Audio.

 References:

MoltenVK 

https://github.com/KhronosGroup/MoltenVK

UIKit UIViewController 

 https://developer.apple.com/documentation/uikit/uiviewcontroller

UIKit UIView 

 https://developer.apple.com/documentation/uikit/uiview

Metal CAMetalLayer 

https://developer.apple.com/documentation/quartzcore/cametallayer

Monday, November 30, 2020

Getting 3D Graphics on modern iOS without Metal

    Apple deprecated OpenGL in 2018 with iOS 12 and recommends using their own graphics API, Metal. The issue is that not all cross-platform developers have the resources to port their game to Metal, instead choosing a portable API like OpenGL. Gateware likes to support as many platforms as possible, and although we plan to support Metal in the future, I don't have the time to design and implement a Metal surface in the time I will be working on Gateware. So what options does an iOS developer have for 3D graphics? There are 3: Metal, which is not viable at the moment; OpenGL, which isn't viable either, since I would have to have apps run exclusively on iOS 11.4 or earlier; and Vulkan. If you didn't know, you absolutely can run Vulkan on iOS. Now, this isn't like other devices; iOS doesn't officially support Vulkan the way it supports Metal. Instead you run Vulkan code through a translation layer, and MoltenVK acts as that layer. MoltenVK takes Vulkan calls and translates them to Metal calls, so the iOS device sees native Metal code.
    This Vulkan solution is good in my case, since we already have a Vulkan library that I can port, and the porting process is not all that complex because Vulkan was made to be portable. Most of the code should port copy-and-paste style. That saves a lot of time, since I only have a few more weeks of working on Gateware and I would like to have 3D graphics for my presentation. Getting MoltenVK into a cross-platform project isn't as straightforward as I would like, but it isn't too complex either. We use Vulkan on macOS in much the same way, so my first thought was to copy the macOS MoltenVK linking process over to the iOS target, but that is unfortunately not possible. Since iOS doesn't support Vulkan, you cannot just place Vulkan code in an app and expect it to work; you also need to package MoltenVK in your app bundle for the Vulkan code to work properly. 
    The next step was to get MoltenVK for iOS. I had thought I could reuse the macOS files for this, but that was incorrect. LunarG provides macOS files for MoltenVK, but I needed iOS files, so I went to the GitHub repository to build the library myself. This turned out to be a big waste of time. Although the library was surprisingly easy to build, it takes quite a long time, around 10 hours on my old MacBook for both iOS and the iOS Simulator. On top of the long build times, it turns out that the files LunarG provides actually contain the .dylib and .xcframework files for iOS and the iOS Simulator. This revelation made me realize I had spent 3 days building a library I already had.
    Now that I had the files I needed, I could get to properly packaging MoltenVK into the iOS app bundle. This is where I ran into issues. For macOS, we look in the default location for libraries and link to a .dylib file. Although possible in an iOS project, this isn't ideal, since there is no easy way to ensure that the user's iOS device has MoltenVK installed. As such, I needed to link another way. This is where the lovely MoltenVK documentation shines. It describes 2 ways to link that would work: I could link to the .dylib file, copy it into the app bundle, and change 8 paths in the build settings for that target, or I could copy the .xcframework file and change a single build setting. I opted for the .xcframework route for simplicity. From there I could run Vulkan code. 
    At the moment I am working on a CMake script that automates this process for future Gateware developers working on iOS. I am having a bit of trouble with it, but that will be it for this post. 


References:

MoltenVK Github Repo

LunarG MoltenVK Download
https://vulkan.lunarg.com/doc/sdk/1.1.130.0/mac/via.html

Monday, November 16, 2020

NSBundle or the C++ way?

 While developing GFile, I found that you can use both C++ and NSBundle for accessing data from the app bundle on iOS. I wanted to know which way was better for the architecture we are using, so I researched which is more widely used, what the best practices for iOS are, and finally, what would fit into how Gateware works.

The first thing I noted from this research was that Apple recommends using NSBundle for accessing files in the app bundle. Another thing going for NSBundle is that you do not need the file path to the file you want; you can also find a file by its filename, assuming there are not multiple files with the same name. These benefits were appealing when deciding which file IO system to use. There were also drawbacks, chiefly a memory-management one: NSBundle can dynamically allocate memory if it finds it needs more while searching the bundle for your assets. Another drawback to consider, although it has nothing to do with NSBundle itself, is that Gateware already has an interface built around file paths as the main form of traversal through a file system, not searching.

There are other options to consider, though. C and C++ style file IO is also possible on iOS, albeit not recommended by Apple. Going this route, I could make minimal tweaks to the existing codebase and reuse most of the Mac implementation for iOS. While this is the simplest solution, it comes with its own issues. First, Apple does not recommend using app bundles this way. Second, the C and C++ ways may not be available in future releases of iOS, as some Apple employees have hinted in the developer forums. 

In the end I decided to go with the C++ way of file IO for iOS. That way I could spend more time on other libraries, and I wouldn't have to change something that mostly already worked. I had to change a setting in Xcode for how it creates the bundle to avoid a crash with the C++ way, but other than that the code worked. This solution ended up being good for more than time: it also saved some precious mobile memory, kept Gateware away from search-based file IO, and kept me from writing code based on whether or not a file was in the bundle.
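The C++ approach can be sketched with a pair of small helpers (the names are mine, not the real GFile API). It assumes you already have an absolute path into the app bundle; the bundle's root path itself still has to come from the platform before plain C++ IO takes over.

```cpp
#include <fstream>
#include <iterator>
#include <string>

// Read an entire file into a string. Returns false if the file cannot be
// opened, e.g. it is not in the bundle or the sandbox denies access.
bool ReadWholeFile(const std::string& path, std::string& out) {
    std::ifstream file(path, std::ios::binary);
    if (!file.is_open()) return false;
    out.assign(std::istreambuf_iterator<char>(file),
               std::istreambuf_iterator<char>());
    return true;
}

// Write a buffer to a file. Writes must target a writable sandbox
// directory (tmp, Library, etc.); the bundle itself is read-only once
// the app ships.
bool WriteWholeFile(const std::string& path, const std::string& data) {
    std::ofstream file(path, std::ios::binary);
    if (!file.is_open()) return false;
    file.write(data.data(), static_cast<std::streamsize>(data.size()));
    return file.good();
}
```

Checking the boolean result instead of letting an exception escape is what sidesteps the missing-file crash mentioned above.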

Finally "Fixing" Suspend and Resume

     Since last time, I have ported DX12 to UWP and the suspend/resume issue has been "resolved". The reason that is in quotation marks will be explained shortly.

The Problem:

    It turns out I found a slight bug/feature in Gateware's GEventReceiver interface: if a GEventGenerator pushes events from multiple threads, events can be missed. That was exactly my suspend/resume issue. My rendering loop was not resuming because the event never reached my receiver, as GWindow on UWP pushes from 2 different threads. 

The Solution:

    The solution was to use GEventQueue instead of GEventReceiver. With a queue there are no missed events; you pop from the queue in a while loop until it is empty. While this fixed my problem, it uncovered a real issue with GEventReceiver, which Lari is currently looking into. As of writing this, GEventReceiver only stores one message at a time, in a non-blocking fashion, which is safe from a threading perspective but not ideal if a user expects every message to go through, like I was.
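The drain pattern can be sketched with a minimal thread-safe queue (illustrative only, not the real GEventQueue API): every pushed event is retained until popped, so nothing is lost even when producers push from multiple threads.

```cpp
#include <mutex>
#include <optional>
#include <queue>

class EventQueue {
public:
    // Producers (potentially on different threads) push events here.
    void Push(int event) {
        std::lock_guard<std::mutex> lock(m);
        q.push(event);
    }
    // The consumer pops until empty; std::nullopt signals the queue is drained.
    std::optional<int> Pop() {
        std::lock_guard<std::mutex> lock(m);
        if (q.empty()) return std::nullopt;
        int e = q.front();
        q.pop();
        return e;
    }
private:
    std::mutex m;
    std::queue<int> q;
};
```

The consumer side is just `while (auto e = q.Pop()) { /* handle *e */ }`, which is the while-loop drain described above.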

    The attached gif shows me suspending and resuming my demo scene. I am using an event queue to capture events from GWindow to know when I am suspended or not, and I use that to flip a bool that stops the rendering loop.



     

Thursday, November 12, 2020

BlueScreen of Death

 Not that blue screen. 

BlueScreen is one of the templates we maintain for users of Gateware to learn from or build off of. It is a cross-platform implementation of an OpenGL renderer: it uses Gateware libraries to create a window and then renders a colored triangle inside it, on both Linux and Windows. It is also supposed to change color when you resize the window. For some reason, on Linux only, the resize event wasn't being received properly, so the lambda passed into the create function never got hit.

I found this issue while updating and testing the 1.2 release candidate across all the templates on all platforms, and so I was assigned to fix it. I spent most of last Thursday, Friday, Monday, and Tuesday trying to solve this one problem. 

First of all, the main hurdle was trying to debug using CodeLite on Linux. For some reason, if you build the project with CMake, you'll have no debugging symbols unless you specifically ask for them. To solve this, I had to copy the LinuxSetup script from the devops folder in the main Gateware dev repo and modify it to work in a different directory.

Once that was sorted, I got to work. I set breakpoints everywhere I could think of that might be useful. No luck. I put assert statements everywhere I could think of. No luck. I added print statements for debug information. No luck. 

Countless hours of research into X Window and X11 on Linux, with nothing to show for it. Wednesday comes around and we have a release team meeting (which I missed, unfortunately). Ozzie and Lari worked on it, and Lari came up with a very clever way to debug the problem. He put a print statement in the lambda, put a return at the beginning of the Create() function, ran the program, then moved the return down a few lines and tested again. They knew they had found the offending line once the lambda stopped printing.

Apparently, the issue was in X11. OpenGL was overriding settings that GWindow originally sets up and some of those settings had to do with whether the event got received. 

Definitely not an obvious fix.

Monday, November 9, 2020

The iOS App Bundle

 Last week I was tasked with porting GFile to iOS. As part of file IO, GFile needs to be able to get content out of the app bundle that ships with the app executable. The end user places all of their assets into this bundle, and it is accessible read-only from the bundle once the app is released on the App Store.

I ran into a bit of an issue with this: I did not know how to get GFile to read information from the app bundle. So I read up on the bundle documentation, and Apple suggests using the NSBundle class to grab the info. At first I thought to use the Apple-recommended method, so I looked into using some kind of search method to see if the current directory was in the bundle itself. After seeing what a difficult task this was, I looked into using C++ functions to read the data instead. I found that you can use plain C functions to read data from the bundle, assuming you can get the path to the bundle. I tested this with the equivalent C++ code, and it worked nearly perfectly; an exception was thrown when you tried to access a file that did not exist in the bundle, due to a change made in iOS 13. 

I then researched this iOS 13 change, and it came down to a setting in Xcode that a new iOS project usually sets but our CMake project did not. I set the setting and the code works as intended. I will be moving on to GLog, which should be a fairly simple library to port, and then on to GWindow in hopes of getting some graphics running in the future.

 

 

GController Woes

    Since last time, I have finished the DirectX 11 port. It works great, and I even ported over a previous Gateware project from a few months ago with very little effort. I had to get rid of all the input detection I had, as it used functions that just aren't available on UWP. I honestly surprised myself with how quickly it came together. All the old Gateware calls I originally wrote 6-7 months ago for a different platform worked on UWP thanks to the work I have done so far. With that out of the way, on to GController I went.


Problem:

    The initial plan was to just get the XInput implementation from Win32 working. From the research that was done, it should have been possible to use that code as-is and have it work. It compiled just fine, but it would not detect any input being sent in. As hard as I tried, it just would not work for me. Time is running short before graduation, so I can't allow myself to stay stuck on any one thing for too long. I also still have a minor issue with suspend and resume I need to address.

Solution:

    The solution turned out to be rewriting a good portion of the library using WinRT APIs. This took two days to get working fully, my whole weekend. But now that Xbox One controllers work on UWP, getting general controller support will be very easy. Getting general controller support working on UWP could also bring it to the Win32 platform, as the same APIs can be used there. That would finally give Win32 PS4 controller support.

To Do:

  • Fix the Suspend/Resume issue.
  • Get general controller support.
  • Merge development branch into App development branch to get up to date fixes for libraries.
  • DirectX 12

Thursday, November 5, 2020

I Broke Vulkan

 So now that I've finished cleaning up all the warnings on Windows, Linux, and Mac, we're getting pretty close to a new version release. Part of that process is merging branches back into the main development branch. I merged my warnings branch back in and set to work testing and updating the templates to the new version.

The first project I opened was RedScreen, our Vulkan template. It built and ran fine, but if you tried to resize or maximize the window, this happened:



Good ol' Vulkan puking up errors that nobody can understand. 

Basically, what it's saying is that the swap chain needs to be recreated from scratch every time you resize the frame buffer (the window), but it wasn't happening.

Testing with the previous version of Gateware.h showed that it worked fine there, so clearly somebody screwed things up recently. Lari initially thought he broke it with a fix that he added earlier in the day, but I tested the commit right before his, and the problem persisted. 


So now I'm searching through commit after commit, trying to find all the instances where the Vulkan core is modified. And lo and behold!


Culprit located. In my very first commit to the Gateware repo, I found this line with an unused return value. I seem to have brain farted, because instead of just removing the assignment, I removed the entire line. We need GetSurfaceData to actually set the proper variables when the window/surface is resized. This problem was there for over a month; it wasn't caught until now because we don't have unit tests that check whether the Vulkan surface survives a resize. 

So I added GetSurfaceData back in, tested RedScreen again and...


All is well in the Vulkan template again.

Wednesday, November 4, 2020

Apple's Uncanny Entry Point

     The week before last I encountered a problem, one that I believe very few developers, if any, have tried to solve. To understand the problem we must first understand the goal. Gateware code is meant to be compatible with 3 (currently) platforms: Windows, Linux, and macOS. Over the last few months, Chase and I have been porting Gateware to less desktop-based environments, Windows UWP for him and iOS for me. There is a subtle difference between the desktop and app ecosystems that most developers don't think of off the top of their heads: entry points. An entry point is the beginning of any program, and most (if not all) programs need one to start. Windows (Win32), Linux, and macOS all use some form of

 int main(int argc, char** argv); 

for an entry point, but things get a little funky with app based programs.

    UWP and, if you configure it correctly, Win32 take a different function as an entry point

 int WINAPI wWinMain(...);  

and while this may be convenient for developers of those platforms to get the extra information from the parameters of this function, it makes the life of a cross-platform developer much more difficult. To account for this change in entry, we made GApp, a class that would define whatever entry point was required on a platform, and call the usual main that the user would write. This keeps the code simple as the users of Gateware would only have to write one main for every platform. For UWP this was pretty simple, you take main as a parameter to a global constructor and run it from there, making sure to define WinMain at the same time. For iOS, this is a different story.  

    iOS does use a traditional main, so there's no issue, right? Well, there is a caveat. To get the storyboard to render, one must call the

 int UIApplicationMain(int argc, char * _Nullable *argv, NSString *principalClassName, NSString *delegateClassName);  

function. Well then, just call this function in GApp and we're good, right? Nope. Not only must this function be called on the main operating thread, it must also be called from main, or from any function that main calls (as long as it stays on that thread). This is an issue: how can we call this function and still run the code the user supplies? You cannot put it on another thread, and once called it takes over main, so the regular main code would never run. You also cannot dispatch this function, as dispatch will favor UIApplicationMain over the traditional main, and the traditional main will be put on the back burner.

    So is that it? Is there a way to have main run on the same thread but asynchronous to UIApplicationMain? I do not know. But there is an unexpected solution. With UWP, GApp defines the main entry point and calls the user-defined main later, so why can't we do the same with iOS? So we defined the traditional main entry point ourselves, and later in the file did this

 #define main Gateware_main  
and it worked. As with the UWP GApp, the iOS GApp constructor takes the main function as a parameter and calls it later, but this time the main function the user writes is not used as the entry point, since the compiler now sees it as "Gateware_main". Instead the compiler sees the GApp main as the entry point and runs that. So we get both functions: the GApp main handles everything on the iOS side and then calls the user's main so their functionality still works. This satisfies UIApplicationMain's requirement to be on the main thread and called from main, and the user's main has no requirement other than being the start of the program, so we run it as soon as the app is done launching.
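A stripped-down model of the trick looks like this (names are illustrative; the real GApp header is more involved, and on iOS the forwarding happens after UIApplicationMain reports the app finished launching):

```cpp
// The header does this before the user's code is compiled:
#define main Gateware_main

// The user writes a perfectly normal main...
int main(int argc, char** argv) {
    (void)argc; (void)argv;
    return 42;
}

// ...but the preprocessor has renamed it, so the symbol that actually
// exists is Gateware_main. The library is now free to define the real
// entry point itself and forward to the user's code when it is ready.
#undef main

int RunUserMain() {
    // Stand-in for the real entry point's hand-off to the user's code.
    return Gateware_main(0, nullptr);
}
```

The compiler never sees a second `main`, so there is no duplicate-symbol clash, and the user's code runs unmodified.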

     

Monday, November 2, 2020

3 Months with an Undiscovered Bug

     Since my last blog post, GWindow has finally been finished. Well, maybe not finished, but it is in a good enough place that I can start porting the other important libraries someone would need to make a game for Xbox. This was a decision that had to be made: I can't spend the rest of my last month on this project getting one library to 100% while others still sit at 0. With GWindow mostly out of the way, I began the DirectX 11 port. This is where I found the bug that had been there for 3 months and was only now discovered.

The Problem:

    When I went to use the macro IID_PPV_ARGS, I kept getting the error that it was undefined, when I know for a fact that it isn't. This led me down a rabbit hole of trying to figure out why. After a little research and cross-referencing other C++/WinRT projects, I found that C++/WinRT overrides a function that the macro expands to. This override lives in base.h, which is generated at build time by C++/WinRT. It turns out that this whole time, my UWP Gateware project was never generating any of the header files it needs for the language projection from C++ to the WinRT APIs. How was it even running, you ask? It was using the WinRT header files included in the Windows 10 SDK, which are from the first iteration of C++/WinRT, and that base.h did not have the override the macro needed.

The Solution:

    Lari had the idea to check the .sln and .vcxproj files to see if anything differed from the projects we knew worked. Eureka! There was a discrepancy in the .vcxproj file: it did not include the references to C++/WinRT that the working projects did. I needed a way to get those in automatically, without me or future devs having to go in and fix it by hand. What I ended up doing was creating a PowerShell script that takes the file after CMake generates it and edits it. I tried my best to write it so that if more files are added to the project, the script will not need to be rewritten. Here is that script: 


     This script adds the references back into the .vcxproj file and allows the project to generate the header files it needs. In the end, the macro worked again and I could continue my DirectX 11 port. Speaking of which, the port is coming along great so far, minus a few threading issues. I'll save that for next week when it's done.

Friday, October 30, 2020

Apple Heard You Like Braces

    After cleaning up all of the user-visible warnings on Linux earlier this week, I moved on to doing the same for Mac. For the uninitiated, Apple uses a custom version of the Clang compiler. This compiler is very picky about how you should write your code, and it will throw all kinds of warnings at you. In this case, I was getting something like 400 instances of -Wmissing-braces.

    All of these warnings came from one file... the implementation for GCollision. GCollision is very large; some 13,000+ lines. GCollision also uses lines, vectors, and matrices that are defined in GMath like this:


The union is just there to let us access the two vectors (points) with an index if we wanted to loop or something like that. Using a GLINE is pretty straightforward. Here is an example of a declaration of a GLINE within some function: 

    The first line in the function sets data in the first vector of the line. The second line does the same for the second vector. All is well, right? 

    Not according to Apple:

    "Suggest braces around initialization of subobject"

    Apparently, this is caused by the way we declared our line/vector/matrix classes. Apple clang treats the class itself as one scope, the union as another scope, and the struct within the union as a third scope. 

    The solution to this warning is to add braces for each scope. So for our GLINE here, we have to add them for the scopes of the line class itself, and then also for the vector classes within the final scope.

    It looks like this...

    ...and it hurts me.
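A minimal sketch of the pattern makes the scopes visible (simplified types; the real GMath GLINE/GVECTOR definitions differ):

```cpp
struct GVECTOR { float x, y, z, w; };

// The anonymous union-plus-struct is what creates the extra scopes
// Apple clang complains about.
struct GLINE {
    union {
        struct { GVECTOR start, end; };  // named access
        GVECTOR data[2];                 // indexed access, e.g. for loops
    };
};

// One set of braces per scope: the line, the union, the union's struct,
// and then each vector.
GLINE MakeLine() {
    GLINE line = { { { { 0.0f, 0.0f, 0.0f, 1.0f },       // start
                       { 1.0f, 2.0f, 3.0f, 1.0f } } } };  // end
    return line;
}
```

Without the inner brace levels, the initializer still compiles, but Apple clang emits -Wmissing-braces for every subobject it has to infer.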


    After figuring out the solution, I now had to add braces like this to all ~400 instances of this warning in GCollision, by hand, on a remote Mac three timezones away.

    In the end it took me about four and a half hours.


    Why, Apple? Why did you make me do this? *cries in braces*

Monday, October 19, 2020

The Sandbox environment in iOS

     Over the last week I have been porting GFile to iOS, and so I have been getting used to the environment iOS places app developers in. iOS apps only allow developers to access certain directories when doing file I/O; any directory accessed outside of these specified locations gives you a "permission denied" error. The app needs to be given access to directories outside its "sandbox" by the user, and even then you still do not have access to the whole device. With these restrictions, I need to shape the GFile implementation so it does not access files the operating system does not allow.

    I started the GFile implementation by copying what was done on Mac and molding it into something that would work on iOS. At the moment I have some of the functionality working, but not all of it. The first thing was to get file opening working. This was as simple as checking whether the file was in the sandbox and handing it to the user; if it was not in the sandbox, I would return an error stating that the user does not have permission to access the file.

    Another thing to consider was the directories the user has access to, specifically what to name them. After a long meeting with the other app-porting developer, we came up with a few names for Gateware to use that are not platform-dependent like the names used by Apple and Microsoft. Here are some examples of the directory name changes:

Library -- UserFolder

tmp -- TempFolder 

Library/Caches -- CacheFolder

    Overall the GFile port has been going smoothly, I hope to complete this by Wednesday. 

Porting GWindow to UWP

    Porting GWindow seemed like the obvious next step to take after porting GFile. GWindow is essential to Gateware, so getting it to work on UWP with parity with the rest of the Gateware platforms is crucial. This will be a short update, as the GWindow port is still ongoing with many other things that will need to be done, such as the unit tests.

The Problem:

    I knew going in that GWindow was going to be a handful; what I didn't expect was for most of the library to be deemed unsupported by the UWP platform. Most of the important pieces, like WndProc, are inaccessible on UWP because of the sandboxing that happens within CoreWindow, which makes functions such as ProcessWindowEvents useless. This left me with the question: "What is GWindow going to be used for in UWP?"

The Solution:

    I beat my head against a metaphorical brick wall for about three days trying to figure this out before I decided to implement only the functions that made sense for UWP. The purpose of each implementation of the GWindow interface is to provide functionality for that platform, so that is what I did. Only a few functions made the cut, such as GetClientWidth, GetClientHeight, and GetWindowHandle.

    Figuring that out was the easy part; the real hurdle was the revelation that to get those functions working, I would need access to the CoreWindow. This is made difficult by the fact that it must be accessed from the main UI thread. This was solved with these two functions:

    Calling the second function allows you to pass a lambda expression containing whatever you would like to happen on the UI thread. This is where I would get access to the CoreWindow and perform the actions I needed.

To Be Done:

    Currently, the GetWindowHandle function is still not working. That comes down to the function's return type, 'void*'. As of right now, I have not successfully gotten CoreWindow to cast to it correctly.

Monday, October 12, 2020

Rectifying XCode thinking an iOS project is a 32-Bit MacOS project

     I was planning on porting GFile to iOS, but my plans were derailed by XCode. Last week I had been trying to get XCode to recognize two separate targets in one project. I had achieved that task, but one issue remained: XCode wanted to compile the iOS target as a 32-bit Mac executable. As such, the symbols for iOS were not recognized and the build failed, and so I was stuck once again. The unit tests for Gateware's pure C++ code, which should run on anything that can compile C++11, were not even compiling due to this error.

    Most of the compilation errors were from mismatched architectures. I resolved to fix this by creating a new iOS project via XCode, looking at what build settings were enabled by default, and matching those settings on the Gateware iOS target. After a lot of fiddling with the settings, I had gotten the compilation errors down to just 6 parsing errors, all from the iOS-specific classes. I had to change one final setting to resolve them: the "hide Clang debug symbols" setting.

    Once all of the settings were in order, I had one final task to complete to get the unit test app running in the simulator. The "Info.plist" file tells the device the information it needs about the installed app. I had CMake set up the app bundle name and version through a .plist template that CMake provides. After those two entries were filled, the app ran in the simulator, and the unit tests for the Gateware Core and Math libraries ran and passed.

    I hope to get more libraries ported over to iOS and this step in the process will help immensely with development and testing of the libraries.



Getting Gateware Unit Tests to Run on Xbox One

    Since last Wednesday when I wrote my last blog, I have gotten quite a bit done. The GFile port is officially done, with all new GFile functionality working on all platforms. The issue that I found on Mac last week turned out to be a new bug that only seems to appear on MacBooks, very strange. After that, I finally got the chance to merge with GApp, the class that Lari wrote for all future mobile ports of Gateware. There were only a few conflicts to resolve and a tweak to get it working perfectly. I finished all that before the end of the day on Friday and figured it was about time I got around to actually putting Gateware on an Xbox, the main reason for bringing Gateware to UWP.

The Process:

    The steps required to get an Xbox One, or any other variant of the Xbox lineup, into developer mode are quite simple. The Xbox One that I currently have was lent to me to work on and already had dev mode enabled, but for the sake of knowing how it is done, I started from scratch: I deactivated it and went from there.


    There are very comprehensive guides that can be found on Microsoft's website. I followed them as far as I could until I ran into an issue. As of writing this, there seems to be a bug with the dev mode app that can be downloaded from the Windows Store. It seems to always hang on this screen, which is a big problem if you want to start developing on the console.



    After looking around, I found an alternate way to get into dev mode. This solution requires pressing all four shoulder buttons on the controller at the same time in the System Console Info section of the Settings app.


    Once all 4 buttons have been pressed, the option shown above appears; pressing it will bring up the dev mode confirmation screen.


    After pressing Continue, the console will appear to reboot, and after about 5 or so minutes the Xbox will boot up into dev mode.


    The next step in the process is getting Visual Studio set up properly to connect with the Xbox. Instead of rehashing what is already explained really well in a Microsoft document, I will just link it here. In short though, it is as simple as typing the correct IP address into the properties tab of the project.

The Result:



WARNING: fixing warnings...

Week 1

    So I'm new, but I've been here for a couple of weeks already. I just forgot to do the first week's blog post. Super professional, I know. 

    In my defense, there really wasn't anything too interesting going on. I was just trying to get dev environments set up on the three major platforms (Windows, Mac, Linux). The only real problem I had was during the installation of Linux.

    I was trying to set up a dual-boot situation on my main Windows machine so I would be able to boot into either OS. The Gateware repo has a tutorial on the subject, which I followed. Everything started off just fine. I was able to boot into the Linux Mint installation environment from a USB drive and then create a new partition on one of my SSDs. The problem arose when I went to perform the actual install of Linux. 

    The installer is supposed to recognize your Windows installation and then give you an option to "install alongside Windows", but for whatever reason, mine didn't. I had to do everything manually. This meant I had to designate parts of the new partition as the root (/) and /home mount points and pick their sizes and formats by hand. Thankfully there are plenty of online materials and tutorials that tell you how big and what format these things should be. The only problem was a suspicious dropdown box at the bottom of the installer that I didn't pay attention to. This box is for telling the installer where to put the boot sector.

    It defaulted to my Windows drive...

    So, as you can probably surmise, I ended up overwriting my Windows boot sector and losing access to my Windows install. All of my files and data were still there; I just couldn't boot into Windows to access them. 

    I didn't realize exactly what had happened at first, so I thought I could just boot into Linux, delete the boot sector, and have my Windows back, good as new. So I did that, and then I had no access to any operating system at all. 

    The solution to this was to go to another computer, create a Windows installation media USB drive, boot off of that, access the command prompt, and tell it to reinstall the Windows boot sector. Thankfully it worked, and I was able to begin the process again and properly select a different drive for the Linux boot sector. 

    And that's pretty much all the excitement for the first week. Everything else was just, "does this compile and run? Ok, cool. Moving on."

Week 2

    Now that I think about it, week one was way more interesting than week two. The first half of the week was just experimenting with the end-user side of Gateware to get a better understanding. The second half was just searching through warnings on Windows and fixing the ones that affect the end-user.

    That's pretty much it. 

Wednesday, October 7, 2020

Porting GFile to UWP

The Task:

    About two weeks ago, I ported the audio libraries over to UWP. Soon after I got the unit tests all passing I realized that the auditory output did not match what was heard in the win32 version of the Gateware unit tests. This was cause for concern but I could not continue to spend too much time on one library, so it was decided that I was to move on to another and mark the audio difference as a bug to come back to later. I decided to begin the porting of GFile as I had to do a little bit of file I/O work in the audio unit tests and it was fresh in my mind.

The Problem:

    My initial thoughts on the GFile port were ones of ease. While this did turn out to be the case, it did not seem like that at first. Everything I read up to that point in my research suggested that the code that was already written for the Win32 implementation of GFile should work just fine for UWP. But for the life of me, I could not get it to work. This led me to the conclusion that I should rewrite the entire GFile UWP implementation using WinRT API calls. This caused many problems in the end, mostly with functionality differences with the other platforms.

The Solution:

    It was this difference in functionality that drove me back to the code that was already there from Win32. It was at that point that I discovered why GFile was failing to write files out to the directory I had chosen, the install location. The install location of a UWP app is read-only, which is fine for assets and resources that are only meant to be read, so I made the change to write to the app folders instead, which are both readable and writable. That was the change that got GFile working perfectly using the Win32 code.

    That wasn't the end for GFile though. Severe code branching was happening in the unit tests, specifically for audio, to differentiate between the many platforms Gateware can run on. Imagine what is pictured to the right, but repeated for every sound requested in the audio unit tests; it was happening almost 90 times. It wasn't that this code didn't work, it was just messy and not really sustainable as more platforms, such as iOS and Android, are eventually added to Gateware down the road.

    This brought up the proposal of adding functionality to GFile to get the accessible file locations for the different platforms. This is mostly for the addition of the mobile platforms, as it has little application on desktop. Lari sat down with Jacob, the iOS dev on the project, and me last Friday, and we came up with a solution: new functions that get the correct directory for whichever platform you are on, with each platform implementation providing its own version. I finished the UWP implementation before the end of the weekend and have been finishing up the rest of the platforms, such as Windows, Linux, and Mac, through the beginning of the week. So far GFile is up and running on UWP, Windows, and Linux with all unit tests passing. Mac is still an issue, but that may be because I am running an older build of Gateware that has since had some of the bugs I am experiencing fixed; that is a separate issue in and of itself.

    With GFile done and working, GLog kind of just fell into place and it worked perfectly without any issues or tweaking needed. Next, is fixing up the Mac issue and finally merging with Lari's GApp branch.


    
 


Tuesday, October 6, 2020

XCode and iOS with CMake: A Challenge

    I was tasked with coming up with a way to develop Gateware on iOS. At the moment Gateware is using CMake to create the projects to develop on; this way the Gateware team can make multiple projects for every platform they develop for. At first I took this task as making another project specifically for iOS, and made a simple CMake script that generated an XCode project that would build Gateware for iOS. Making an XCode project purely for building on iOS is pretty simple, but I learned that it is possible for a single XCode project to have multiple build targets, and set out to make the Gateware macOS project also build for iOS. 

    It turns out that adding an iOS target to an XCode project through CMake is not as intuitive as making an XCode project only for iOS. At first I tried to change the project settings when generating the iOS target. The issue with this approach is that the other targets in the project were using the project defaults, and changing the project default to iOS would affect the other targets so that they also target iOS. I would have to change the targets already set up in the project for this strategy to work, and as that would require a large rewrite of the CMakeLists file, I set out to find another way around the issue.

    After searching the internet for a way to have a Mac and an iOS target in a single XCode project, I ended up going back to the CMake documentation. I found that in CMake you can change target-specific settings in XCode using the set_target_properties command and XCODE_ATTRIBUTE properties. I ended up creating a dummy target in CMake and overwriting the necessary build settings to essentially convert a Mac executable into an iOS app. The next step will be to get Vulkan installed on an iPhone using CMake.
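The relevant CMake looks roughly like this. The target name, source file, and attribute values are an illustrative subset, not the exact Gateware script; `XCODE_ATTRIBUTE_<name>` properties map one-to-one onto Xcode build settings:

```cmake
# Illustrative only: flip a single target's Xcode build settings from the
# project's macOS defaults over to iOS, without touching the other targets.
add_executable(GatewareIOS MACOSX_BUNDLE main.mm)
set_target_properties(GatewareIOS PROPERTIES
    XCODE_ATTRIBUTE_SDKROOT "iphoneos"
    XCODE_ATTRIBUTE_IPHONEOS_DEPLOYMENT_TARGET "13.0"
    XCODE_ATTRIBUTE_TARGETED_DEVICE_FAMILY "1,2"  # iPhone and iPad
    XCODE_ATTRIBUTE_CODE_SIGN_IDENTITY "iPhone Developer")
```

Because these are per-target attributes, the macOS targets in the same project keep their project-level defaults untouched.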

References:

https://cmake.org/

https://gitlab.com/gateware-development/gateware

    

Tuesday, September 22, 2020

Designing a Multiplatform Touch Interface Part 1

     With more people using mobile devices and touchscreen laptops, developers need to interface with these devices' touchscreen APIs. Gateware should be doing the same thing if it hopes to support mobile devices. One of the big issues with designing a touch interface is the different ways that devices store touch information. A good rule is that an interface should provide only as much functionality as its users actually need.

    When designing this interface one should consider what major platforms support touch and which are most popular. iOS and Android devices are likely the most common touch devices with touch as a primary method of input. Windows and macOS also support touch input, but they are secondary input methods, as most Windows and macOS devices do not have touch input. Other notable devices are Nintendo Switch, Smart Watches, and embedded devices. As such, I propose this touch interface mainly for iOS and Android, but with expansion options for those other devices. 

    All of these different platforms have different information sent to the developer when the screen is touched. I have created a struct that supports the main information that most developers would need.

enum class GTouchState : unsigned int {
	TOUCH_BEGIN			= 0,	// Finger just hit the screen.
	TOUCH_MOVED			= 1,	// Finger moved from last position.
	TOUCH_END			= 2,	// Finger lifted off the screen.
	TOUCH_STATIONARY	= 3,	// Finger is on screen but hasn't moved.
	TOUCH_CANCELLED		= 4,	// System stopped tracking the touch.
	TOUCH_COUNT,
};
 
struct GTouchData {
	// Current location of the touch in screen coordinates
	float x, y;					
	// Change in location in screen coordinates
	float deltaX, deltaY;		
	// First recorded location of the touch in screen coordinates
	float initialX, initialY;	
	// Time elapsed between the previous and current touch positions
	float deltaTime;			
	// Pressure of the touch
	float pressure;			
	// Number of consecutive touches in the same general area
	unsigned int tapCount;		
	// What touch is this? Always less than the number of total touches on the screen
	unsigned int index;			
	// What state is this touch in?
	GTouchState state;			
};

iOS and Android expose more information, but most of what is not included here is for stylus support, such as how the stylus is aligned with specific axes. Most Gateware users won't need this information, so I have omitted it to keep the interface simple.

    Another part of the API is common support for touch gestures. Many mobile users rely on common touch controls to get around their devices. Swipe up to go down on a page, pinch to zoom out, and two-finger spin to rotate, along with others, are just some of the gestures that this API should support for a better user experience. Here are the common gestures that most platforms recognize:

Single Tap

Double Tap

Rotation (Two finger)

Swipe

Pan (Two Finger)

Long Press

Pinch

    That's it for now. Expect another part to this blog once I have figured out what interface would be best for Gateware.


References:

    Touch data: 

        https://developer.apple.com/documentation/uikit/uitouch?language=objc 

        https://developer.android.com/reference/android/view/MotionEvent.PointerCoords

    Gesture recognizers:

        https://developer.apple.com/documentation/uikit/uigesturerecognizer?language=objc

        https://developer.android.com/reference/android/view/GestureDetector

 

     

Monday, September 21, 2020

Porting the Audio Libraries to UWP

The Task:

    Getting CMake to properly work with the Gateware Win32 and UWP projects was no simple task, but since it was finally done, I was able to start porting Gateware to UWP in earnest. I was given the choice of what to start with, and I chose the audio libraries, as it seemed like an (almost) one-to-one port was in the cards. I was right, but it didn't come without its issues.

The Problem:

    My initial evaluation of the audio libraries had me changing just a few functions here and there that were not available to UWP apps. This assumption did turn out to be correct, but with some caveats. I based all of the UWP implementations of the GAudio libraries on the Win32 ones, as they are all built using XAudio2, which is compatible with UWP. After changing some key functions that were not compatible, e.g. CreateFile() to CreateFile2(), I got everything compiling. Success? No, not really. When I would then run the unit tests, the program would call abort() from somewhere.

The Solution:

    The first thing to do was to find where abort() was being called from. Using the debugger, I was able to track it down to CreateFile2(). I was very surprised to find this, as the documentation for the function says it is compatible with Windows Store apps. After reading through it again more thoroughly, I found this: "When called from a Windows Store app, CreateFile2 is simplified. You can open only files or directories inside the ApplicationData.LocalFolder or Package.InstalledLocation directories." Knowing that I had not done any file I/O yet with any of the libraries, I knew pretty quickly this was the issue.

    Getting to the installed location of the app is done pretty easily with WinRT calls, so that was not the hard part. Getting the unit test resources, such as the audio files, into the installed location is what stumped me for the longest time. This is easily done in Visual Studio by including a file in the project and setting it to "Content" in its properties, but this needs to happen before a dev even opens the project. Back to CMake I went. I already had the Assets folder being put in as content for the UWP app, so I figured it would not be any different for another folder. I was right in that thinking; it just took a lot of trial and error to get it just right.
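The CMake side can be sketched like so. The target name and paths are placeholders; `VS_DEPLOYMENT_CONTENT` is the real CMake source-file property that marks a file as deployable content for a UWP app:

```cmake
# Illustrative: mark the unit-test resource files as deployable content so
# they land in the UWP app's install location alongside the Assets folder.
file(GLOB TEST_RESOURCES "${CMAKE_SOURCE_DIR}/TestResources/*")
target_sources(GatewareUnitTests PRIVATE ${TEST_RESOURCES})
set_property(SOURCE ${TEST_RESOURCES} PROPERTY VS_DEPLOYMENT_CONTENT 1)
```

This is the generated-project equivalent of flipping a file to "Content" in the Visual Studio properties pane by hand.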

    With that out of the way, I was able to get the sound files from the installed location and everything compiled and ran perfectly.


   As of right now, all audio unit tests pass.


Friday, September 18, 2020

The Extents of Controller Support

During testing last week, I had one of the manual Unit Tests fail for GController on the Mac. It was a test of the down button input on a controller, and no matter how much I pressed the button or reran the test, the input wasn't being detected. I found this very strange because I had tested it not too long before, and the test passed without issue. The difference between the time before and last week is that I was using a different controller.

Different controllers, different codes




The problem turned out to be a difference in input codes for the D-Pad between the two controllers. The generic controller I initially tested with used codes of 0 - 360 for the D-Pad. The PS4 controller used codes of 0 - 8. This discrepancy was easy to fix by adding support for both ranges of input codes. However, it got me wondering about the rest of the controller input.

Unit Test upgrade




The current Unit Test sampled a small portion of controller input, conducting only 11 input checks. On Monday, I made modifications to test every input supported by GController, totaling 40 input checks. I also changed the way the test behaves to make it faster to get through. These changes were necessary to find out whether there were more issues with controller inputs.

How a Unit Test is written matters

The first thing I discovered with the new Unit Test is a range problem with the trigger inputs. On Windows, the trigger inputs gave a value of 0 - 1. On Mac and Linux, the trigger inputs gave a value of 0 - 5. This difference was never caught by the original Unit Test because it looked for a value greater than 0 to satisfy the test's trigger-is-pressed check. In the new test, the value must be exactly 1. The actual values both my controllers reported were 0 - 255. Initially, I thought dividing by 255 to normalize the value would be sufficient. I eventually discovered I had one other thing to consider: how the device is connected to the computer.


Different connections, different values

When connected wirelessly, the trigger input of the PS4 controller gave values of 0 - 255. When connected via USB, the trigger input of the PS4 controller gave values of 0 - 315.

 


Thankfully, IOKit has a function for getting the maximum value of the trigger. Using that, we can futureproof the normalization of the trigger input values so we always get a range of 0 - 1 regardless of the controller or connection type. A solution like this is needed on the Mac with the controllers I have tested so far. It may also be necessary in the Windows and Linux implementations as different controllers are tested; at this time, a solution like that isn't implemented on those platforms.


Different connections, different identities

Linux is sensitive to connectivity as well. The PS4 controller's input mappings are different from the generic controller's. When the PS4 controller is connected via Bluetooth, Linux identifies it as a "Wireless Controller". Before, GController would use generic mappings for a controller with this identification, and with the wrong mappings, the square and triangle buttons on the PS4 controller were swapped. I changed the code so that PS4 mappings are used when a Wireless Controller is identified. This may be an issue if other brands of wireless controllers use different input mappings, but I don't have any other controllers to test that theory with.


It's not over yet

Next week, I'll be able to test an official Xbox controller with GController. The library was designed with this type of controller, so I don't expect any issues. However, I wonder if there will be a difference depending on the connection type used. We'll see.


Sidenote

I tried using the Steam Controller on Linux to test GController. When connected via USB, some of the inputs worked, some were unresponsive, and some caused the Unit Tests to crash. I tried installing the driver for it by installing Steam and connecting the controller via Bluetooth. Unfortunately, that ended up crashing Linux and causing issues with X Server. I managed to get X Server working again, but it changed my entire Linux GUI environment. If anyone tries this, beware.