NGUI Implementation and More

This weekend I was able to put a lot of time into Townsend. In a nutshell, I spent most of it on the UI system and on being able to pick things up from the ground (i.e. harvesting). Apart from learning how to use NGUI, this involved making an inventory system, figuring out how to make icons of in-game objects, and hooking the touch input code up to everything.

This is the same demo as in yesterday’s post. A new demo is coming in the next post :-)

It is immediately clear from NGUI’s code that it was developed by a programmer who knows what they’re doing. Everything is coded minimalistically and then combined into more complex entities, yielding snippets of code that are easy to understand and, honestly, reusable even outside of UI work. My code is a befuddled mess in comparison (80% of it is in one class, for instance). Thankfully, the system comes with plenty of documentation and example scenes to pick apart and borrow from.

After going through the video tutorials, I grabbed the most complex example scene and spent a good two hours figuring out how the scripts worked together and, consequently, which pieces I needed to reuse. In relatively short order I was able to make a button that toggled an icon on and off, and from there a grid that cycled between icons when you clicked on it. Sweet! Then I threw it into my game scene and watched it fail miserably.

Long story short, I had to set up cameras and layers to work with the new system, and modify my touch input code so that clicking on an icon doesn’t interact with both the UI and the level geometry. From there, I figured out how to scale the UI based on the screen height, so that it isn’t extremely small or large depending on the resolution of the window/device it’s being played on. There are a couple of ways to do this, but in the end I decided that by basing both the x and y scale of the UI on the height of the screen, I could get a pretty good success rate, assuming the aspect ratio of screens/windows doesn’t change too drastically. I’m still not convinced I did this right, and it will probably need tweaking later.
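The height-based scaling idea is simple enough to sketch. Here is a minimal, hypothetical version (in Python for brevity; the real thing is a C# script in Unity, and the 720-pixel design height below is my assumption, not Townsend’s actual value):

```python
def ui_scale(screen_height, design_height=720.0):
    """Uniform UI scale factor derived only from screen height.

    Using the same height-based factor for both x and y keeps UI
    elements proportional; it only drifts if the window's aspect
    ratio changes drastically, which matches the caveat above.
    """
    return screen_height / design_height

# A window twice the design height draws the UI at twice the size.
print(ui_scale(1440))  # -> 2.0
```

Because both axes use the same factor, icons stay square at any resolution; only extreme aspect ratios push UI elements off-screen.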

While modifying the touch input code I decided it drastically needed a refactor. Remember how I said there was one file with basically all of my custom code in it? I’ll give you one guess as to which file that is. The worst part was that, when I originally made touch interaction work concurrently with click input, I had split the handling into a separate function for each kind of input. In the beginning that wasn’t bad, but by the time I had to rework it the functions shared a ton of code; I guessed about 80% before I started refactoring. It turned out about 95% of the code was unnecessarily duplicated, and once I had analyzed it the actual fix took about 5 minutes. Doing it sooner would have saved me a couple hours of frustration over the course of development.
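The shape of that refactor, roughly: collapse the duplicated per-device functions into one shared handler, and keep only thin adapters for each input type. A hypothetical sketch (Python for brevity; the game is Unity/C#, and every name here is made up for illustration):

```python
class PointerState:
    """Input-agnostic pointer snapshot; mouse and touch both map onto it."""
    def __init__(self, x, y, began, ended):
        self.x, self.y, self.began, self.ended = x, y, began, ended

def process_pointer(p, world):
    """The ~95% of logic that used to be duplicated per input device."""
    if p.began:
        world["selected"] = (p.x, p.y)   # e.g. start a tap/drag on a cell
    if p.ended:
        world["taps"] = world.get("taps", 0) + 1

# Thin adapters: the only per-device code that remains.
def process_mouse(x, y, button_down, button_up, world):
    process_pointer(PointerState(x, y, button_down, button_up), world)

def process_touch(x, y, phase, world):
    process_pointer(PointerState(x, y, phase == "began", phase == "ended"), world)
```

Each adapter just translates its device’s event shape into the shared one, so a bug fix in `process_pointer` lands for both mouse and touch at once.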

The next big piece to build before testing the UI on a device was an actual icon for my test plant. As much as I’d like to be, I’m not a great vector graphics artist, nor a pixel pusher. In fact, what little artistic talent I have is basically an emulation of styles and treatments I see elsewhere. So, for now, I decided to take the easy way (for me) and screenshot in-game assets, then tweak the screenshots into icons. It turns out this isn’t trivial on the free version of Unity, but as with most hard things in Unity, someone has already figured out how to do it. I found a sweet little photo-shoot scene someone had made for creating sprites from Unity animations (explosions, for instance), and whoever made it cunningly derives alpha values for the background from the difference between two renders of the scene, one over a white background and the other over black. It’s a very clever workaround, and it works great. From there it was just a few Photoshop filters away from a stand-in icon graphic.
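That alpha trick is worth writing down. If a pixel has color c and coverage a, compositing over black yields c·a while compositing over white yields c·a + (1 − a), so subtracting the two renders isolates the alpha. A per-channel sketch with values normalized to 0–1 (Python for clarity; this is my reconstruction of the idea, not the actual scene’s code):

```python
def recover(white_px, black_px):
    """Difference matting from two renders of the same pixel.

    over black:  b = c * a
    over white:  w = c * a + (1 - a)
    therefore:   a = 1 - (w - b),  and  c = b / a
    """
    a = 1.0 - (white_px - black_px)
    c = black_px / a if a > 0 else 0.0
    return c, a

# A half-transparent pixel of brightness 0.4:
# over black it reads 0.2, over white it reads 0.7.
c, a = recover(0.7, 0.2)
print(round(c, 6), round(a, 6))  # -> 0.4 0.5
```

Fully opaque pixels read identically over both backgrounds (w − b = 0, so a = 1), and pure background pixels differ by exactly 1 (a = 0), which is why the two-render approach cleanly separates the object from the backdrop.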

The last thing to do before testing on a device was downsampling all of the textures in the cartoon nature pack I got, to decrease file size and build time. Unfortunately, after toggling hundreds of import settings I realized that the textures generated for my trees were being compressed at a quality that is absolutely terrible for gradients. As you can see (below), the big vector leaves that make up the trees suffer badly from the compression. So, after figuring out exactly how the textures were being generated, toggling the compression import settings on the base files, toggling the same settings on the generated textures, and finally forcing Unity’s tree creator to regenerate the source assets for each cartoon tree, I was good to go. Yay!

To my delight, everything worked on the touch device flawlessly. Victory!

Next Up…

Hook in the UI to picking up crops
A proper inventory hooked up to the UI
Stacks of items
Design and implementation of UI with touch interaction
Re-evaluate the pick up/plant/interact-with controls. It’s still hard to do even after making the grid cells larger (and I may need to make them even larger for gameplay balance/looks)
Make the camera into a springy/tweeny camera so it’s not so rigid. Should be interesting.
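For the springy camera on that last item, the usual trick is exponential smoothing toward the target, folding the frame’s delta time into the blend factor so the feel is frame-rate independent. A hypothetical one-axis sketch (Python; in Unity this would run per-frame, e.g. in LateUpdate, and the stiffness value is made up):

```python
import math

def follow(cam, target, dt, stiffness=5.0):
    """Ease the camera toward the target instead of snapping to it.

    The exp() form makes the easing frame-rate independent: two small
    steps of dt land exactly where one step of 2*dt would.
    """
    t = 1.0 - math.exp(-stiffness * dt)
    return cam + (target - cam) * t
```

Called every frame, the camera covers a fixed fraction of the remaining distance per unit of time, which reads as a soft spring that never overshoots.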

This entry was posted in Townsend.
