Author: adam
I’ve been using this technique for a year or so and it’s awesome. Sadly it’s not something you’d ever find unless you knew to look for it, but I’d like more people to know and use this. It works beautifully for any situation where you have multiple lines of code that must stay together, but which have to remain separate – e.g. an API that requires you to call “manager.Begin();” … then your own code, then … “manager.End();”
We’re going to commandeer the “using” keyword for something it wasn’t originally designed for but which is a 100% legal use of it.
Have you ever needed to do this in C#?
In Unity there are some very common examples of this problem, most famously in the old GUI.* API (which is finally, slowly, being replaced by UIToolkit – but there’s a lot of GUI.* code still in live games).
[code language="csharp"]
GUILayout.BeginVertical();
GUILayout.Label( "Options" );
if( GUILayout.Button( "Option1" ) ) { Process1(); }
if( GUILayout.Button( "Option2" ) ) { Process2(); }
GUILayout.BeginHorizontal();
GUILayout.Space( 20 );
GUILayout.BeginHorizontal();
GUILayout.Label( "[advanced]" );
if( GUILayout.Button( "Option3" ) ) { Process3(); }
// ... and now I have to remember which order to EndHorizontal, EndVertical,
// and how many times for each
[/code]
This is a very simple example and yet already it’s both time-consuming to type all those GUILayout.EndHorizontal()/EndVertical() calls (the IDE cannot autocomplete them for you because it has no idea what you want here) and also highly error-prone.
Problems: Code together … but apart
It’s annoying trying to remember all the bits you’ve Begin’d but not yet End’d, but the real problem comes when you need to edit that code, inserting another piece of embedded horizontal layout halfway down.
Or when you copy/paste a chunk of it and try to re-use it elsewhere.
You might try to be smart and move this stuff to a method call, but … that quickly becomes painful for two reasons:
- You need to generate method calls and insert them in your source code based on context. You can do that, but now you’re having to pass around function pointers; it’s no longer a simple method.
- You don’t know which order they’ll happen in, or how many times, so now you also need to create a stack object to keep track of this. You also have to handle all the Exception cases, make sure the stack unwinds correctly, and … and …
Solution: IDisposable / using{}
C# has a nice little feature that fixes all of this. It converts my code above into:
[code language="csharp"]
using( new Vertical() )
{
	GUILayout.Label( "Options" );
	if( GUILayout.Button( "Option1" ) ) { Process1(); }
	if( GUILayout.Button( "Option2" ) ) { Process2(); }
	using( new Horizontal() )
	{
		GUILayout.Space( 20 );
		using( new Horizontal() )
		{
			GUILayout.Label( "[advanced]" );
			if( GUILayout.Button( "Option3" ) ) { Process3(); }
		}
	}
}
[/code]
Two things have happened here:
- Improvement 1: We never have to remember to End() anything!
- Improvement 2: All code is now surrounded in {braces} making it much easier to read, and to edit safely!
Both of them are side-effects of IDisposable/using.
How to implement it
You have to create a class that holds the magic code. In the GUILayout.BeginHorizontal example I made a class “Horizontal”. This class has to implement the standard IDisposable interface – there are plenty of docs online for how to do this, and it’s quite easy. Here’s a simple example:
[code language="csharp"]
class Horizontal : System.IDisposable
{
	public Horizontal()
	{
		GUILayout.BeginHorizontal();
	}

	public void Dispose() // but read WARNING below
	{
		GUILayout.EndHorizontal();
	}
}
[/code]
The way that “using” is implemented by Microsoft is:
- The object is created as normal (when you call ‘new’ in “using( new Horizontal() )”)
- The code in braces runs as normal
- When the close brace is reached (or an exception escapes the block), the compiler-generated cleanup code runs
- …which calls “Dispose” on the temporary object you created
- All of this is managed for you, and copes well with exceptions etc
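Under the hood, the compiler rewrites the using block into a try/finally. Here’s a minimal sketch of the equivalence – using a stand-in Scope class of my own instead of the Unity-specific Horizontal, so it runs anywhere:

```csharp
using System;
using System.Collections.Generic;

class Scope : IDisposable
{
    public static readonly List<string> Log = new List<string>();

    public Scope()        { Log.Add( "Begin" ); }
    public void Dispose() { Log.Add( "End" ); }
}

class Program
{
    public static void Main()
    {
        // What you write:
        using( new Scope() )
        {
            Scope.Log.Add( "Body" );
        }

        // ...is compiled to (approximately) this:
        Scope s = new Scope();
        try
        {
            Scope.Log.Add( "Body" );
        }
        finally
        {
            if( s != null ) s.Dispose();
        }

        Console.WriteLine( string.Join( ",", Scope.Log ) );
        // Begin,Body,End,Begin,Body,End
    }
}
```

The finally block is what guarantees End() runs no matter how the braces are exited – return, break, or exception.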
WARNING: the simple example isn’t quite correct
If you subclass your Horizontal object this may start to go wrong because of how Dispose works. This is documented on MS’s official pages, but the simple explanation is that “Dispose” can be called more than once on the same object (when they’re subclassed), so you need to add some code to ignore the extra calls. Here’s the classic example:
[code language="csharp"]
class Horizontal : System.IDisposable
{
	private bool _disposed;

	public Horizontal()
	{
		GUILayout.BeginHorizontal();
	}

	public void Dispose()
	{
		Dispose( true );
		GC.SuppressFinalize( this );
	}

	protected virtual void Dispose( bool disposing )
	{
		if( !_disposed )
		{
			if( disposing )
			{
				GUILayout.EndHorizontal();
			}
			// Indicate that the instance has been disposed.
			_disposed = true;
		}
	}
}
[/code]
Performance optimization: ZERO garbage-collection!
If you care about performance then you typically want to make your core methods non-allocating. Creating temporary objects like in the above examples has no effect most of the time – but if you ever use it on a core method that you call tens of thousands of times a frame (note: anything less than tens of thousands and you probably won’t notice, unless your game is very performance-heavy) it’ll start to create enough garbage that GC becomes a problem.
We can fix that, and get rid of the double-dispose problem, by converting it to a struct.
Unfortunately … C# doesn’t allow structs to have a custom zero-argument constructor (at least, not in the C# versions Unity supports), so we have to add at least one fake parameter to make this work. In most cases there’s an obvious parameter you can think of that improves your implementation, e.g. here I’ve added the same optional params that GUILayout.BeginHorizontal takes. (Beware: if you write “using( new Horizontal() )” with no arguments, the struct’s built-in default constructor runs instead, BeginHorizontal() is silently skipped, and your Begin/End calls become mismatched.)
[code language="csharp"]
struct Horizontal : System.IDisposable
{
	public Horizontal( params GUILayoutOption[] options )
	{
		GUILayout.BeginHorizontal( options );
	}

	public void Dispose()
	{
		GUILayout.EndHorizontal();
	}
}
[/code]
To save myself an hour next time I need to install JupyterLab (the latest 2020 version of Jupyter) here’s a step-by-step install from scratch on self-hosted AWS, warts-and-all. Key points:
- We want to use jupyter-lab instead of the legacy jupyter-notebook
- The notebooks/labs must be private, it’s absurd to think otherwise
- We want online / cloud access to our work
- We want the minimum of effort to install and get working with Jupyter
- We want the minimum complexity to maintain the installation
Core investigations + conclusions
Hosting: Use AWS
I researched many sources of 3rd-party hosting for Jupyter and they all … well, sucked. There was a great article on free hosting – but all of those forced your private data to be placed in public for anyone/everyone to take.
I looked at the ones that had private “upgrades” available from their free/public tier, but they came with intensely complicated procedures (e.g. you lose access to all your data) and confusing and difficult pricing (mostly designed for ML programmers, but irrelevant to people not doing ML).
Finally I went through the most widely-referenced “Jupyter hosting” companies I found via Google and forum posts and reddit etc etc. Some of these looked good, but nearly all of them were running either legacy Jupyter (which was superseded 2 years ago!) or their own custom “this isn’t jupyter, it’s a thing-that-is-a-bit-like-jupyter, missing core features, with our own proprietary changes”, which is undesirable. Poster-child there was Google who – yet again – created a pointlessly proprietary system that you’re locked into, and which has a high probability Google will shut it down and delete all your data with no upgrade option (as they are currently doing multiple times a year :)).
Eventually I came back to self-hosting: how hard would this be? There are multiple guides on this, both from 3rd parties (some of them broken/incorrect with key steps missing) and from the official Jupyter website. Custom instructions were provided for AWS, which was a good sign (along with GCE, Microsoft cloud, etc). Reading through the instructions, they were essentially:
“Steps 1-20: Set up a new AWS default cloud instance (identical to all AWS hosting).
Steps 21-25: Do a couple of Jupyter-specific post-install steps”
…with the vast majority of the setup work being AWS (which isn’t a great install process, but if you’ve used AWS you’ve already done this many many times and are comfortable with it), self-hosting seemed the best way forwards.
AWS: tiers and options
Confusingly, there are two branches of Jupyter self-hosting: Jupyter and JupyterHub. The latter is a multiple-users-working-on-one-machine setup, with each user having their own login username and personal password, with private or semi-private notebooks etc. For myself as a single user that was overkill – although the install instructions were even simpler than the main Jupyter (ironically).
None of the install guides were detailed (or useful) in their advice on picking an AWS instance, apart from the JupyterHub one. With some digging on reddit it turns out that 100MB or so of RAM should be more than enough for running notebooks, with “as much more as the size of data you intend to hold in RAM”. If you’re coding very lazily you might try loading massive source data into RAM but … we’re sensible programmers, we don’t do that.
Summary: The smallest AWS instance - t2.nano - works fine with 0.5GB RAM, and the (2020) minimum EBS disk of 8GB SSD.
(If you hit the RAM limit: spin down your AWS instance, replace it with a t2.micro, attach the EBS to the new AWS, then spin it up again. This is exactly what cloud hosting was invented for! It’s easy, so no need to worry about it here)
UPDATE:
Jupyter doesn’t need much RAM.
But node.js – which a lot of UI-related Jupyter plugins rely on – requires gigabytes of RAM to work at all, and it crashes with useless error messages rather than handle its own errors.
It’s quite eye-opening how bad node.js code is. (I believe this isn’t node.js itself; rather it’s the ecosystem and habits of people who write node.js apps – there’s nothing wrong with their choices, but they’ve prioritised embedding other people’s code they don’t understand (and mostly don’t need) rather than spending a few minutes writing simple code themselves.) The net result is that if you want to install (or even reconfigure :() any of the JavaScript-related plugins (which is all of the UI ones) then you’ll need to temporarily boost your AWS instance to a larger size each time, and then downgrade it afterwards. 2GB RAM is recommended for node.js. To be clear: for actually using Jupyter, 200MB is more than enough – i.e. node.js alone requires 10x as much RAM as the entire system you’re running!
AWS install summary
If you’ve created EC2 instances before, the short version here is:
- A small EC2 instance
- Running standard ubuntu (18.04 or 20.04 as of this writing)
- A security-group which unblocks SSH (for manual install tasks) + one port (for web access)
Traditionally a lot of people unblock port 8888 but there appears to be no reason for this other than installation being done by people who don’t really know what they’re doing with web server configs. A bit like the node.js setup – people copying random pieces of instructions without actually reading them. In months of usage, I’ve found no ports need to be unblocked (why create a security hole when you don’t need to?).
Jupyter initial install
Installing Jupyter core is straightforward … and broken in ubuntu 18.04 (the main ubuntu release when I did this in early 2020, unless you’ve upgraded to 20.04 already). By default it will fail to install – this appears to involve known bugs, but the workarounds are so quick to do that no-one is in a rush to fix it. I’m assuming that in ubuntu 20.04 it’s been fixed.
Step 1: Update Ubuntu
Do your apt updates etc. (I prefer to use aptitude for all apt management so I can see what’s happening and make more informed choices about versions etc – for me it’s hitting ‘u’ and letting aptitude do it automatically).
Step 2: install jupyter
The apt is named ‘jupyter’ and should automatically bring in all the required modules, python3, pip, etc. It should also setup a basic install of jupyter with Python notebooks enabled etc. The magic of debian/apt!
Step 3: install the python parts of Jupyter (specifically: JupyterLab)
Here’s where it goes python-y and stops working. Unfortunately Ubuntu does not (yet) have an apt for JupyterLab that actually works – and it has to be installed over the top of a legacy Jupyter installation (that we already have thanks to parts 1 and 2).
What follows is all standard for python developers, nothing new. But if you’re not a python developer … You have to “install” jupyterlab by using python’s in-built self-management systems, which aren’t as good as the OS ones. No more apt for you :(. For extra pain: python wants you to jump through hoops to keep multiple copies of itself on the system – because the packages aren’t managed well enough for the OS to use python and for you to use it at the same time.
In my case: I have a dedicated server that is doing nothing but running Jupyter, and I have no intention of upgrading python on one without upgrading the other. I was happy to use a single Python install, and avoid a lot of problems and confusion. Worst case, if one does something weird in a future release I’ll simply wipe the server and re-install – we’re in the decades of commodity cloud computing, and re-installing an OS takes seconds, not hours.
In theory (from the docs), you run:
pip3 install jupyterlab
In practice, that:
- Installs it in a hidden folder buried inside the home-directory of the current user
- Fails to install it correctly: it cannot work
- Leaves a different version of Jupyter on the system that is missing the new features
FAIL. Maybe related to the expectation that you have multiple Python installs – but I wasn’t going to mess around with that added complexity only to discover that it was broken anyway (if it doesn’t work out of the box, I don’t trust it to work out of a more complex box…).
The nearest I found to an official workaround appears to be: update your linux user’s PATH to make it preferentially run the hidden-secret-silently-mis-installed version of Jupyter. Do that and everything appears to work fine. Or … just remember to always type
/home/ubuntu/.local/bin/jupyter
everywhere that you would normally have typed:
jupyter
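A slightly less tedious version of that workaround is to put pip’s user-install directory at the front of your PATH, so the plain `jupyter` command finds the right copy. A minimal sketch, assuming the default `ubuntu` user and pip’s default `~/.local/bin` install location:

```shell
# Make pip's user-installed binaries take precedence over any system copies
export PATH="$HOME/.local/bin:$PATH"

# Persist the change for future logins
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
```

After this, typing `jupyter` anywhere runs the pip-installed version instead of the apt one.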
Step 4: configure jupyter to work correctly
Out of the box Jupyter won’t work on a server: it’s been deliberately designed to fail even if you installed it correctly (this is not a bad thing: it’s designed to be idiot-proof for people who install it on their personal laptop). To make it run as a server you can either waste time messing around with ssh-tunneling (why? WHY??!! Why have so many online guides told people to do this? Blind leading the blind, it seems…).
…Or: you can simply enable the server mode :), which works fine and is easier to setup (and cleaner).
But at first: you cannot do that. Out of the box jupyter can’t even be configured: it’s missing its own config file. Fortunately it has a ‘feature’ where you can get it to auto-create the config file (why it doesn’t do this as part of installation I have no idea), but it’ll be placed somewhere super-annoying:
jupyter notebook --generate-config
…and helpfully it’ll immediately tell you where it created it, probably:
/home/ubuntu/.jupyter/jupyter_notebook_config.py
As per this stackoverflow answer (https://stackoverflow.com/a/43500232/153422) you only need to change two lines to enable server mode. One line allows server access, and the other says “listen on all IP addresses”. You can find the commented-out lines in the file and uncomment them + change their values, or you can just copy/paste the values from that SO answer into the bottom of the file.
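For reference, the two settings in question look like this at the time of writing – but double-check against the linked answer itself, since it may have been updated:

```python
# In /home/ubuntu/.jupyter/jupyter_notebook_config.py
# (values as given in the linked StackOverflow answer -- verify against it)
c.NotebookApp.allow_origin = '*'   # allow server access
c.NotebookApp.ip = '0.0.0.0'       # listen on all IP addresses
```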
While you’re there, it’s worth changing the default values / inserting:
c.NotebookApp.open_browser = False
c.NotebookApp.port = 8888  # or whatever port you unblocked in your AWS security group
Step 5: run Jupyter-lab and login for the first time
To run jupyter-lab you need something like this:
~/.local/bin/jupyter-lab
Three things now happen:
- Jupyter is up and running and spits out lots of info to the command-line. Check it for errors – there shouldn’t be any
- It specifically tells you which IP addresses it’s listening on.
- It gives you a magic, temporary, token to login with.
The IP address list is wrong, but … at least it will show you that it’s listening to more than just localhost and 127.0.0.1 (if it’s only listening to them then you failed to edit the config properly).
It tries to guess the info using OS lookups, but they did it naively and they did it wrong: they will fetch private addresses that don’t exist, instead of the public ones. But if you’ve used AWS before you know how to get the public IP and/or public DNS from your EC2 management console (in AWS management console, select the EC2 instance, and click on the “Connect” button and you get a popup telling you exactly how).
So use the correct IP/DNS, go directly to that server / port address (ignore the ?token rubbish that the jupyter commandline app wanted you to use). You’ll get a login page where you can copy/paste the token from the commandline output and immediately login.
Or, better: jupyter’s login page helpfully gives you the option here to use the temporary token to generate a password you can use in future instead.
Step 6: Switch to TLS/HTTPS (make the web-browser connection secure)
Don’t self-sign, self-signing is being aggressively blocked by web-browser vendors (Google, Mozilla, Apple, etc) – again: ignore the bad advice and articles written by Jupyter users who don’t know what they’re doing. Instead use the free LetsEncrypt service for industry-standard automatically renewing signed certificates that you can fire-and-forget.
Follow the jupyter main docs directly:
https://jupyter-notebook.readthedocs.io/en/stable/public_server.html#using-lets-encrypt
Step 7 (final!): make Jupyter run as a service
This part shows how poorly this python app is integrated with the host OS – Jupyter doesn’t run as a service. You have to manually convert it into one.
Most people do this the brutally simple way: either run the command line with ampersand (i.e. linux’s “run-in-background but if you lose your ssh connection it might die”) or run it with ‘screen’ (linux’s more advanced multi-tasking app that makes it easy to re-access later and will survive even if you lose SSH connection or logout of ubuntu).
Much much better would be to convert it into a full ubuntu service – I googled the latest recommended instructions for this and followed them, but sadly I didn’t save the URL. It depends slightly whether you used new-style ubuntu (with systemd) or legacy (init) – but creating new ubuntu services is very common and quite easy to do so I’ll leave that for you to Google and find your preferred approach :).
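For the systemd route, a minimal unit file looks something like the sketch below. The paths assume the default `ubuntu` user and the `~/.local/bin` install location from earlier; the service name `jupyter.service` and its description are my own choices, not anything official:

```shell
# Create the unit file (run as root / with sudo)
cat > /etc/systemd/system/jupyter.service <<'EOF'
[Unit]
Description=JupyterLab
After=network.target

[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu
ExecStart=/home/ubuntu/.local/bin/jupyter-lab
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Reload systemd, then enable at boot and start immediately
systemctl daemon-reload
systemctl enable --now jupyter
```

`Restart=on-failure` gets you the main benefit over `&` or `screen`: if Jupyter crashes, systemd restarts it without you SSHing in.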
I’ve always found ContentSizeFitter a source of great hope … and bitter disappointment: often it’s the only way to “solve” a problem in UnityUI, but half the time when you try it, it messes up your UI and breaks the UnityEditor Undo function, and you have to delete your UI elements and rebuild them from scratch :(.
Unity’s own staff have pointed out on the forums that it only works in limited scenarios (some of what we think of as bugs they’ve explained as side-effects of Unity’s built-in layout algorithms, which make it “impossible” to expand content to fit – that’s not entirely true, since Flexbox4Unity is able to do all of these while embedded inside the Unity system :), but I agree the core Unity layout algorithm isn’t great).
CSS/Flexbox has a much better layout algorithm (and a free, open-source specification that anyone can implement). In the Flexbox4Unity plugin this works nicely, and auto-resizing / size-to-fit UIScrollviews just happen “for free”, every time, no problems. I’ve posted a guide for setting up UIScrollviews this way inside Unity: http://flexbox4unity.com/2020/04/08/guide-automatically-resize-uiscrollview/
My simple OpenOffice spreadsheet for tracking Unity Asset Store revenues per-asset.
http://t-machine.org/wp-content/uploads/UnityAssetStoreRevenues-Template.ods
Usage/setup instructions included on the first page (scroll down).
To fill in the data, I use a script that scrapes the Unity publisher portal. But for short periods of time, you can easily fill this in by hand. My script needs some cleanup before I share it – some recent changes to the Unity webpage broke it and I haven’t updated it yet.
Unity Terrain is a good Terrain renderer, but the APIs behind it are famously badly documented and rather clunky (most of the documentation still hasn’t been written, almost 10 years after it was launched). At Unite this year they were showing off some of the “new Terrain” features/tools, all of which were aimed at artists, and they look great.
But what about the Terrain itself? Unity 2019.2 and 2019.3 still have the ponderous old API and it seems we’re stuck with it for at least another few years. Today I found and fixed an issue in my custom Terrain-tools that took our Editor rendering from < 4 FPS back to normal realtime speeds.
The secret is to use Unity’s required float[,,] arrays (which C# only partially supports) but plug the gap that makes them slow when interacting with Unity’s Serializer (which fires 6 times per frame in the Editor, magnifying any slowdown considerably!)
NB: As far as I can tell, you cannot fix the Unity “serialize 6 times even if it’s not needed, where only 1 would have been fine” issue, because the methods to do that only exist on Custom EditorWindows, and not on Custom Inspectors. But it’s bad practice to be slowing-down the serializer anyway, so I’m happy with fixing MY code to run fast, and then stop worrying about the Unity Serialization layers being inefficient.
The problem: float[,,] isn’t supported by Unity
Unity requires you to use float[,,] for textures/splats/alphamaps on their terrain.
However, Unity has never supported multi-dimensional arrays in their engine (this is finally getting fixed sometime in 2020, I believe, with the new Serializer). So your data gets wiped every frame. That’s a pain when making Terrain-editing scripts.
The workaround is to implement Unity’s ISerializationCallbackReceiver interface, and provide the missing code that Unity doesn’t (i.e. serialize a float[,,]). The standard way of doing this is something like:
NB: I’m only showing half of the serialize/deserialize here, just to illustrate the point
[code language="csharp"]
void ISerializationCallbackReceiver.OnAfterDeserialize()
{
	deltas = new float[_Serialize_2D_Length0, _Serialize_2D_Length1, _Serialize_2D_Length2];

	/** NB: iterate in C#'s internal storage order for [,,] */
	for( int i0 = 0; i0 < _Serialize_2D_Length0; i0++ )
		for( int i1 = 0; i1 < _Serialize_2D_Length1; i1++ )
			for( int i2 = 0; i2 < _Serialize_2D_Length2; i2++ )
				deltas[i0,i1,i2] = _Serialize_1DArray[i0 * _Serialize_2D_Length1 * _Serialize_2D_Length2
				                                      + i1 * _Serialize_2D_Length2
				                                      + i2];
}
[/code]
…which retrieves every cell in the float[,,] from a cell in a private float[] (which Unity DOES support and will auto-serialize for you).
The problem is that C# for-loops are extremely slow when used like this, simply because of the scale of the operation. For a typical Unity terrain, you’re copying up to 4096 x 4096 samples (your splatmap) with anywhere from 5 to 10 values for each. Each value is a 4-byte 32-bit float.
i.e. 4k x 4k x 10 x 4 == 640 MB of data
…which destroys your 100+ FPS frame-time, taking it to 1 FPS or worse.
You need to copy this data in a single call, not one cell at a time across roughly 170 million separate loop iterations.
But … how?
Array.Copy() to the rescue!
It doesn’t work. You can compile your C# class, and then the C# runtime will cry when you try to execute it:
RankException: Only single dimension arrays are supported here.
Bummer. In theory, Array.Copy() would have solved the problem – this is literally what it was designed for: bulk copying of large arrays without the overhead of doing millions of tiny copy-calls.
Try again … Buffer.BlockCopy()
Fortunately there’s another method in C# core that steps-in and saves us. I often find that when C# ties your hands behind your back, the reason it hasn’t been changed/updated/improved is that there’s a lesser-known behind-the-scenes low-level method that you can (ab)use to achieve what you need, and the language maintainers recommend you do that instead of them updating the mainstream stuff. Fair enough!
The one caveat with BlockCopy is that you need to tell it the size in bytes that you’re copying NOT the number of array items.
i.e.: [code language="csharp"]Array.Copy( from, 0, to, 0, length );[/code]
becomes: [code language="csharp"]Buffer.BlockCopy( from, 0, to, 0, 4 * length ); // if copying floats, ints, or any other 32-bit primitives[/code]
20x faster Terrain data handling
The modified serialization callback becomes:
[code language="csharp"]
void ISerializationCallbackReceiver.OnAfterDeserialize()
{
	deltas = new float[_Serialize_2D_Length0, _Serialize_2D_Length1, _Serialize_2D_Length2];

	int bytesPerFloat = sizeof(float); // == 4
	Buffer.BlockCopy( _Serialize_1DArray, 0, deltas, 0,
		bytesPerFloat * deltas.GetLength( 0 ) * deltas.GetLength( 1 ) * deltas.GetLength( 2 ) );
}
[/code]
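As noted above, only the deserialize half is shown. For completeness, here’s a self-contained sketch of both directions – plain C# with no Unity dependency, so it runs anywhere. The class and method names are my own stand-ins for the Unity callbacks; only the `_Serialize_*` field names follow the article:

```csharp
using System;

class Float3DSerializer
{
    // Backing fields Unity CAN auto-serialize (flat float[] plus the dimensions)
    float[] _Serialize_1DArray;
    int _Serialize_2D_Length0, _Serialize_2D_Length1, _Serialize_2D_Length2;

    // The runtime data Unity cannot serialize
    public float[,,] deltas;

    // Equivalent of ISerializationCallbackReceiver.OnBeforeSerialize()
    public void Flatten()
    {
        _Serialize_2D_Length0 = deltas.GetLength( 0 );
        _Serialize_2D_Length1 = deltas.GetLength( 1 );
        _Serialize_2D_Length2 = deltas.GetLength( 2 );

        _Serialize_1DArray = new float[deltas.Length];
        // Bulk byte-copy: float[,,] is stored contiguously, so this is one call
        Buffer.BlockCopy( deltas, 0, _Serialize_1DArray, 0, sizeof(float) * deltas.Length );
    }

    // Equivalent of ISerializationCallbackReceiver.OnAfterDeserialize()
    public void Unflatten()
    {
        deltas = new float[_Serialize_2D_Length0, _Serialize_2D_Length1, _Serialize_2D_Length2];
        Buffer.BlockCopy( _Serialize_1DArray, 0, deltas, 0, sizeof(float) * deltas.Length );
    }
}
```

The roundtrip preserves every cell because C# lays out a float[,,] row-major and contiguous, which is exactly what Buffer.BlockCopy assumes.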
One of Unity3D’s greatest successes has been the Asset Store. Bursting to life 9 years ago, initially sounding a lot like an optimistic clone of Apple’s 2-year-old App Store (and boasting the same 70%/30% revenue share), it turned out to be so much more.
But Unity still struggles to figure out what it should look like and how it should work for users/purchasers. The raw content is the biggest determinant of the store’s success, but closely followed by the browsing and discovery experience – which have hardly improved at all (and in some ways have gone backwards) over this past decade.
Based on hundreds of purchases, and having launched and maintained some small assets on the store myself over the past 5 years, here’s what I’d like to see now.
The A-test for Unity Assets
Every asset-purchase page should have a section that answers the critical, machine-answerable, fully 100% automatable questions that matter to purchasers. There is no excuse to miss this out – these make a huge difference both to users, and to authors, and to Unity itself: they massively reduce the amount of refund requests, and increase the purchase volume due to increased buyer-confidence.
How well do your assets score on these? If you’re an author, do you publish all this information up-front? (Some do – their Full Description on the asset page is long and scrolly.)
Art Assets
- Min/avg/max verts per model in package
- Top 3 shaders in package, with number of models that use each
- Min/avg/max texture sizes in package
- Total number of materials with unassigned textures/colors vs fully assigned
- Total number of materials using Standard shader with Albedo, Metallic, Normal, Roughness maps assigned
- Total number of models with LODs vs number of models without LODs
- Min/Max LOD levels for models with at least one LOD
- Number of prefabs in package that have same prefix-name as an FBX/model file
Code Assets
- Number of files that include source code (C#) vs number without source code
- Number of (Unity official) Unit-tests in package
- (one line for each Unity version): Num Errors, Num Warnings, when installing the project
- (one line for each Unity version): Num Errors, Num Warnings, when opening the marked demo-scene
- (one line for each Unity version): Num Errors, Num Warnings, when pressing play in the marked demo-scene
- (one line for each Unity version): Number of Unit tests passed, number failed
All Assets
- Number of demo scenes in package
- PDF documentation in package
- Time since Author’s last edit of package (upload)
- Time since Author’s last discussion of package (meta files + comment threads)
Self-reported / author-tagged info
All the above is easily automatable by Unity – I’ve written scripts myself that do all of them, apart from the “issue reports”, which Unity keeps private!
What happens when we extend the Asset Store Publisher Upload tool to let authors add extra info that Unity can then collate and publish? Well…
Art Assets – author controlled
- Render-pipelines supported: Default?, URP?, HDRP? (tickboxes)
- Player platforms compatible: Windows, PS, XB, VR, iOS, Android, WebGL (tickboxes: “compatible” means “author expects it to work, but isn’t actively testing that platform – it may not work out of the box”)
- Player platforms supported: Windows, PS, XB, VR, iOS, Android, WebGL (tickboxes: “supported” means “author actively tests on this platform and promises it will work out of the box (or fixed very rapidly)”)
Nice-to-have’s only Unity can provide
- Ask-the-author question box on purchase page (SO MANY TIMES people need to check if an asset supports X or Y, or ask the author if they include something, and Unity still provides no way for the purchaser to do this)
…with answered questions automatically appearing for all to see (just like Amazon has done for the past 15+ years)
- Number of reported issues per Unity version (OR: star-rating per unity version)
- Number of refund requests (successful + unsuccessful) by Unity version and/or package version
2020 future-looking awesome “make everyone rich” features
- When author uploads the asset, choose a Demo scene that will be auto-built as a WebGL build and embedded in Asset Store page (if build succeeds)
I witnessed three basic flaws in the latest Android this week – all of them redolent of bad UX design on Google’s part, and surprising in an almost-10-year-old OS (none of them are new features).
So … you may notice the site disappeared for some time, and now it’s back all images are missing. This is down to three things:
This once-obscure method – which, I guess, is the low-level call used by most of the new Unity GUI – is now the only way of drawing meshes in GUIs. The previous options have been removed, with brief comments telling you to use .SetMesh instead. Half of the DrawMesh / DrawMeshNow methods have also been removed (no explanation given in the docs), and those were my other go-to approach.
Unfortunately, no-one has documented SetMesh, and it has significant bugs, and breaks with core Unity conventions. This makes it rather difficult to use. Here’s the docs I’ve worked out by trial and error…
Shawn asked on Twitter:
If there were one internal class/method/field in the UnityEditor namespace you would want exposed properly, what would it be? #unity3d
— Shawn White (@ShawnWhite) June 29, 2016
We only get to pick ONE? :).
How do we decide?
There are two ways to slice this. There’s “private APIs that would hugely benefit us / our projects / everyone’s projects / 3rd-party code that I use (e.g. other people’s Asset Store plugins I buy/use)”. We can judge that largely by looking at what private APIs I’ve hacked access to, or decompiled, or rewritten from scratch.
Then there’s “what CAN’T I access / hack / replace?”. That’s a harder question, but leads to the truly massive wins, I suspect.
Stuff I’ve hacked access to
The Project/Hierarchy/Scene/Inspector panels
So, for instance, I made this (free) little editor extension that lets you create new things (scripts, materials, … folders) from the keyboard, instead of having to click tiny buttons every time.
There are no public APIs for this; that’s a tragedy. Most of these Unity panels haven’t been improved for many years, and are a long way behind the standard set by Unity’s other improvements. They “work”, but don’t “shine”.
What could I do with this?
Well … a few studios I know have completely rewritten the Scene Hierarchy panel, so that:
- it does colour-coding of the names of each gameobject
- clicking a prefab selects both the prefab and any related prefabs, or vice versa, or highlights them
- added (obvious) new right-click options that are missing from default Unity Editor
- automated some of the major problems in Unity’s idea of “parenting” (parenting isn’t always safe to do; you can enforce / protect this with a custom scene hierarchy)
- made it put an “error” icon next to each gameobject that is affected by a current error.
- …etc
All massively useful stuff that helps hour-to-hour development, reducing dev time and cost.
It’s all “possible” right now by writing lots of horribly ugly and long-winded boilerplate code, and using the antiquated Editor GUI API.
But to make it play nicely with the rest of Unity, you also have to hack Unity’s APIs for the various panels/windows, detect popups (and add your own popup classes, since Unity keeps most of theirs private), detect drags that started in one panel but moved to another, detect context-sensitive stuff that isn’t exposed by the current APIs … etc.
A better List editor
The built-in sub-editor (like a PropertyDrawer – see below) is very basic – really a “version 0.1” interface.
There is a much nicer one that does what most Unity developers need – but it’s private and buggy (last time I tried, it corrupted the underlying data; that’s presumably why it’s still private?).
@ShawnWhite The re-orderable, nicely rendered, editable List. (but @AngryAnt's is definitely a top-3 too)
— Adam Martin (@t_machine_org) June 30, 2016
Editor co-routines
Co-routines work perfectly in the Editor. (EDIT: thanks to ShawnWhite for the info: Unity doesn’t use co-routines outside of runtime; what appears to use them is OS-provided multi-threading.) Strangely, when using that, I haven’t seen any of the ERRORs that Unity usually triggers when you access its non-threadsafe code from other threads – something weird happening in the OS?
Why doesn’t Unity support co-routines in the Editor?
I’ve no idea. Many people have re-implemented co-routines in the Editor, exactly as per Unity’s runtime co-routines. As a bonus, you end up with a much better co-routine: you can fix the missing features. But there are some strange edge-cases, e.g. when Unity is reloading assemblies (which it does every time you save any source file): for a few seconds it presents a corrupt data view to any running code, and if you start running a co-routine in that time, it will do some very odd things.
Unity recently exposed some APIs to detect whether it’s in the middle of those reloads, but last time I tried I couldn’t avoid them 100% reliably. An official implementation of Unity’s own co-routine code, automatically paused by Unity’s own reload-script code, would neatly fix this.
Until we have something like that, we’re forced to write two copies of every algorithm (C# doesn’t allow co-routine code to be run as a non-co-routine) so we can test in the Editor, do level editing, debug and improve runtime features, etc … which is silly.
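To illustrate the kind of API that would remove that duplication, here’s a minimal sketch – in Python, whose generators happen to allow exactly this; the names are mine, not Unity’s. One routine can be stepped frame-by-frame like a co-routine, or drained synchronously for Editor/testing use:

```python
# Sketch of the pattern the duplication problem calls for: one generator-based
# routine that can run frame-by-frame (co-routine style) OR to completion at once.
def fade_out(steps):
    """Fade an alpha value from 1.0 to 0.0 over `steps` frames."""
    alpha = 1.0
    for _ in range(steps):
        alpha -= 1.0 / steps
        yield alpha  # in a co-routine, yielding means "wait one frame"

def run_synchronously(routine):
    """Drain a routine immediately -- the 'Editor / testing' mode."""
    result = None
    for result in routine:
        pass
    return result

# Frame-by-frame: step it once per editor update...
stepper = fade_out(4)
first_frame = next(stepper)   # 0.75
# ...or all at once, e.g. inside a unit test:
final = run_synchronously(fade_out(4))   # 0.0
```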
Stuff I CANNOT hack into/around
Serialization
Unity is the only engine I’ve worked with where the core data structures and transformations are opaque, hidden, can’t be extended, and can’t be debugged. Tragically, it also has many missing features, bugs, and serious performance issues.
There are good reasons why this remains in such a bad state (it’s hard to fix; meanwhile, it sort-of works – enough to write games in – you just have to occasionally write a lot of bad code, rewrite some ported libraries, know a lot of Unity-specific voodoo, etc).
But if it were exposed, we could fix most of the problems (I would start on it tomorrow!). I’ve done proof-of-concepts with some terrifying hackery that show it’s possible – and much of the architecture is already well explored; there are alternative implementations that could be given to developers as options (some would work better for your game than others, but you could pick and choose).
It’s too much to ask for (it intersects so much of the engine, and it would unleash a horror of potential bugs and crashes), but my number 1:
@AngryAnt @ShawnWhite **No. 1**: (but I know we won't get it ;)) Serialization. No more opaqued, invisible, non-extensible, broken types…
— Adam Martin (@t_machine_org) June 30, 2016
Callbacks for ALL core Unity methods
This sounds small but would have a positive impact on a lot of projects.
c.f. my reverse-engineered callback diagram for EditorWindow in Unity:
…but we have the same problems for MonoBehaviour, for GameObject, etc. Not only are lifecycles poorly documented, they’re inconsistent, and – in multiple places (cf. the diagram above, “Open Different Scene”) – they’re not even deterministic! It’s random which methods the Editor will call at all, let alone when.
Under the hood there must be reliable points for doing these callbacks … somewhere.
Undo
Undo has never worked properly in Unity. For the worst cases I narrowed things down to ultra-simple demos showing that Unity’s own code was broken; I logged bugs and Unity fixed them – but the current system is a horrible mess, much too hard to use. Many methods only randomly do what they’re supposed to, and there’s no way to debug it, because the internals are hidden.
If Unity exposed the actual, genuine, underlying state-change points, we could correctly implement editor extensions that support Undo 100%. I’d be happy to also use them to write an Asset that implements “easy to use Undo”, based on how other platforms have implemented it (e.g. Apple’s design of NSDocument is pretty clear and sensible, based on lists of Change Events).
Unity could then make “Undo that works” a mandatory requirement on the Asset Store. Currently it’s listed as mandatory, but (so far as I can tell) no Asset has ever been checked for it – not least because Unity’s own code has had such problems supporting it!
PropertyDrawer: doesn’t quite do what it claims to (yet)
Recall what I said above: most of the Editor GUI/UX itself “hasn’t been improved for many years”. Unity made it user-extensible/replaceable many years ago – so in theory you could update / replace whatever you want. There’s a huge amount we’ve been able to update and customise (it’s very expensive in coding time, due to the lack of modern GUI APIs, but sometimes it’s well worth it).
But you can only replace the Inspector for a particular Component/MonoBehaviour. You cannot say “I want to replace the Inspector for GameObjects that have Components X, Y, Z”.
Worse, if you wanted to replace e.g. the part of the Inspector that automatically draws a Vector … you can’t.
Unity had a great idea to solve one of these: Property Drawers. These would let you customise the rendering of sub-parts of an Inspector – the rendering of individual labels for member variables, list items etc.
IN THEORY this would let you write your own list-renderer that would work everywhere, and make lists very easy to use in the Editor – but only write the code once.
IN PRACTICE it was only implemented in a very basic way, and most of the things you want to use it for are blocked / inactive. There is NO WAY to fix this in user code.
(well, actually there is … c.f. . But it’s a horrendous amount of work – its author performed a Herculean task! – and it means you’ll never get the benefit of future Unity UX / GUI updates, if there are any).
So: a big upvote for exposing more of PropertyDrawer.
@ShawnWhite Inheriting / extending built-in editors and drawers.
— Emil Johansen (@AngryAnt) June 29, 2016
WordPress had a critical update recently, and I got tonnes of emails (one from each blog I run) demanding I upgrade NOW. So I did, and upgraded Apache to latest while I was at it.
Oh dear. All sites offline. First:
Unable to connect
…then, when I fixed Apache, I got:
“Your PHP installation appears to be missing the MySQL extension which is required by WordPress.”
What happened, and how do I fix it?
Apache 2.4 upgrade is a bit dodgy in Debian
The Powers That Be decided to mess around with core parts of the config files. The right thing to do would have been to add an interactive step to the upgrade script that said: “By the way, I’ve broken all your websites and made them inaccessible, because they need to be in a new subfolder. Shall I move them for you?”
Here’s the reason and the quick-fix too
Apache 2.4 brings in PHP 7.0, replacing PHP 5
PHP 5 is old, very old. Historically, PHP has also been managed in a fairly shoddy manner, very cavalier with regards to upgrades, compatibility, safety, security.
So … the standard way to run PHP is to have a separate folder on your server for each “version” of PHP. Everyone does this; PHP is so crappy that you have little alternative.
But this also means that when Debian “upgrades” to PHP 7, there is no warning that a new config file – specific to PHP 7 – has been created, and that it ignores your existing config file.
This is wrong in all ways, but it’s forced upon Linux users by the crapness of PHP. If PHP weren’t so crap, we’d have a single global PHP config file – /etc/php/config.ini – and maybe small override files per version. But nooooooo – can’t do that! PHP is far too crap.
(did I say PHP is crap yet? Decent language, great for what it was meant for – but the (mis)management over the years is truckloads of #facepalm)
So, instead, you need to copy your PHP 5 ini over the top of your PHP 7 ini – or at least “diff” them and find the settings that are “off by default” in PHP 7 but must be “on” … e.g. MySQL!
Enable them, e.g. change this:
[bash]
;extension=php_mysqli.dll
[/bash]
to this:
[bash]
extension=php_mysqli.dll
[/bash]
…and restart Apache. Suddenly WordPress is back online!
[bash]
/etc/init.d/apache2 restart
[/bash]
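If you have several extensions to re-enable, the uncomment step can be scripted. Here’s a minimal sketch (the helper name is mine, and the real php.ini lives somewhere under /etc/ – the exact path varies by distro, so I operate on text rather than a hard-coded file):

```python
# Uncomment a ";extension=..." line in php.ini-style text.
def enable_extension(ini_text, name):
    out = []
    for line in ini_text.splitlines():
        # php.ini comments-out a directive by prefixing it with a semicolon
        if line.strip() == ";extension=" + name:
            line = "extension=" + name
        out.append(line)
    return "\n".join(out)

sample = ";extension=php_mysqli.dll\n;extension=php_curl.dll"
print(enable_extension(sample, "php_mysqli.dll"))
# mysqli is enabled; the curl line is left commented
```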
Instructions:
- Copy/paste this into your functions.php (TODO: convert it to a standalone php file, and make it into a plugin you can activate/deactivate)
- Create a new menu item of type “custom URL”
- Make your URL “http://#latestpost:category_name”
- where “category_name” is the name of the category whose latest post you want to link to
- Make the name whatever you want to appear on the menu
- Profit!
Based on an idea (with some upgrading + bugfixes for latest WordPress in 2016) from http://www.viper007bond.com/2011/09/20/code-snippet-add-a-link-to-latest-post-to-wordpress-nav-menu/
[php]
/** Adam: add support for putting 'latest post in category X' to menu: */

// Front end only, don't hack on the settings page
if ( ! is_admin() ) {
	// Hook in early to modify the menu
	// This is before the CSS "selected" classes are calculated
	add_filter( 'wp_get_nav_menu_items', 'replace_placeholder_nav_menu_item_with_latest_post', 10, 3 );
}

// Replaces a custom URL placeholder with the URL to the latest post
function replace_placeholder_nav_menu_item_with_latest_post( $items, $menu, $args ) {
	$key = 'http://#latestpost:';

	// Loop through the menu items looking for placeholder(s)
	foreach ( $items as $item ) {
		// Is this the placeholder we're looking for?
		if ( 0 === strpos( $item->url, $key ) ) {
			$catname = substr( $item->url, strlen( $key ) );

			// Get the latest post in that category
			$latestpost = get_posts( array(
				'posts_per_page' => 1,
				'category_name'  => $catname,
			) );

			if ( empty( $latestpost ) )
				continue;

			// Replace the placeholder with the real URL
			$item->url = get_permalink( $latestpost[0]->ID );
		}
	}

	// Return the modified (or maybe unmodified) menu items array
	return $items;
}
[/php]
Master of Mana was a great game – much better than Civ5, and from what we’ve seen of Civ6, Firaxis is still playing catch-up in a few areas :).
The author has disappeared, and his website has been taken over by scammers (I’m not even going to link it), but the community has kept the SourceForge-hosted copy of the source going and continues to update it. The files are organised confusingly (a legacy of previous projects, and of Civ4 itself, which shipped mainly as a commercial game, not as a moddable one!). Here are a few key links to interesting / useful game-design gems:
- Folder with the XML files from community’s updated Civs, Techs, Units — all the game stats
- Folder with all the C/C++ source code (only needed for major game-changing mod features)
- Folder with most/all the custom Python scripts for the MoM civilizations (i.e. where 95% of Master of Mana is implemented)
- Base definitions of all the customized Civilizations (Their starting techs, their heroes, the units they start with, which units they can/cannot build or have special versions of, etc)
- Definitions of all the special buildable city-buildings in the game
(Side-by-side comparison images: centers of tiles vs. edges of tiles.)
Pros and cons
- Centers gives you STRAIGHT things (on a hex grid, it’s the only way to get straights!)
- Roman Roads
- Canals
- Large rivers
- Edges gives you meandering things (on a hex grid, centers only give wiggles at very large scale)
- River valleys
- Realistic medieval roads
- Modern roads in mountains and hills (tend to wiggle crazily)
- Movement is simplified with centers: If you’re on the tile, you’re on the road/river
- Inhibition of movement is simplified with edges: Civilization games have traditionally given a move penalty AND a combat penalty to any tile-to-tile move that crosses an edge containing a river
My leanings…
One thing in particular that struck me from looking at the pictures:
Straight roads look so terrible that every single Civilization game since Civ1 has artificially wiggled them when rendering!
In particular, with 3D games (Civ4, Civ5 especially) this actively damages gameplay – it’s much too hard for the player to see at a glance which tiles are connected by roads, and to what extent. So much so that they cry-out for a “disable the wiggling effect on road-rendering” setting.
Also: I’m happy to solve the “movement” problem by saying that if you’re in a tile that borders a road or a river, you are assumed to be “on” that road/river, with special-case handling under the hood for cases where two roads/rivers border the same tile. It increases connectedness “for free” – but that’s how Civ games tend to do it anyway: encourage the player to put roads everywhere!
Thoughts on a postcard…
Warnings are very, very important in any compiled language: they tell you that the computer has checked your code and realised you “probably” created a bug; they even tell you something about what the bug might be.
…but the computer isn’t sure – if it could be sure, it would be a compiler Error (in red). So (in Unity) it’s yellow, and “optional”. But in those cases where it’s not a bug – and you know it! – it’s very annoying. Most IDEs let you turn warnings on and off; Unity doesn’t … here’s how to fix it.
Current features
commit 26eafb7865965fd5ef5ee3ad4863f00acf8d10a2
- Generates hex landscapes, with heights (Civ5 bored me by being flat-McFlat-in-flatland)
- Every hex is selectable, using custom fix for Unity’s broken mouse-click handler (see below)
- Any object sitting on landscape is selectable (ditto)
- Selected units move if you click any of the adjacent hexes (shown using f-ugly green arrows on screenshot)
The green “you can move here” arrows look like spider-legs at the moment. #TotalFail. Next build I’m going to delete them (despite having spent ages tweaking the procgen mesh generation for them, sigh) and do something based on wireframe cages, I think.
Techniques
Hexes
I started with simple prototyping around hexes, but soon found that it’s worth investing the time to implement all the primitives in Amit’s page on Hexagon grids for games: http://www.redblobgames.com/grids/hexagons/
In practice, the ability to create a class that lets you do “setHex( HexCoord location, GameObject[] items )”, “getContentsOfHex( HexCoord location )”, and things like “getNeighboursOf” … very rapidly becomes essential.
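For a flavour of those primitives, here’s a minimal sketch in Python of axial-coordinate hexes, following Amit’s guide (the names are my own, loosely mirroring the setHex/getContentsOfHex calls described above – not actual project code):

```python
# The six neighbour offsets for axial ("q, r") hex coordinates,
# as described on redblobgames.com/grids/hexagons/.
AXIAL_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def neighbours_of(q, r):
    """Return the six hex coordinates adjacent to (q, r)."""
    return [(q + dq, r + dr) for dq, dr in AXIAL_DIRECTIONS]

class HexGrid:
    """Map from hex coordinate -> list of objects sitting on that hex."""
    def __init__(self):
        self._cells = {}

    def set_hex(self, coord, items):
        self._cells[coord] = list(items)

    def contents_of_hex(self, coord):
        # Empty list for hexes nothing has been placed on
        return self._cells.get(coord, [])
```

With these in place, movement, selection, and “what can I reach from here?” queries all reduce to a few dictionary lookups.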
Mouse clicks in Unity
IMHO these work pretty badly. They require the physics engine, which – by definition – returns the WRONG answer when you ask “what did I click on?” (it randomises the answer on every click!). They also fundamentally oppose Unity’s own core design (in the Editor, clicking any element of a prefab selects the prefab).
So I wrote my own “better mouse handler” that fixes all that. When you click in the scene, it automatically propagates up the tree, finds any listeners, informs them what was clicked, and lets you write good, clean code. Unlike the Unity built-in version.
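The propagation idea itself is simple. Here’s a language-agnostic sketch in Python (my own illustration, not the actual handler code): walk from the clicked object up through its parents until something has registered interest, and tell that listener which node was really hit:

```python
class Node:
    """A scene-graph node; on_click is an optional listener callback."""
    def __init__(self, name, parent=None, on_click=None):
        self.name = name
        self.parent = parent
        self.on_click = on_click

def dispatch_click(hit_node):
    """Bubble a click up the hierarchy to the nearest listener."""
    node = hit_node
    while node is not None:
        if node.on_click is not None:
            node.on_click(hit_node)  # listener learns the exact node that was hit
            return node
        node = node.parent
    return None  # nobody handled the click
```

This mirrors the Editor’s own behaviour: clicking a mesh inside a prefab selects the prefab, but the handler still knows exactly which sub-object was under the cursor.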
Procedural meshes for arrows
With hindsight, I should have just modelled these in blender. But I thought: I want a sinusoidal curve arrow; how hard can it be? I may want to animate it later, by destroying/adding points – that would be a lot of work with Unity’s partial animation system (it’s great for humanoids, less great for geometry) – but animating points in a mesh from C# code is super-easy.
In the end, I spent way too long tweaking the look, and on having 2-sided polygons that broke the Unity 5 Standard shader by being too thin (on the plus side: I now know what that mistake looks like, and I’ll recognize it in future Unity projects. It has a very peculiar, unexpected look to it).
I should have just made them in Blender, and – if I got as far as wanting to animate them – re-modelled them in source code later (or found a “convert blender file to C vertices array” script, of which I’m sure there are hundreds on the web). Doh!
#lessonLearned.
Every week, I have to use six different Office Software Suites:
- At school: Microsoft Office 2013
- At university: Microsoft Office 365
- At work: OpenOffice
- At home: LibreOffice
- Everywhere: Apple Keynote
- Everywhere: Google Docs
As an expert computer user (former SysAdmin), I’m often asked for help by people with non-computing backgrounds. When they see how many different suites I’m using, they’re … surprised, to say the least. Here’s a quick snapshot of what and why.
Unity is still the only major game-engine with an effective, established Asset Store. This is an enormous benefit to game developers – but do you feel you’re making full use of it?
I’ve bought and used hundreds of Unity plugins, models, scripts, etc from 3rd parties. I’ve found some amazing things that transformed my development.
TL;DR: please share your recommended assets using this form: http://goo.gl/forms/G3vddOdRL3
Things we want to improve
This is a shortlist; if you’ve got areas you want to improve, please add a comment.
A few months ago I ran a survey to find out which programming-languages people were using with Entity Systems:
https://docs.google.com/forms/d/18JF6uCHI0nZ1-Yel76uZzL1UfFMI21QvDlcnXSGXSHo/viewform
I’m about to publish a Patreon article on Entity Systems (here if you want to support me), but I wanted to put something up on my blog at the same time, so here’s a quick look at the stats.