Categories
programming

SVGKit 2013 – Development

SVG is an awesome image format that’s widely used and works in all browsers. SVG graphics make better apps and better games – and automatically “upgrade” themselves for future devices.

This post explains the underlying code architecture of SVGKit – the open-source SVG implementation for iOS/OS X. The target audience is developers who want to help improve SVGKit (adding missing features, fixing bugs, or making it more compliant with the SVG Specification).

Goals

Primary goals of the SVGKit project:

  1. 100% compliance with the SVG Specification
  2. Seamless integration with iOS (iPad/iPhone) and OS X
  3. Performance better than PNG/JPG/bitmap graphics
  4. …a library good enough that Apple would have liked to have included it in iOS

NB: the license terms for SVGKit are, without prejudice: “you can do anything you want with this, so long as you give credit to the SVGKit authors for their work”. Many of us are using it in commercial projects.

Core structure

The SVG Specification forces us to split the library into two parts, from the very start:

  1. SVG Spec – 100% defined by the W3 Consortium
  2. Native rendering – approx 10% defined by the W3 Consortium

The SVG Spec does have *some* requirements on the native rendering, and it has a lot of “guidelines” – but on the whole, it’s undefined, so that we can provide an implementation that makes sense on our platform (iOS/OS X).

I’ve divided this up into independent sections:

  1. SVG Spec
    1. Locating an input stream (e.g. a file, or an HTTP URL)
    2. XML parsing (low-level)
    3. DOM parsing from XML
    4. SVG parsing from DOM
  2. Native rendering – approx 10% defined by the W3 Consortium
    1. Conversion from SVG + DOM to SVG data (including: cascading, as per CSS (required by SVG Spec!))
    2. Dynamic changes to render data, to support Vector Graphics (Apple’s runtime support for vectors is – ironically – weak)
    3. Export to disk, using the latest copy of your modified DOM
    4. Export from SVG data to OpenGL (via raw bytes), to Apple’s (CALayer/UIView), and to arbitrary CGContextRef instances

“Apple’s support for vector graphics is weak”

This was the biggest surprise to me: Apple has spent a decade marketing their OS (Mac / OS X) as “vector based”, etc.

In practice … OS X libraries were usually sparsely documented by Apple, and until iPhone came along, they were messy, buggy, poorly designed, and full of “out of date” methods. With iPhone OS (now renamed “iOS”), Apple cleaned their house out, and made some very lean, clear, logical APIs (with many fewer bugs!). They also – finally – documented it all.

That’s an amazing achievement, it’s very impressive. But along the way (probably to save time) they ignored some parts. The original iPhone’s CPU and GPU were very weak (compared to today), so it’s no surprise that Apple didn’t update their vector graphics libraries. iOS (as of 2013) is still using the under-documented and flawed OS X classes.

(NB: the lack of documentation also means that very few people know how to use Core Animation/Quartz/CALayer for high performance – you have to “experiment” and deduce what Apple *might* be doing, and test extensively. [Incidentally, there’s a lot of misinformation around – rumour and theory, in the absence of official docs from Apple])

Find the link for CALayer and bookmark it. This core class is where Apple’s vector libraries and main rendering intersect. It’s powerful – but it’s ugly and bloated too.

Simple bits, see elsewhere

“Locating an input stream (e.g. a file, or an HTTP URL)”

c.f. the SVGKit Usage post. This stuff is very simple, but it’s lacking features. It would be great for you to add some new SVGKSource subclasses with better features.

“XML parsing (low-level)”

Currently uses libxml (because that is built-in to iOS, OS X, and Xcode).

This wraps libxml, and adds three features:

  1. Captures every parse-error, and provides a list + line numbers when parsing is finished (libxml doesn’t have this feature by default)
  2. Converts low-level libxml C library to high-level ObjectiveC calls
  3. Provides a “modular” parsing system, where parsing code is very simple to write

On the whole, we have NO INTENTION of changing the parser – it works, and it’s intended to be as simple as possible. It’s really just an upgrade to libxml.

But there’s one thing it’s missing that we’d love to add:

  • Streaming / interrupt-based parsing

This is potentially more efficient in CPU and memory usage (not much, since we HAVE to use DOM – it’s required by the SVG Spec), but requires making the SVGKParser.m class a bit cleverer.
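
For the curious, here’s a minimal sketch of what a push/chunked parse with libxml might look like, fed from an NSInputStream. The method name and the way it would slot into SVGKParser.m are assumptions for illustration – this is not the current API:
[objc]
#import <libxml/parser.h>

// HYPOTHETICAL method, for illustration only -- not part of SVGKParser today
- (void) parseStreamInChunks:(NSInputStream*) stream
{
	[stream open];

	uint8_t buffer[4096];
	xmlParserCtxtPtr pushContext = NULL;
	NSInteger bytesRead;

	while( (bytesRead = [stream read:buffer maxLength:sizeof(buffer)]) > 0 )
	{
		if( pushContext == NULL ) // first chunk: create the push-parser context
			pushContext = xmlCreatePushParserCtxt( NULL, NULL, (const char*) buffer, (int) bytesRead, NULL );
		else
			xmlParseChunk( pushContext, (const char*) buffer, (int) bytesRead, 0 );
	}

	if( pushContext != NULL )
	{
		xmlParseChunk( pushContext, NULL, 0, 1 ); // terminate = 1: no more chunks coming
		xmlFreeParserCtxt( pushContext );
	}

	[stream close];
}
[/objc]
Even then, the SVG Spec forces us to build a full DOM, so the win is mostly in not holding the entire raw XML text in memory at the same time.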

“Export from SVG data to OpenGL/NSData/CALayer/CGContextRef/etc”

Check out the “Exporters” sub-folder. It contains simple example classes – one per exporter – showing how to efficiently use SVGKImage to help you export stuff.

Note that approximately half of all SVG files have NO SIZE! – they are “infinite” – and you want to re-use SVGKImage’s code for calculating “correct”, or “best guess” sizes.

Since UIView uses CALayers internally, you can take any CALayer and add it to a UIView (e.g. [someView.layer addSublayer:myCALayer]).
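
As a rough sketch – the exact accessor for pulling the layer tree out of SVGKImage is from memory, so treat it as an assumption and check SVGKImage.h / the Exporters folder:
[objc]
SVGKImage* svgImage = [SVGKImage imageNamed:@"mygraphic.svg"];

// ASSUMPTION: accessor name quoted from memory; check SVGKImage.h for the real one
CALayer* layerTree = svgImage.CALayerTree;

// ...someView is whatever UIView you want the graphic to appear in.
// Any CALayer can be added to any UIView, via the view's backing layer:
[someView.layer addSublayer:layerTree];
[/objc]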

Complex parts

“DOM parsing from XML”

The way this works is very rigidly defined by the SVG Spec, and you absolutely must stick to the Spec.

DOM is a major web standard, and the SVG authors thought it would save everyone a lot of time to re-use it.

Unfortunately – tragically! – iOS has no DOM implementation:

  1. Apple has a private implementation available in Safari. It’s not entirely private (we had to rename one of our classes because of a careless name from Apple), but Apple’s policy is “if it’s not explicitly public, we can reject your app for using it”. In theory, we could get access to this via the WebKit source, or via an embedded WebView – but it would probably be much slower, and use a lot more memory, than our current native implementation.
  2. There are a couple of open-source implementations, most of which have sadly been abandoned by their authors. Also, most of those I looked at are incomplete, and non-compliant; we can’t afford to rely on them.

The process for adding / modifying DOM classes goes like this (see the sketch after the list):

  1. Copy/paste the DOM official class name (including the capitalization)
  2. In the header, paste the HTTP link to the *paragraph* of the DOM specification that defines that DOM class
  3. …then copy/paste the DOM’s interface/class declaration (usually 5-10 lines of code beginning “interface”, and blockquoted)
  4. Copy/paste that a second time, this time as the ObjectiveC Interface
  5. Convert every “variable” to an ObjectiveC @property
    • Note: by definition, you are supposed to replace DOMString with NSString*
  6. Convert every “method” to an ObjectiveC “-(something) methodSomething:(something);” method – NB: do *not* implement as C-methods
  7. Fill the .m file with @synthesize directives
  8. Create a blank method in the .m file for each method, and put an “NSAssert( FALSE, “Not implemented yet” );” in there (or implement it yourself)
  9. Any other DOM classes that are used as variables or method parameters … do all the above again
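
As a worked (and hedged) example of the pattern, here’s roughly what steps 1–8 produce for the DOM “Attr” interface. This is an illustration only, not a copy of the class that’s actually in SVGKit (and the Level 2 spec adds an ownerElement attribute, omitted here):
[objc]
/** Attr.h

 Step 2: paste the link to the exact paragraph of the DOM spec that defines Attr.

 Step 3: paste the spec's own declaration:

 interface Attr : Node {
   readonly attribute DOMString  name;
   readonly attribute boolean    specified;
            attribute DOMString  value;
 };
 */
@interface Attr : Node

@property(nonatomic,retain,readonly) NSString* name; /**< DOMString becomes NSString* */
@property(nonatomic,readonly) BOOL specified;
@property(nonatomic,retain) NSString* value;

@end

// Attr.m
@implementation Attr

@synthesize name;
@synthesize specified;
@synthesize value;

@end
[/objc]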

Hiding NSArray and NSDictionary behind SVG Spec methods

The SVG Spec is designed to work in ANY programming language – so it doesn’t support some core features of ObjectiveC, such as fast enumeration (i.e. the “for( NSObject* o in array)” syntax).

A much bigger problem for you is that you can’t include “init” methods, which are necessary for good ObjectiveC code.

Our DOM and SVG classes *must* be spec compliant, so we cannot expose the raw array – and we can’t add methods to provide fast enumeration, nor custom init methods.

Instead … when you have a situation like that, and you want users to be able to (optionally) access them … go ahead and do it, but put the “bonus” methods into a separate header file.

In Xcode, this is called a “class extension”, and it’s a special feature of ObjectiveC. Select “Class Extension” when creating the new file.

e.g. look at the source for NodeList.h and NodeList.m – and notice that some of the methods are missing from the header file, but appear in NodeList+Mutable.h
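
The pattern looks roughly like this – the class and property names below are invented for illustration; see NodeList.h / NodeList+Mutable.h for the real thing:
[objc]
// MyList.h -- the spec-compliant public header: read-only, exactly as the Spec demands
@interface MyList : NSObject
@property(nonatomic,retain,readonly) NSArray* internalArray;
@end

// MyList+Mutable.h -- a class extension in a SEPARATE header file,
// imported only by MyList.m and by SVGKit's own parsing code
@interface MyList ()
@property(nonatomic,retain,readwrite) NSArray* internalArray;
@end
[/objc]
Because MyList.m imports the +Mutable header, the readwrite setter gets synthesized – but code that only imports MyList.h still sees the property as readonly, exactly as the Spec requires.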

In general, you should use the following naming strategies:

  • If the bonus features are needed to modify properties that SVG Spec says are “read only”, name the extension “Mutable” to make it clear that’s what it’s for
  • If the bonus features are ONLY a convenience, e.g. to enable fast enumeration, name the extension “NotInSpec” or similar

“SVG parsing from DOM”

Again, the SVG spec rigorously defines the name of every “SVG” class, and its methods, and its variables. You must follow these exactly.

The process is identical to the one described in the DOM Spec notes above.

NB: SVG was designed and intended to be implemented on-top-of DOM; many of the SVG Spec methods are trivial to implement if you use the DOM methods that already exist. You are not supposed to re-invent the wheel!

For instance, have a look at DOMDocument, and Node, and Element – they have some very useful methods built-in to them.

Remember: by the time SVGElement’s init method is called, every SVG tag has already been parsed into a DOM Element (which extends DOM Node). It already has all the XML attributes pre-parsed and available to you!

Gotcha 1: SVG attributes are NOT nil

The SVG Spec defines that “empty” or “missing” attributes have to be returned NOT AS NULL but as an empty string (“”).

This means you must NEVER write:
[objc]
NSString* fillAttribute = [self getAttribute:@"fill"];
if( fillAttribute ) // DO NOT DO THIS!!!

[/objc]
…because, according to the spec, fillAttribute will be non-nil even when the attribute is blank or missing in the SVG. Instead, you must (according to spec) do:
[objc]
NSString* fillAttribute = [self getAttribute:@"fill"];
if( fillAttribute.length > 0 ) // This is correct, according to SVG Spec

[/objc]

Gotcha 2: XML Namespaces

You can parse a lot of SVGs and ignore namespaces; most SVGs use the same “convention” for naming the XML tags.

It’s a convention; it’s a default; it IS NOT GUARANTEED.

But XML namespaces are guaranteed. All SVGKit code should use namespaces explicitly.

As a convenience for users, the DOM spec allows us to provide methods that do NOT need an explicit namespace – but you should not be using them! They will occasionally fail when used on some input SVG files.

So, for instance, you should NOT do this:
[objc]
NSString* fillAttribute = [self getAttribute:@"fill"]; // DON’T DO THIS (it’ll work 99% of the time, but … best not to)
[/objc]
instead do this:
[objc]
NSString* fillAttribute = [self getAttributeNS:svgNamespace localName:@"fill"]; // CORRECT. (svgNamespace is the HTTP URL of the official SVG Spec)
[/objc]
At the moment, we don’t have a convenience method for “get the namespace that means SVG” – this really should be part of the SVGKParserSVG extension.

NB: if you’re afraid this namespace stuff won’t work, note that SVGKParser already has full namespace support, and will automatically create the SVG namespace if needed when parsing incoming SVG files.
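
Until that convenience method exists, a minimal sketch looks like this – the constant name is made up, but the namespace URI itself is fixed by the SVG Spec:
[objc]
// The official SVG namespace URI (this exact string is defined by the SVG Spec):
static NSString* const kSVGNamespaceURI = @"http://www.w3.org/2000/svg";

// ...then, inside your SVGElement / parsing code:
NSString* fillAttributeValue = [self getAttributeNS:kSVGNamespaceURI localName:@"fill"];
[/objc]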

Gotcha 3: Cascading (as in: “Cascading Style Sheets” i.e. CSS)

The SVG Spec is officially based on DOM; but it’s also (officially) based on CSS.

Fortunately, we only have to support a subset of CSS – the two parts that SVG uses are:

  1. Embedding stylesheets, or referencing them with an external “link” tag
  2. Cascading

But cascading is tricky. There are approximately 50 XML attributes that – officially – must be “cascaded” when using SVG. There’s a table of them in the spec – http://www.w3.org/TR/SVG/propidx.html

Cascading is also potentially quite slow – you have to look up the property in many different places, and check each one “in correct order” until you find the first match.

So, we have a method in SVGElement that does all this for you:
[objc]
-(NSString*) cascadedValueForStylableProperty:(NSString*) stylableProperty
[/objc]
…but that is not part of the SVG Spec; it may be part of the CSS spec, under a different class (I haven’t found it yet). We’ll leave that method there as a convenience, but you might need to import a special header to access it (since it’s not part of the SVG Spec).

To use cascading (which you MUST do), instead of this:
[objc]
// DON’T DO THIS (it ignores cascading and styles and CSS-classes)
Attr* fillAttribute = [self getAttribute:@"fill"];
[/objc]
…do this:
[objc]
// Automatically does all the CSS stuff for you
NSString* fillAttributeValue = [self cascadedValueForStylableProperty:@"fill"];
[/objc]
…eventually, we’ll add an error / NSAssert for cases where you pass in a property that is not one of the cascadeable ones – for now, just use the table as a reference.
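
Putting Gotchas 1–3 together, typical (hedged) usage inside an SVGElement subclass looks something like this – the value-parsing at the end is deliberately left vague, because the helper names vary:
[objc]
// Cascading checks the XML attribute, the style="" attribute, any stylesheets,
// and values inherited from parent elements -- in the order CSS requires:
NSString* fill        = [self cascadedValueForStylableProperty:@"fill"];
NSString* strokeWidth = [self cascadedValueForStylableProperty:@"stroke-width"];

if( fill.length > 0 ) // NB: Gotcha 1 -- missing attributes come back as @"", never nil
{
	// ...convert the string into a CGColorRef / CGFloat using whatever parsing
	// helpers SVGKit provides (names omitted here on purpose)
}
[/objc]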

Class names and method names

We couldn’t use a classname prefix of “SVG” because the SVG spec reserves all classnames beginning “SVG”. Inside the project, you’ll find all of these in the “SVG DOM” folder – please note: these are MANDATED by the SVG Spec, we did NOT come up with the names.

Apple had a similar problem when they invented GLKit – the prefix “GL” was already used by the OpenGL library they were extending, so they used the prefix “GLK” for their classes. Hence … “SVGK”.

Whenever you create a new class that is not part of the SVG Spec – for any reason – you must prefix the name with “SVGK”.

Some of our classes – for historic reasons – don’t follow this convention. Yet. Feel free to refactor any you encounter.

Categories
bitching Web 0.1

Microsoft’s Fraudulent Windows8 “upgrade” offer?

Windows 8

It’s great, it’s beautifully presented, and it’s the best OS I’ve used in the last 20 years or so.

It makes OS X look clunky (which, let’s face it – for Microsoft – is one hell of an achievement)

The upgrade

My primary windows machine (used to) run XP. Microsoft has a “special offer” to upgrade you to Windows 8. So I took it, and paid the extra for the physical DVD to be sent to me. That was on November 20th – more than 3 weeks ago, and it never arrived.

In the meantime, Microsoft auto-downloads and installs “Windows 8”

Or they claim to…

The bait-and-switch

…in the weeks since, I’ve found LOTS of Windows apps crashing, with “out of memory” errors on my 12 GB RAM machine. WTF?

After days of searching, I eventually found the cause:

Microsoft will charge you for 64bit windows BUT ONLY GIVE YOU 32bit windows

They never state this.

Allegedly, the DVD they send (or not, in my case) happens to contain the 64bit version. You won’t know this, but if you work it out, you can allegedly delete the crap they install on your system and replace it with the correct, actual, Windows 8.

The problem: Installed Physical Memory is different from Available Memory

32bit Windows 8 running on a 64bit CPU is ridiculous, from any perspective.

If you run “Device Information”, you’ll see a massive discrepancy between the memory that Microsoft agrees is in your machine (8GB, 16GB, 32GB, etc), and the memory Windows is willing to use (typically: 3.1GB, 2.9GB, 3.5GB or similar).

There’s nothing you can do to make Windows “enable” your memory – a 32bit copy of Windows cannot access more than 4GB of memory, by its very nature.

Good luck finding this out – if you select “Windows 8” on Microsoft’s own website and search for “RAM” or “memory”, it takes you to Windows 7-specific problems instead. Sigh.

Addendum 1: Microsoft support

  1. Microsoft’s “Live Support” personnel HUNG UP 5 seconds into the live-chat
  2. Microsoft’s official email address that sends the electronic order info … has an auto-responder saying it’s not ACTUALLY an email address, it’s a fake

What can you do? … not much.

Addendum 2: Microsoft’s ‘other’ support

*IF* you can get through to Microsoft’s generic, non-Windows8, support, you might be in luck.

That way, I finally got into a livechat with someone from Microsoft who “reprocessed” the mailing of the DVD. It’s a 1-2 week wait (how are they sending these things – by pigeon??), and we’ll see what happens…

They also gave me a different download link for Windows8, which they specifically stated was the 64 bit version.

…12 hours later…

Nope! Microsoft lied again: it re-installed the OS it was already running, with zero changes. Still 32bit. Still application crashes left, right, and center.

Categories
amusing marketing and PR Web 0.1

HSBC’s web team: WTF?

Why does the login URL for internet banking:

http://www.hsbc.co.uk/1/2/marketing/businessinternetbanking

…redirect to the newsletter for global investors:

https://investments.hsbc.co.uk/article/world-selection-newsletter

?

Do you *want* people to think your website has been hacked?

Or do you just not know what a cool URI is?

I think your VP Marketing / Marketing Director needs a slap upside the head…

Categories
amusing funny

Epic Rap Battles of History compilation…


“Commander of the third reich,
and a little known fact:
Also dope on the mic!”

EDIT: in case you missed it: “… of the people … by the people … for the people … EAGLE!”

Categories
programming

ObjectiveC: how to make an abstract class / forbid the “init” method

Abstract classes: saving programmers from each other

For trivial apps, no-one cares. But most libraries take huge advantage of the concept of “subclassing”, and programmers using those libraries need to make intelligent choices about “which subclass do I use?”.

Thanks to auto-complete, or “because it sounded like what I needed at that moment in time” – or simply “because I was tired” – your base class gets instantiated when it shouldn’t have been. And strange bugs come from it, wasting everyone’s time. You might argue “not MY time”, but I’m a strong believer in writing code that EITHER does the obvious OR protects the people using it – my code doesn’t crash; it checks for obvious mistakes (e.g. checks that a file exists before loading it!), etc.

In the long run, that frequently comes back to help you: when YOU then re-use your own code, and make a dumb mistake because it’s been a long time and you’d forgotten how to use it.

In some languages, you can create “abstract base classes” that allow other classes to share type, but cannot be used on their own. This makes it obvious to other programmers that they should look for subclasses and pick one – instead of trying to use the superclass.

Unfortunately, ObjectiveC has no support for “abstract classes”.

…or does it?

What’s an abstract class?

An abstract class is one that cannot be instantiated. To achieve that in Objective-C, all you have to do is:

@implementation DontAllowInit

- (id)init
{
    NSAssert(false, @"You cannot init this class directly. Instead, use a subclass e.g. AcceptableSubclass");

    // NB: I prefer to use NSAssert because this is aimed at programmers, and 
    //     ObjC programmers should generally be using assertions during dev!
 
    // You could instead use more fancy approaches, like raising an NSException
    //     - but Apple/Cocoa are very anti-exception, and don't support them well.

    return nil;
}
@end

…but this causes a problem. Because as soon as someone creates a subclass, they’ll find their code crashes:

@interface SubClass : DontAllowInit
@end

@implementation SubClass

- (id)init
{
    self = [super init]; // CRASH!
    if( self != nil )
    {
        // all the normal setup code
    }
    return self;
}
@end

You can work around this by writing documentation that says:

/*
This class can’t be instantiated, because I wanted an Abstract Class, but Objective-C was too primitive to allow it.

So, um, please don’t call [super init]. Instead call … ah .. [super secretInit] which does the same thing, but which other people won’t realise exists!
*/

There’s an obvious problem there … the super-secret init is easy to call anyway, and BOOM goes your library. It might seem obvious to you that no-one would call that method without understanding it, but that’s the way of the world.

Selective Denial: what am I?

The solution is to think about what happens when you instantiate a subclass. The key thing here is that when you call:

Super* s = [[Super alloc] init];

it’s NOT the same as when you call:

Sub* s = [[Sub alloc] init];

…in the first case, the thing that gets sent “init” is an instance of “Super”, whereas in the second case it’s an instance of “Sub”.

That might not sound interesting, but when Sub executes the first (standards-compliant) line of its init method:

-(id) init
{
     self = [super init];

…then the code in Super.m is *not* being run on an object of type “Super”, but rather an object of type “Sub”.

And so we have a solution:

@implementation DontAllowInit
- (id)init
{
	if( [self class] == [DontAllowInit class])
	{
		NSAssert(false, @"You cannot init this class directly. Instead, use a subclass e.g. MyPreferredSubclass");
		
		return nil;
	}
	else
		return [super init];
}
@end
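
A quick sanity-check of how that behaves, using the example classes from earlier in this post:

// Fine: [self class] is SubClass, not DontAllowInit, so [super init] runs normally
SubClass* works = [[SubClass alloc] init];

// Asserts during development (and returns nil): you instantiated the abstract class directly
DontAllowInit* oops = [[DontAllowInit alloc] init];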

Does it really matter?

When writing code, you have lots to think about. In my years of experience, two of the most important questions are:

  1. Does it do what it says / work as intended?
  2. Can someone else use (and modify) the code later, when you’re not there … correctly?

Documentation goes a long way to solving both those issues. However … docs take a long time to write, and more importantly:

Other people frequently don’t read the documentation

More importantly:

If you are a great programmer, other programmers SHOULD NOT NEED TO read the code documentation any more than they expected to

“Expected to” is critical here. If your codebase is 1 million lines long, then a programmer would be insane to think they could just “dive in” and start writing / modifying it – the thing is fantastically complex. But if it’s clear and simple, then often they should expect to read the “core” documentation, and be able to work the rest out as they go, from reading your class and method names.

Abstract classes enable you – with very little effort – to use complex chains of OOP subclassing without endangering the programmers who come after you.

Categories
advocacy community computer games design entrepreneurship games industry games publishing marketing MMOG development

Reaction to CoH (City of Heroes) community, and NCsoft’s response

(background: after 8 years as one of the world’s mid-tier MMO games, City of Heroes (+ City of Villains) is being shut down. The community banded together to ask if they could take over running the world that meant so much to them; NCsoft (the publisher, and a company I used to work for) said: no)

“No means no”

NCsoft is basically saying: “Please. We love you, but … you just *don’t understand*. It’s more complex than you could possibly imagine!”

That’s not a dialogue; it reads like a “this conversation ends when I stop talking” monologue.

“Why on earth wouldn’t you say yes?”

Lots of people are wondering that. Obviously, being a public company, no-one’s going to answer it in public. We can only guess. But here are a few (over-the-top) suggestions…

If the community succeeds … then THE FEAR IS: some Executive(s), somewhere, are going to look bad (I’m not accusing; I’m just saying that in the corporates I’ve worked at, this kind of *fear* is common). A lot of the work they do is guess-work. That’s fine – they’re paid to make the best decision they can, while never truly knowing if they made the right one.

But then a bunch of inexperienced, eager novices come along and offer to do it for free. And if – the worst possible outcome – they succeed … that could make someone look really bad.

Another thing I’ve seen in corporate politics at this level is a lot of “horse-trading”, i.e. sacrificing one project (that someone else resents, or has been snubbed by) in return for that person helping out with a problem on a separate project that you’re trying to rescue.

Who (individually or collectively) made the decision, and what did they stand to gain or lose? (They are probably worried about / aiming for / trying to win … something bigger than this single game. c.f. my 2009 post on why NCsoft is so huge: a company that size gains nothing from “profitable” games, they need “mega profitable” games.)

“Software is software”

Ha!

Has anyone found out yet what format(s) the data is in? Imagine the most insane, unwieldy, incomprehensible, inconsistent, unusable format that bears no relationship *at all* to the game itself … and you’re probably half way there.

This game was written *8 years ago*.

Read the biographies of the people involved. Were they non-game developers … academics with decades of expertise in distributed systems and real-time transaction messaging? … or … were they a bunch of smart guys trying to catch up with the academic research in the space of months, just enough to build and ship a major new computer game? And … most importantly … to make it “fun” before they ran out of budget.

I’ve not yet found an MMO where the people who made it feel – with hindsight – they had any idea what they were doing at the start. When they started, of course, many of them thought they’d covered all the bases, and were “well prepared”. Everyone tries their best up-front (or fails completely); but everyone finds it much harder than expected.

What should we/they do?

Looking at it analytically and logically, I’d give the community a very high chance of failing dismally if they were given the game. But … the eagerness, the excitement, the sheer determination: I’d give them a small chance of succeeding despite everything. Simply because: when you see this much determination, it often wins out and overcomes the obstacles in its way.

So, I say: Go for it.

They know the game they’re trying to (re-)create. The difficulty is simple: whenever you try to re-create a game, the temptation is always there to “improve” it … and 99 times in 100, you find you slightly misunderstood what you were “improving”.

Categories
fixing your desktop programming

Fixing Firefox: Prevent it quitting and losing all your work

Are you having this problem?

“I tell Firefox to ask before quitting, but it always quits without asking”

Especially on Macs, where cmd-w (close tab) and cmd-q (close window) are immediately next to each other…

Solution

It looks as though Firefox won’t fix this. It’s been 3 years, and it’s been reported to them plenty of times.

In the meantime… I found one solution that *does* work:

Plugin: “Always Ask” (tested on OS X Mountain Lion, with Firefox 16 – works fine)

Background / history

I find it interesting. The developers decided to remove features – but seem to have misunderstood how/why those features were important to the userbase.

Fortunately, the Firefox team are extremely open with their process. Anyone can see the reasoning and the debate that goes into each change – and can comment on it themselves with their own feedback.

So, reading the bug reports on Firefox.com, you can build up a picture. My impression – based on reading a bunch of these reports over the past few years – is that it went something like this:

  1. “The ONLY reason someone would want to “not quit” is because they lose all open tabs”
  2. “We’ve changed Firefox so that it re-opens all tabs by default, all the time”
  3. “Therefore: we can remove the feature”

That’s pretty sound reasoning. Although (and I’m not sure why?) … it seems they forgot to remove the GUI that lets users *insist* on the original behaviour, and to remove the three about:config flags that let advanced users fine-control it. Those are all still there, but they don’t work (any more).

But … by browsing the open bug reports, we find a bunch more reasons why Firefox had the original feature (just a sampling from what I read) :

  1. Accidentally closing would lose your set of opened-tabs (OK, they understood that one!)
  2. Firefox has disabled long-term disk caching for the past 4+ years – it’s a basic feature of web browsers, but to work around bugs in their implementation, where data could get corrupted, Firefox mostly turned it off. It’s proving a long and difficult task to fix, partly because the rest of the browser has changed so much in that time, and partly because the original implementation was poorly designed (I’ve been watching the bug thread for 3+ years now)
    • …so, when you restart Firefox, ALL your data has to be re-loaded from the web.
  3. If you’re offline when you accidentally quit: BANG! You (often) lose EVERYTHING.
  4. Even if you’re online: Firefox reloads every page. This can take a lot of bandwidth, and a minute or more to complete on broadband … on dialup, or tethered iPhone, it can take minutes or tens of minutes
  5. Some webpages WILL NOT reload after quitting, as a security precaution (for instance: internet banking). If you were typing into a form … the data there is lost, forever, with no workaround
  6. On Macs / OS X: the “close tab” shortcut (used hundreds of times per day) is adjacent to the “quit Firefox” shortcut (used rarely)
    • …OS X has a feature where you can re-assign those shortcuts. If you use this feature, Firefox IGNORES THE OPERATING SYSTEM, and continues to use cmd-q (although it will allow you to use whatever else you chose “in addition”)
    • On Windows, Linux, etc – the shortcuts are very different, so that it’s very difficult to achieve this. In many versions, it’s impossible – only OS X has a global “kill everything” shortcut for each app

…which explains why people get so angry and frustrated at the removal of this feature from the browser.

In short:

  • ONE reason to do with another Firefox behaviour (the thing that Firefox authors “fixed”)
  • MULTIPLE reasons to do with a major Firefox bug that no-one can fix
  • ONE reason to do with online security (that’s very unlikely to change! The security problem will always be there)
  • ONE reason to do with the design of OS X

I suspect that the most common of these reasons is the Mac-specific one. So it’s very likely that a bunch of Firefox developers – who don’t use Macs – wouldn’t have been ABLE to see the problem for themselves. That underlines the importance of consulting your user-base…

Categories
amusing android bitching iphone

4 reasons NOT to install iOS 6

As a developer, I’ve been using iPhones since they first came out. I have to test my apps on every version!

iOS 6 is the first version of iOS “post Steve Jobs”. But it’s terrible – it seems to be a 2nd-rate product rushed out by a small team of startup programmers, working from their garage.

As a developer … I’m dismayed. Consumers are famously slow to change (en masse) – but they are neither stupid nor indifferent. Their tolerance is high, but not infinite. The iOS 6 experience is going to force a lot of people away from iPhones. Looks like we’ll be doing a lot more Android development in 2013 than I was expecting …

1. It will DELETE your photos

Yes, really. You can recover them (from what I’ve seen so far: all of them) if you use backup recovery tools. But seriously: WTF?

Many google hits for this, plenty on Apple’s own support forums, with no response from Apple.

Or … it will randomly delete half your photos (happened to a phone I saw).

Or … it will REDUCE the quality of all your photos until they become tiny pixellated blobs.

AND … photos taken after you upgrade to iOS 6? Forget it – they’ll be inaccessible too.

Deleting people’s photos is – commercially – unforgivable. I was amazed the first time I saw this happen.

2. It crashes. A LOT.

Until iOS 6, Apple’s OS was getting better and better with each release. I don’t *try* to crash phones, but it happens accidentally when you use the phone a lot. But iOS 6 is a total disaster.

  • iOS 2: took me 3 days to crash it
  • iOS 3: took me 3 weeks to crash it
  • iOS 4: took me 3 months to crash it
  • iOS 5: …never managed to crash it…
  • iOS 6: took me 3 seconds to crash it

To be clear: this is through normal usage, nothing special, nothing “developer-y”.

The iOS 6 crash was 100% reproducible, triggered by simply moving an icon on Springboard to a different screen, and then hitting the home button. Wow.

3. It removes GPS and Maps from your phone

Apple’s “Maps” app simply Does. Not. Work.

iOS 6 REMOVES Google Maps, and there is NO WAY to get it back.

So, now … unless you buy an additional “mapping app” (and there are none that are as good as Google Maps, unless you spend a huge amount of money), then … that GPS chip in your phone, that’s part of the cost of the phone? For most people it’s now a hunk of useless metal.

In the last 10 years, very little in mobile phones has changed the way people live their lives quite so much as the instant availability of detailed, accurate maps with GPS no matter where you are on the planet.

Apple says you can “use Google or Nokia maps by going to their websites and creating an icon on your home screen to their web app.” Wow.

4. You cannot return to iOS 5

iOS 5 worked. It was stable. It had GPS! and Maps!

…but Apple forbids you from running it if you ever install iOS 6.

As a developer, this has been a recurring nightmare: we had to make sure no-one ever upgraded a phone – even by accident. (As a developer, you test your app on every old version of iOS that you can: not just on a simulator, but on each physical phone.)

Now consumers get to find out quite how (unnecessarily and unfairly) painful that process is…

Categories
computer games dev-process games design iphone

Made an iPhone game in 2hrs 15mins (native code)

How slow is making iPhone apps using native code?

You have to write HTML5, right, if you want FAST app development on iPhone? Or Unity? Or cocos2d?

Right?

Or … write it in Objective-C … a beginner-friendly “native” language: 2 hrs and 15 mins to create the artwork, design the game, code it in native Objective-C, debug it, and push to iPhone devices

NB: first half shows: “Collect the fish, avoid the dynamite, grow bigger!”
Second half shows: “if you hit dynamite, you shrink; when you’re tiny, if you hit dynamite, you’re fishfood :(”

For the love of … WHY?

Because I entered a voluntary “48-hour game jam” (you have one weekend to make a game), and last time I went to the Apple shop for a repair, they dislodged my network card. It fell out, internally, and it’s not user-fixable (believe me, I tried – even specialist screwdrivers aren’t enough :( ).

So I did something else with my weekend. But a few hours before the competition deadline, I figured “what the heck; what could I do in a couple of hours?” … with some encouragement from The Mighty Git.

The code?

222 lines of code, including comments, blank lines – and code that I commented out because I replaced it with other code.

That’s all it takes for a working, playable, iPhone game.

…and the art?

You can’t see it from the video, but the art is resolution-independent – as your whale gets bigger, it re-renders, so that all the curves ALWAYS have razor-sharp edges. No effort required on my part.

I did all the artwork in Inkscape (free image editor for vector images), and saved as SVG (web-standard for vector images).

Then, courtesy of the open-source SVGKit project (renders vector images on iOS, because Apple doesn’t add support to their libraries – shame), and the following few lines of code:

	self.sivWhale = [[SVGKImageView alloc] initWithSVGKImage:[SVGKImage imageNamed:@"whale-1.svg"]];
	sivWhale.frame = CGRectMake( 0, 0, sivWhale.frame.size.width * sivWhale.scaleMultiplier.width, sivWhale.frame.size.height * sivWhale.scaleMultiplier.height );
	sivWhale.center = CGPointMake( self.view.frame.size.width/2.0f, 0.75f * self.view.frame.size.height );
	[self.view addSubview:sivWhale];

If that looks rather like using a built-in UIImage and UIImageView … it’s because it’s intended to. SVGKit adds a new type of image – SVGKImage – that’s almost the same as an Apple UIImage, except it’s better (it’s resolution independent). And the SVGKImageView does for SVGKImage what UIImageView does for UIImage…

Want the code?

Sadly, the version of SVGKit I used here has some bugs in it – it’s live at: https://github.com/adamgit/SVGKit/tree/transforms – but until it’s been tested and fixed by the SVGKit maintainers, it won’t appear on the main SVGKit project page.

So, feel free to use that link and play with it – but be warned: it’s NOT as stable as the main SVGKit. Yet.

Categories
games industry games publishing

The real cost of game-consoles (inflation-adjusted)

35 years of game-consoles, and their original retail price, adjusted for inflation:

i.e. a (reasonably) direct comparison of how expensive they were at the time they were launched.

Some quick observations:

  • NeoGeo and 3DO/Jaguar were insanely expensive – and, of course, sold very poorly and went bye-bye.
  • Until the Wii and the GameCube … Nintendo’s NES and SNES, and Sega’s Master System – the best-selling consoles of the golden era – were almost the cheapest ever launched.
    • (I’ve long argued that hardware price is one of the biggest factors in the success of a console, so I’m biased and cheering for this ;))
  • PlayStation 2, which kept up the immense sales trend of PS1, was slightly cheaper, following the curve down. PS3 bucked it … and sales were disappointing.
  • This chart lists *only* the launch price; it doesn’t say anything about the deep price drops over the consoles’ lifetimes. “Price” is the main thing a platform owner can change after launch (changing the hardware features / design is almost impossible).

(Found via reddit, but no link to that bad person, because they linked the image without credit / citation. Grr!)

Categories
iphone programming

Updated: Static library for iOS – automatically cross-platform

I’ve just updated my script that adds a missing feature to Xcode – making your libraries automatically work on Simulator and on Devices, with no manual intervention:

http://stackoverflow.com/questions/3520977/build-fat-static-library-device-simulator-using-xcode-and-sdk-4

…and I’ve also put the script itself into GitHub as a gist you can easily copy/paste:

https://gist.github.com/3705459

Categories
Google? Doh!

Google continues to delete user data … grr.

Just lost some work because Google staff still don’t understand the idea that “the internet is a non-reliable network”.

(Google Docs simply deletes your data – retroactively – if the internet connection goes away. It’s that “retroactively” part that’s the killer)

Makes you wonder what calibre of engineer they’re employing these days :(.

Categories
amusing usability Web 0.1

GitHub User-Interface: admission of failure?

Screenshot taken straight from the official blog post:

You see, they wanted to add a feature where you could “watch” a repository.

Only … due to some weak design (or perhaps: technology-led) decisions in the past, they already had a feature with this name, which didn’t really do what it claimed to do. Rather than fix it … they added a meaningless button that does what the existing button (Watch) pretends to do. So now, when you want to watch a project, you must NOT CLICK the Watch button, with its excellent icon, but instead the “burning lump of gas” button. Um.

Here’s a hint: if you’re designing a UI, and at any point you decide:

“STARS! Starring items is the answer!”

…and the question was anything other than “how do we Rate items?”, then: you’re wrong. Try again.

(PS: they’ve also fixed the extremely annoying long-time bug where people could raise Issues, or Comment, on your repository – but you’d never find out, again because of technical decisions / implementation issues on their system. Apparently all fixed now. Yay!)

Categories
programming project management

Source Code: never distribute an app as “source code”

I just ran into a 2004 piece of FREE software that I wanted to use, but can’t, because of poor choices by the original author. I’m posting this because I think the ideological reasons behind those choices are now “of historical interest only” and I’m liable to forget them completely a few years from now – but the underlying issues remain.

Especially in a world where Android, on a marketing platform of “openness”, is competing with iPhone, on a platform of “all users are lazy or idiots”.

(As a user, I hate being treated like an idiot. Except when it means a computer does all the work for me. Fair? Reasonable? Nope! When it comes to users … developers can’t win :) )

How should you distribute an application?

There used to be a raging debate, for decades, about the “correct” way to distribute applications. A bunch of well-meaning (but IMHO un-wise) Open Source programmers advocated:

“The only way to distribute a program … is as raw Source Code”

This was not about “is Open Source good?” – this was *in addition to* making source available. The question was: should you send people a copy of the source – or should you compile / package binaries (one-click applications) for people to download and “just run”?

The debate seems to be dead (finally), with the world coming down on the side of practicality, rather than theory/ideology. I’m not entirely happy with that – but it always felt obvious to me that it would go that way. I think the App Store in particular has gone a long way to “proving” it once and for all: people who want apps … want apps. They don’t want source code. Even if having the source would sometimes help.

2012: a worthy project that’s dead and useless

Today I ran into a tool that concretely demonstrates the futility of the “only source code is correct” argument: SLOCcount

The project as it stands is unusable unless you happen to be running one of the two linux distros where people have built the binary – or you’re willing to waste anything from “hours” to “days” of time “configuring” the app.

(with a normal app, that “hours of time” is replaced by “0.1 seconds it takes to double-click the app icon”)

This is a simple command-line tool. To run it, you must:

  1. Download the source
  2. Read the usage instructions
  3. Ignore the usage instructions. Start again with the “installation” instructions
  4. Install “make” (takes 0.5-3 hours)
  5. Learn how to use “make” (takes 1-3 days, if you don’t already know it)
  6. Debug “make” (takes 0.5-3 hours)
  7. Re-write the config files for the project so that they will work with “make” (takes 0.5 hours)
  8. (probably) install a new compiler (takes 0.5 hours)
  9. (probably) install a new linker (takes 0.5 hours)
  10. Cross your fingers, pray to whatever Gods you believe in, sacrifice a lamb, etc
    • In case you’re unfamiliar with “make”: it typically doesn’t work on any computer except one identical to the one where it was originally tested, so you have to go through and keep tweaking and fixing until it works. Kind of. There are no checks-and-balances – so you NEVER ACTUALLY KNOW if make has worked; you just have to hope.
    • Finally: go to Step 1 of the usage instructions, and try to use the app

No wonder people don’t use it. No wonder people don’t update it, even though it’s “Open Source”. No wonder this – otherwise useful tool – is effectively dead.

Fundamental problem?

OK, so the straw-man example above mainly comes down to:

“make” is the world’s worst configuration tool

SLOCcount was probably killed by the choice of bad tools, as much as anything else. But – how much choice did the author have, really?

The problem is – and this is the interesting point of this blog post:

(in general) Source Code does not fully describe a program; it merely describes “SOME OF the internals of a program”

To create an actual usable program you need something like (off the top of my head, I think this is correct?):

  1. Source code
  2. Programming language definition
  3. Operating System (OS)
  4. Compiler program
  5. Programming language libraries
  6. OS-specific Linker
  7. Launch wrapper

…where the final output of step 7 is collectively known as “an application” (or just “an app”).

The folks who used to argue that all code should be distributed as Source tended to use arguments about the “value” of Source Code, as if it were a valid substitute for all the above items. It was never a *substitute*, although getting the output AND the Source would have been better than the tradition of only receiving the output.

Of course, even better would have been: receive all 7 items above, plus the app itself (i.e. everything necessary to make the app, and the app too).

A brighter future

And so … if you’ve ever wondered what’s inside package-management systems … take that list of 7 items above, and go revisit your favourite system of choice.

And … bear in mind that all the above things can have complex version dependencies – e.g. “only works with library A, version greater than 2.3, but less than 2.6, or with library Aa version 7.99 exactly”. A package-manager has rather a lot to handle…

Categories
fixing your desktop Google? Doh!

Gmail NEVER LOADS in Firefox? Cookies cleared, still get a blank screen? Try this…

After trying 4 or 5 things from this several-years-old page on Firefox’s support forums, I finally hit upon one that worked:

“For the heck of it tonight I clicked on Gmail in my calendar and it finally went to my gmail inbox.”

In my case, I just went to Google Groups, and clicked the Gmail link from the top navbar. Lo and behold – it works!

(this is after a week of having no way to access Gmail from my main web browser)

Why?

Looking at the problems other people have had, my guess is that Google’s code for running Gmail has some illegal (i.e. breaks-the-standard) bits in there that try to get around the browser standards by doing silly things with caching. Some of those are … fragile, perhaps … and tend to break easily. Over time, Google has added more and more unnecessary “features” to that code (e.g. I often have to wait for “connecting to google chat”, even though I have google chat permanently disabled and will never EVER use it) … there’s a *lot* of code in there these days; lots that could go wrong!

Normally, a Refresh of a page would fix this – that’s how the WWW was designed in the first place, as a core feature – but Google’s (my guess) playing so fast-and-loose that they’re *also* (deliberately? or accidentally?) bypassing the refresh. I can imagine several well-meaning reasons they might do that, but in the end I’d rather they stuck to the standards, instead of creating these “breaks permanently” problems for people.

And, of course, there’s no Google support for this problem. Once it strikes you, GOOD LUCK! (you’ll need it)

Categories
fixing your desktop

iMac crashed; wouldn’t turn on! (black screen)

At 27″, it’s too big to “simply” take into the Apple store. In desperation, I followed this support article from Apple that’s for older iMacs and officially no longer supported.

Which is a pity, because in typical Apple fashion, they’ve deprecated an article that’s still accurate and useful. Following the steps in that article (copy/pasted below in case Apple ever deletes the webpage – something they’ve a habit of doing, sadly), my fans roared to life and the iMac REVIVED!

Resetting the SMU can resolve some computer issues such as not starting up, not displaying video, sleep issues, fan noise issues, and so on. If your computer still exhibits these types of issues even after you’ve restarted the computer, try resetting the SMU. To reset the SMU on one of these iMacs:

1. Turn off the computer by choosing Shut Down from the Apple menu, or by holding the power button until the computer turns off.

[NOTE — in my case, OS X had crashed completely, so there was no “turning off” to do – I had to yank the power cable out, and after that, it wouldn’t switch on]

2. Unplug all cables from the computer, including the power cord.
3. Wait 10 seconds.
4. Plug in the power cord while simultaneously pressing and holding the power button on the back of the computer.
5. Let go of the power button.
6. Press the power button once more to start up your iMac.

I then had to repeat the process – but with a *fifteen second* wait, as per the support page that seems to supersede the article above – to get the fans to shut up.

Categories
games industry games publishing

Jon Blow: “almost no certification process for iOS”. LOLz.

Jon just published an interesting letter about the current state of cert processes for game consoles / platforms.

There are some real problems with certification today. Unfortunately, Jon’s post doesn’t really touch upon them, and seems to go instead after the IMHO untrue and unhelpful claim that iOS is better for having “almost no certification”. No, really:

“The certification processes of all these platform holders were based on the idea that all these steps they test are absolutely necessary for software to run robustly, and that software robustness is super-important for the health of their platform and its perception by customers.

But, look at iOS. There is almost no certification process for iOS, so by the Microsoft/Sony/Nintendo theory, the apps should be crashing all the time, everyone should think of iOS as sucky, etc. But in fact this is not what is happening. There is no public outcry for more testing and robustness of iOS software.”

Personally, I’d say that iOS has a certification process of comparable length to console cert, given the comparative size/complexity/many years of development in the apps – and for a couple of years it was considerably nastier than TRCs, because *it had no documentation*.

(my first hand experience: I created and maintained a large site that documented the app rejections by Apple, and interviewed the developers on what got rejected, why, what happened after the rejection, etc.)

Even with the nightmare of never knowing what the rules were, there was a positive net effect: many apps were forced to resubmit until they hit a minimum barrier of quality. Again – I know this for a fact, I had many conversations and interviews with developers about this, often getting to read their conversations with Apple. Even today, there are many apps being rejected every week for failing on basic quality / functionality / crashing / etc.

For me, that rather undermines his argument. Which is a pity, because there ARE major problems with cert – on all platforms, Apple included – and we should be focussing on them. But it’s not the idea of cert that’s at fault: it’s either the choice of items (e.g. Sony, where some of the rules come from the PlayStation 1 era and are barely relevant today), or it’s the poor implementation of the process (e.g. Apple until 2011), or it’s the big chunks of stuff that SHOULD be part of cert but aren’t (…everyone…).

Categories
iphone startup advice web 2.0

Rise and fall of Microsoft’s hegemony over Apple

Building and Dismantling the Windows Advantage – a great article, telling the story in a mix of words and graphs.

“The consequences are dire for Microsoft. The wiping out of any platform advantage around Windows will render it vulnerable to direct competition. This is not something it had to worry about before. Windows will have to compete not only for users, but for developer talent, investment by enterprises and the implicit goodwill it has had for more than a decade.”

Categories
android community entrepreneurship facebook Google? Doh! marketing and PR startup advice

Google’s Strengths & Weaknesses in 2012

In the past, I’ve had terrible advice from brilliant people. The best way to avoid that is to be careful to research the brilliant person and tailor your questions to avoid their weaknesses.

Tomorrow I’ll be meeting a bunch of people at Google London’s open day. I started by writing down a list of known strengths/weaknesses, based on my knowledge and experience of the company and the people. Earlier this year I had some in-depth meetings with Facebook, which gave me a fresh perspective on the similarities and differences. I think the list itself is interesting – modulo: it’s only my personal impressions:

google strengths

[comments in brackets to clarify some non-obvious points for anyone reading this]

  • innovating on the Web
  • bringing native tech to Web and making it as good as native
  • software development
  • world’s biggest/most popular search engine
  • …? focus on curation ?… [Page ranking etc is subtle curation]
  • tech brand associated with “quality”
  • massive scale advertising
  • algorithms for automating heuristic tasks (imperfect, vague domains)
  • enormous scale data manipulation
  • throwing hardware at impossible problems to make them possible [Street View]

google weaknesses

  • community [in general, but also specifically: Google Groups]
  • consumer marketing [many Googlers have said “we don’t need to; the brand is enough”]
  • building products that people want, rather than products Google staff enjoy [Wave, Buzz, Google Voice]
  • understanding consumers [Android]

Categories
MMOG development

Dropbox tech scaling – some great, some not-so-great

“I was in charge of scaling Dropbox … from roughly 4,000 to 40,000,000 users. … Here are some suggestions on scaling”

The first section is a WTF – the guy advocates deliberately over-taxing your servers, without a good explanation. I’ve got some guesses at why they did it – but in the general case I’d say: never do this. Only do it when it’s obvious, because you have a specific reason to do so (and you really know what you’re doing).

The rest is a lot clearer, good advice there.

Also, IMHO worth reading for this part alone:

“Let’s say you’re trying to debug something in your webserver, and you want to know if maybe there’s been a spike of activity recently and all you have are the logs. Having graphing for that webserver would be great,

[ADAM: but … often you don’t have the right set of graphs set up, and it takes a while to do that – no use if the server is in trouble *right now*]

Apr 8 2012 14:33:59 POST …
Apr 8 2012 14:34:00 GET …
Apr 8 2012 14:34:00 GET …
Apr 8 2012 14:34:01 POST …

You could use your shell like this:

cut -d' ' -f1-4 log.txt | xargs -L1 -I_ date +%s -d_ | uniq -c | (echo "plot '-' using 2:1 with lines"; cat) | gnuplot

Boom! Very quickly you have a nice graph of what’s going on, and you can tailor it easily (select only one URL, certain time frames, change to a histogram, etc.).”