2010-12-22

Look Ma! I've made an earth!

Operation Survival needs a three-dimensional view of the whole globe. I have managed to get a first prototype working: try it here.

Lots of learning was involved before I got this far:
- evaluating and deciding on a Flash 3D engine (Away3D)
- efficiently implementing and caching filters for large images
- perspective projections. Note for next time: after projecting a sphere to 2D you do _not_ see half of it, but significantly less
- the day/night line is calculated and updated in real time (let the program run for a while to see it). Some funky math and astronomy was required to get this just right
- quaternions, for smoothly rotating the camera around the globe without running into gimbal lock
- lots of ActionScript idiosyncrasies; this is my first Flash program after all

I'm quite satisfied with the results. Now I need to add stars to the background and figure out how to best draw flight paths and markers on the surface of the sphere.

2010-12-05

ActionScript pie menu

Since the Flash player hijacks the second mouse button for its own context menu, and there are rumored to be some obscure, barely relevant platforms (like, ahem, Apple) that don't offer one to begin with, I needed to come up with another solution for context menus. I came up with pie menus that open when you click and hold the left mouse button. You choose a button by moving over it and releasing the mouse.

It features smooth tweening transitions and fades, a subtle 3D effect, help text when you hover over an option for a second, and an arbitrary number of buttons and icons. The final version will likely also have sounds.

This is all plumbing work for Operation Survival for which I want an intuitive, non-intrusive user interface.

Please try it and tell me what you think!

2010-09-16

Performance Rant (part 2)

Last time I argued that performance is still as relevant as ever. This time I'm gonna point out some common coding mistakes to watch out for.


First and foremost: know your data structures. This should be trivial computer science 101 material, but I'm amazed at how often people get this wrong. Understand big-O notation for the worst case runtime and memory behaviour of your algorithms. Understand why worst case isn't all that matters and that the little omitted "constant factor" may just make an O(n log(n)) algorithm a better choice than an O(n) one at times. Be aware of the tradeoffs in your particular situation and make a conscious decision. Be wary of programming languages and libraries that blur the distinction and consider it an unimportant implementation detail. There is no such thing as an "ArrayList", and suggesting otherwise should be a criminal offence (yes, I'm looking at you, Java). In the same vein, there is a huge difference between an associative container and a plain array. Be very suspicious of languages that treat one like the other and allow indexing arrays with strings (PHP...). And if you do decide to use associative containers (i.e. maps or dictionaries), know whether you are dealing with a hash based or a tree based implementation. That's a non-trivial difference with potentially huge consequences for your code!

Choose the proper tool for the job. Some programming languages are good for one-off scripts or quick prototypes. Be careful trying to morph a proof-of-concept prototype developed in a fast-to-develop-in but slow-to-execute language into the final product. There is a real and inherent speed difference between programming languages, and it's not a trivial one either: it can be up to 100x. Optimizing a slow program in a slow language is possible, but it is like making a top athlete run a marathon in rubber boots. He'll probably still arrive, but why not start in proper running shoes to begin with?

Choose the proper default case when designing your interfaces and decomposing your problem into functions. The common mistake here is to do one matrix-vector multiplication, draw one sprite, measure one line of text, perform one ray-polygon intersection. But is that the most common usage scenario of your functions? Really? Isn't it rather that you usually have lots of similar items, and n=1 is the exception rather than the rule? Transform all vertex positions by the view matrix, draw all enemy UFOs, measure the whole text box content, hit test against the full environment. If you design your interfaces to work in batches you'll increase locality, minimize graphics hardware state changes and reduce redundant, repeated setups and teardowns. Too many libraries get this wrong.

I've mentioned this in the previous paragraph, but it merits repeating: locality is really important. Keep related data close together in physical memory. Compared to the speed at which modern CPUs operate, main memory accesses are glacially slow. You want to take maximum advantage of the CPU's cache hierarchy. The best way to really screw up the cache is to scatter memory accesses all over the place and intermix write and read accesses. This has some interesting consequences. First, simple (sorted) arrays often beat more complex data structures, although theory and big-O analysis would indicate otherwise. Second, classical object oriented object hierarchies are your enemy. Traversing down a nested tree of objects in a loop to call render() on the leaf nodes is really expensive. Keep your hierarchies flat and put related objects together.

I've railed against garbage collection before, but here is another reason why you should avoid it: if all your objects have to reside on an automatically managed heap, you are completely at the mercy of your runtime environment to make the right decisions about which objects to put where.

I could go on for a while, but I think this is a big enough wall of text for one post. Fear the next installment!

PS: Do follow the links - they point to articles, talks and slides of people far more knowledgeable and eloquent on the topic than I can ever hope to be.

2010-09-15

Performance Rant

I recently got on my performance soapbox again. I don't like repeating myself endlessly, and my therapist advised me to write it all down to get it off my chest. So here goes:


First of all: with today's computers performing billions (as in 9 zeros, 1,000,000,000) of operations per second, does performance even matter? The answer must be a resounding YES, for a multitude of reasons.

The term "computer" today covers a huge range of devices with very different performance characteristics and capabilities. Think IBM mainframes, high end server hardware, office desktops, laptops, cell phones, GPS devices, ... Just because your high end Alienware gaming (ahem, "development") rig doesn't break a sweat doesn't mean others won't.

The free lunch is over. Programs, once written, will likely not get faster automatically with the advent of new hardware. It used to be that each new generation of CPUs would boost the performance of existing software. Not any longer. We seem to be approaching some physical limits of how much further we can push CPUs. So we are sidestepping the issue and investing in parallel architectures. But software must be explicitly designed to take advantage of this, which is not easy. Cars aren't getting any faster; we are building wider highways instead.

Problem sizes keep growing. We keep demanding more of our machines. Text chat isn't enough anymore, it must be a live video conference with 10 participants. Games are approaching Hollywood blockbuster movie special effects quality, complete with voice acting, motion capture and serious camera work. A simple Word document (yeah, yeah, I mean TeX of course ;-) ) for a homework assignment would have made a professional typesetter proud a mere generation ago. Megabytes are trivial; we are counting gigabytes today. All the way up to "google-scale" problems with "FuckYeahBytes" and computing power measured in acres.

User expectations have evolved. A keyboard and console isn't enough any more. We need animated, immediate feedback, graphical user interfaces. Voice recognition. Gesture recognition. Touch devices. Since there is such an endless plethora of applications to choose from our tolerance for lag is shrinking. If an app annoys us we simply switch to another.

Mobile devices and battery power. Inefficient code uses more CPU cycles than necessary, draining batteries. Desktop computers are getting smarter, dynamically reducing CPU energy usage based on current load levels. Efficient code saves energy, saves the planet: "green" computing!

Concurrency. Users are increasingly accustomed to a multitasking environment. Type your homework, watch some YouTube, chat with your buddies, listen to music, download some torrents and tab in and out of WoW, all at the same time. Or have a dozen browser tabs open, each containing Flash and media heavy interactive content. What about running multiple operating systems in virtual machines? Even multiple GB of RAM can quickly be exhausted in such a scenario. Applications need to be good citizens and not assume ownership of all of a machine's resources.

Servers. The modern web experience requires massive server infrastructure behind the scenes. Efficient server code has a dramatic effect on the bottom line. If a performance improvement of 10% means you have just saved yourself 20,000 additional servers, that's a huge win in acquisition and maintenance costs. Take Facebook as an example, where choosing C++ over PHP could have provided just such a boost. And indeed, they have built a custom PHP-to-C++ translation engine (HipHop).

So yes, performance is as relevant today as it has ever been in computing. And it gets honored by the marketplace as well. Slick, smooth user interfaces are appreciated - witness the success of Apple or Win7 (and that is no accident either, but a very conscious effort).

Next up: my performance pet peeves.

2010-09-11

Summits

- 5895m Kilimanjaro, 10-Feb-2012
- 4153m Bishorn, 21-Sep-2013
- 4010m Lagginhorn, 30-Aug-2011
- 3206m Jegihorn, 29-Aug-2011
- 3162m Grosser Diamantstock, 31-Aug-2013
- 3124m Fanellhorn, 25-Aug-2012
- 3080m Zwächten, 15-Jul-2013
- 3022m Faltschonhorn, 26-Aug-2012
- 2928m Uri Rotstock, 27-Nov-2011
- 2904m Vrenelisgärtli, 25-Sep-2011
- 2901m Ruchen, 25-Sep-2011
- 2802m Bös Fulen, 24-Sep-2011
- 2794m Gross Chärpf, 17-Sep-2011
- 2717m Ortstock, 11-Aug-2012
- 2709m Chrüzlistock, 9-Mar-2013
- 2515m Chaiserstock, 2-Jul-2011
- 2502m Säntis, 6-Nov-2011
- 2491m Fulen, 2-Jul-2011
- 2478m Vanatsch, 10-Mar-2013
- 2461m Rossstock, 2-Jul-2011
- 2448m Girenspitz, 6-Nov-2011
- 2436m Gufelstock, 23-Jun-2012
- 2404m Brisen, 23-Jul-2011
- 2385m Schwarzstöckli, 23-Jun-2012
- 2319m Silberen, 1-Aug-2013
- 2299m Schilt, 23-Jun-2012
- 2283m Rautispitz, 22-Oct-2011
- 2282m Wiggis, 22-Oct-2011
- 2279m Brisi, 27-Jan-2013
- 2256m Gumenstock (Rautispitz), 5-Jun-2011
- 2128m Pilatus, 29-May-2011
- 2128m Tomlishorn, 5-Sep-2013
- 2124m Fronalpstock, 9-Jul-2011
- 2097m Zindlenspitz, 10-Sep-2012
- 2082m Plattenberg, 4-Dec-2011
- 2076m Widderfeld, 5-Sep-2013
- 2075m Rossalpelispitz, 10-Sep-2012
- 2043m Schiberg, 4-Dec-2011
- 1965m Biet, 3-Mar-2013
- 1950m Speer, 16-Jun-2013
- 1935m Chlingenstock, 9-Sep-2012
- 1922m Stäfeliflue, 5-Sep-2013
- 1921m Fronalpstock, 9-Sep-2012
- 1917m Mittaggüpfi/Gnepfstein, 5-Sep-2013
- 1907m Klimsenhorn, 5-Sep-2013
- 1904m Huser Stock, 9-Sep-2012
- 1899m Mittlerspitz, 7-Apr-2013
- 1898m Grosser Mythen, 2-Sep-2011
- 1898m Stanserhorn, 17-Mar-2013
- 1865m Federispitz, 3-Jul-2011
- 1818m Gmeinenwishöchi, 10-Nov-2012
- 1816m Neuenalpspitz, 10-Nov-2012
- 1802m Blaue Tosse, 5-Sep-2013
- 1797m Rigi, 28-May-2011
- 1787m Riedmattstock, 3-Feb-2013
- 1782m Stockberg, 4-Nov-2012
- 1777m Brüggler, 27-May-2012
- 1764m Plättlispitz, 3-Jul-2011
- 1736m Selispitz, 3-Feb-2013
- 1703m Chüemettler, 16-Jun-2013
- 1698m Rigi Hochflue, 4-Jun-2011
- 1600m Stock, 10-Feb-2013
- 1580m Wildspitz, 20-Jan-2013
- 1566m Näbikenfirst, 9-Dec-2012
- 1559m Grossbrechenstock, 9-Dec-2012
- 1531m Regenegg, 10-Mar-2012
- 1479m Nüsselstock, 9-Dec-2012