19 3 / 2014



Ahhh… the days are getting longer, the weather is warming up and birds are chirping once again. This can only mean one thing: ’tis the season for game conferences!

Hot on the heels of this week’s GDC in San Francisco, Beijing’s own Global Mobile Game Congress (GMGC) will kick off March 26-27 and bring together some of the biggest names in mobile gaming.

Maxim de Wit, Director of International at Beijing-based GMGC, took time this week to shed some light on what’s happening in mobile gaming.


Connect with Maxim

1 - What are the hottest topics in mobile gaming this year?

The effect that more widespread adoption of 4G will have on the mobile games market is definitely front-of-mind. 4G is especially big for those studios itching to roll out mobile versions of multiplayer games that require higher connection speeds and access to online game servers from handheld devices.

Another hot topic is smart TVs. Connected TVs represent a whole new opportunity for game developers to invade the living room. They’re not ‘mobile’ devices in the strictest sense, but they’ll definitely occupy a seat in the device mix as the model of a persistent universal account across different devices matures.

King has led the way in cross-platform persistence and reaped the benefits, for example by allowing Candy Crush players to log in, play and auto-save their progress on iOS, and then pick up right where they left off on Facebook.

2 - How does Apple’s $32 million settlement of the IAP lawsuit affect game studios and monetization strategy?

The settlement really affects the app stores more than the game studios, as they’re the party responsible for providing secure and foolproof transaction technology that ensures children cannot make in-app purchases, or conduct other paid activity, without their parents’ approval.

Last year, Apple set aside $100 million for in-app purchase refunds, so even when you add in the January settlement it’s really a fraction of the App Store’s $10 billion in 2013 sales. The whole story has brought important issues surrounding mobile transactions into the public sphere, but we really don’t see it having a significant effect on the in-app purchase business model.

3 - Everyone’s talking about WhatsApp, LINE, WeChat, Viber… How is the rise of messaging apps changing the mobile gaming landscape?

One word: Discovery. Messaging apps like LINE, Kakao Talk and WeChat are changing the way new games get marketed and discovered. They’re essentially creating a surging new acquisition channel for mobile game studios (and really any mobile app businesses).

The social nature of messaging apps is really exciting, because they have the viral potential to push games to the top of the charts and enhance the experience of games with strong social features. Here in China, two great early examples of games that have leveraged WeChat to this effect are the endless runner Gunz Dash (天天酷跑), and the Guitar-Hero-like Jiezou Dashi (节奏大师).

4 - This is the second year for GMGC. What will be different this year from 2013?

In 2013, GMGC took a lead in uniting the domestic Chinese mobile gaming ecosystem by organizing events, summits and dinners to stimulate networking, collaboration and discussions on the future of the industry. We’ve learned a lot and it’s been really rewarding to play our part in the fantastic community of game developers, artists, entrepreneurs, investors and executives here in China.

This year, our focus is to expand our platform and really serve as a bridge for foreign companies to explore China, one of the world’s most exciting gaming markets, by helping them find the right local partners. We’re also preparing to organize events in Southeast Asia for companies to connect and explore opportunities in those burgeoning markets.

14 3 / 2014



Ikura Media has built a culture of product development that is driven by data and experimentation. Instead of relying on intuition and decades of game development experience, founder Stephen Price knows that the truth lies in the numbers.

This week, he took a break to talk to us about how he developed SplitPong - a modern take on the retro bat-and-ball classic - in order to maximize his ability to optimize the game based on user data.

Founder of Ikura Media, Stephen Price

For so many indie mobile app developers, the #1 challenge is monetization. How have you solved this problem with SplitPong?

I’m not sure that we’ve necessarily ‘solved’ the monetization challenge, but we’re certainly taking a proactive approach to understanding what works better in terms of driving revenue.

SplitPong offers in-app purchases for people to buy premium themes for the game. In order to maximize the rate at which free users become customers, we’re A/B testing different colors and copy on the premium theme purchase page of the app.

SplitPong for iOS runs A/B testing to optimize user engagement and in-app purchases.

We are also evaluating advertising as a monetization strategy and are looking at some of the mobile ad networks to see which is best suited to our game and market. We may even A/B test a couple of them and see which is driving more ad dollars!

What are the key metrics SplitPong is optimizing through A/B testing? How were they selected?

We were interested in leveraging A/B testing to increase 1) user engagement, 2) in-app purchases and 3) user retention. So recency, length of game, number of points scored per game, purchase conversion rate, and session length are the specific metrics we’re tracking.

We chose these because they are a direct measure of the health of our business. Also, they’re relatively straightforward to measure, but the insights they give are extraordinarily powerful.

So what variables are you testing?

We’re running two separate experiments in SplitPong: one on game physics and the other on the in-app shopfront. In the first experiment, we’re testing paddle size, ball speed, and the speed of the computer. In the second, we’re basically testing different copy and button colors on the in-app purchase page.
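
To make that concrete, here is a minimal sketch in Swift of how variations like these might be defined and assigned to devices. The structure, names and values are hypothetical, for illustration only; this is not SplitPong’s actual code or any particular SDK’s API.

import Foundation

// Hypothetical physics variations under test (values invented for illustration)
struct PhysicsVariation {
    let paddleScale: Double  // relative paddle size
    let ballSpeed: Double    // points per second
    let aiSpeed: Double      // speed of the computer opponent
}

let physicsVariations = [
    PhysicsVariation(paddleScale: 1.0, ballSpeed: 300, aiSpeed: 200), // control
    PhysicsVariation(paddleScale: 0.8, ballSpeed: 360, aiSpeed: 240), // harder
    PhysicsVariation(paddleScale: 1.2, ballSpeed: 240, aiSpeed: 160)  // easier
]

// djb2-style hash: stable across app launches, unlike String.hashValue
func stableHash(_ s: String) -> UInt64 {
    var h: UInt64 = 5381
    for byte in s.utf8 { h = h &* 33 &+ UInt64(byte) }
    return h
}

// deviceID would be a persisted identifier in practice (e.g. stored in the keychain)
let deviceID = "ABCD-1234"
let physics = physicsVariations[Int(stableHash(deviceID) % UInt64(physicsVariations.count))]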

One variation of the in-app purchase page being tested in SplitPong.

So far, the results have led us to believe that making the game too hard or too easy adversely affects the frequency of play - the sweet spot is somewhere in the middle, where it’s just challenging enough to be interesting, but not so challenging that users can’t get better. We see this with a number of popular games, such as the recent craze around Flappy Bird - it’s a pretty tough game, but is very sticky because it seems easy to get better the more you play.

What are the opportunities for A/B testing in iOS?

The game-changing opportunity with A/B testing for mobile apps is that pretty much any variable can be experimented with to drive improvements in quantifiable user behavior.

So the possibilities abound: Inclusion/exclusion of a feature, UI layouts and UX workflows, colors, copy, pricing, frequencies of alerts or in-app messages. Any time you are presented with a tough decision that requires you to make an assumption about your users’ preferences or behavior, A/B testing can lead you to the right decision.

Whether you’re an experienced developer, product manager or marketer, it’s hard not to just go with your gut rather than take a dispassionate approach to your business. But we’ve seen that leaving emotions on the shelf and emphasizing the value of hard data is the way to go.

My advice to anyone building a mobile app: Anytime you choose some feature/UI/workflow because you believe it will get your users to do what you want them to do, take a small risk and test another choice on 10% of your user base. Many times you’ll be right and can roll A out to that 10% as well, with little harm done, and this small loss is more than compensated by the times that the data contradicts your gut and you discover a hidden boost to your business.
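
A minimal sketch of that 10% holdout in Swift, assuming you have a persisted user or device ID (the names here are hypothetical, not tied to any SDK):

import Foundation

// Stable djb2-style hash so the same user always lands in the same bucket
func stableHash(_ s: String) -> UInt64 {
    var h: UInt64 = 5381
    for byte in s.utf8 { h = h &* 33 &+ UInt64(byte) }
    return h
}

// Route roughly `percentage` percent of users to the experimental choice
func isInTestGroup(userID: String, percentage: UInt64 = 10) -> Bool {
    return stableHash(userID) % 100 < percentage
}

if isInTestGroup(userID: "user-12345") {
    // show variant B, the alternative you're taking a small risk on
} else {
    // show variant A, your gut's choice
}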

12 12 / 2013



While everyone else was recovering from food coma after Thanksgiving, the @WalmartLabs team was crunching numbers from the A/B tests they’d been running on their mCommerce apps deployed for Black Friday.

With the flurry of new A/B testing tools for mobile apps (Splitforce, Arise.io and Apptimize, to name a few) that have come to market with enterprise-level offerings in the past year, it’s hard to imagine the benefit to Walmart of developing their mobile A/B testing in-house.

Let’s assume it took an @WalmartLabs team of three engineers three weeks (9 man-weeks) to hack together initial mobile A/B testing for their app. At a low-ball estimate of $120,000 annual salary for a mobile engineer in the Bay Area, that’s an initial cost of approximately $20,000 (excluding costs of employment). Beyond the initial $20,000 cost of an in-house development effort, the team will need to spend an additional 6 weeks (18 man-weeks) porting the solution to native platforms like iOS and Android, and then about 3 man-weeks per month on ongoing maintenance of the libraries and crunching data from their A/B tests to determine statistical significance and winning variations. By this estimate, the cost to Walmart of developing a true mobile A/B testing practice in-house is around $150,000 annually.
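
Spelling out the arithmetic behind that figure, using the assumptions above:

$120,000 / 52 weeks ≈ $2,300 per man-week
Initial build: 9 man-weeks ≈ $20,700
Porting to iOS and Android: 18 man-weeks ≈ $41,400
Maintenance and analysis: 3 man-weeks/month, or 36 man-weeks/year ≈ $82,800
First-year total: roughly $145,000, or about $150,000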

And that doesn’t take into account the opportunity cost of having those engineers develop some other mobile business intelligence solution which can’t be pulled off the shelf from existing providers. The point here is that it’s inefficient to reinvent the wheel, and Walmart would benefit from using a third-party enterprise-level service even at a monthly cost of $10,000.

Technically, the main thing Walmart seems to have focused on is the size of the manifest – a pretty pointless optimization given the average speed of data connections these days. Even with a large manifest, existing solutions don’t add much to the download time for mobile shoppers in areas served by 3G, 4G or other high-speed wireless connections.

Specifically, it seems the team at Walmart has put a lot of work into identifying A/B test cohorts as efficiently as possible. While it does make sense to tie an existing GUID to a cohort identifier, it’s likely that this approach over-complicates the process of applying an experimental variation. A conceivable workflow for this type of setup looks something like this:
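
In Swift terms, and with hypothetical endpoint and parameter names (we’re inferring this from the outside, not reading Walmart’s code), that flow would be roughly:

import Foundation

// Server-tied cohort assignment: the client asks the server which cohort
// its GUID maps to and receives that cohort's variation payload.
// One network roundtrip per experiment, per session.
// (Hypothetical endpoint; percent-encoding omitted for brevity.)
func applyVariation(guid: String, experiment: String,
                    completion: @escaping (Data?) -> Void) {
    let url = URL(string:
        "https://example.com/cohort?guid=\(guid)&experiment=\(experiment)")!
    URLSession.shared.dataTask(with: url) { data, _, _ in
        // data holds the variation assigned to this cohort; every
        // additional experiment in the app repeats this request
        completion(data)
    }.resume()
}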

Instead, it would be easier to fetch all the data and select a cohort on the client-side in order to apply a variation. Something like this:
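
Again as a hypothetical sketch, the client-side alternative would download one manifest and bucket locally:

import Foundation

// Client-side cohort selection: fetch the full experiment manifest once,
// then select a variation locally for every experiment in the app.
// (Hypothetical endpoint and field names.)
struct Experiment: Decodable {
    let name: String
    let variations: [String]  // one payload per variation
}

func stableHash(_ s: String) -> UInt64 {
    var h: UInt64 = 5381
    for byte in s.utf8 { h = h &* 33 &+ UInt64(byte) }
    return h
}

// The single roundtrip, shared by all experiments
func fetchManifest(completion: @escaping ([Experiment]) -> Void) {
    let url = URL(string: "https://example.com/experiments")!
    URLSession.shared.dataTask(with: url) { data, _, _ in
        let experiments = (try? JSONDecoder()
            .decode([Experiment].self, from: data ?? Data())) ?? []
        completion(experiments)
    }.resume()
}

// Apply a variation for one experiment, with no further network calls
func variation(for experiment: Experiment, deviceID: String) -> String? {
    guard !experiment.variations.isEmpty else { return nil }
    let index = Int(stableHash(deviceID + experiment.name)
        % UInt64(experiment.variations.count))
    return experiment.variations[index]
}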

In the very simple case when there is only one experiment in the app, the amount of data downloaded would be the same and both approaches are valid. But for more complex cases where multiple experiments are being run throughout the app, the latter approach will still only require one roundtrip and result in a better performing app that does not require opening a new connection every time a variation needs to be applied.

Generally, specialized third-party solutions for mobile A/B testing like those mentioned earlier have been designed to tackle the complex issues surrounding A/B testing mobile apps, like dynamic updating of in-app content and targeting variations to specific languages or geographies. Even for the enterprise, developing and maintaining proprietary analytics and A/B testing tools that already exist in the market results in wasteful overhead that could be deployed to making the business better in other ways.

09 12 / 2013



We’ve all heard about the crazy monetization success of mobile games like Candy Crush, Clash of Clans or Puzzle & Dragons. But how did they get there? Did the game designers and product teams behind these blockbusters simply strike the UX jackpot? Or was it all part of a calculated, data-driven strategy met with timely execution?

With the advent of web-connected software and social games that regularly talk to a server, it has become possible to experiment with variations of in-app content and track how those differences impact user behavior. Another term to describe this: A/B testing for mobile apps.

And with the increasing popularity of free-to-play games and mobile commerce, the importance of optimizing not just individual design iterations but the overall experience is quickly becoming clear.


A/B testing mobile apps is great so long as you know what to measure.

Here are ten mobile metrics – eight core measures, plus two A/B testing-specific measures at the end – that will help both gaming and mobile commerce apps guide their optimization strategy:

1 – In-App Purchase (IAP) Conversion Rate. Clearly, keeping tabs on the rate at which free users become paying customers should be the primary goal for any app which monetizes through IAP.

2 – Unique Conversions. To get a clearer picture, you should be tracking not only the rate at which free users are converting but also the volume of conversions that occur.

3 – Total Sessions. This refers to the number of times your app was run, and can be a key indicator of stickiness when combined with other measures like session frequency or session recency.

4 – Unique Devices. Because a single device may run multiple sessions in a given time period, tracking total sessions doesn’t give you a full picture of how many unique devices are running your app – which should be measured separately.

5 – Average IAP Value. Optimizing the in-app ‘upsell’ and increasing the amount of revenue earned per in-app purchase can be a key revenue driver that deserves regular monitoring.

6 – Average Revenue Per User (ARPU). Understanding how much revenue your app earns from one user on average during a specific period of time (weekly or monthly) will help gauge the health of your app’s monetization strategy relative to costs. ARPU is also a building block in the calculation of customer lifetime value (LTV) – a key metric for any business.

7 – Average Session Time. This refers to the average amount of time users spend in your app. In the context of optimization, how you measure this metric will depend on your app. Longer session times may be a good optimization target for a social game that is looking to increase user engagement, whereas shorter session times may be better for a productivity app that is seeking to help you get something done faster – like sending email or scheduling a calendar event.

8 – User Retention Rate. If you’ve paid to acquire those new installs, you’ll know it’s important to optimize the rate at which you keep them active. Retention can be calculated using various combinations of daily, weekly or monthly active users (DAU/WAU/MAU) – but a good baseline is to look at the ratio of DAU to total installs (see the quick worked example below).
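
As a quick worked example of two of these, with made-up numbers:

import Foundation

// Hypothetical monthly numbers, purely for illustration
let monthlyRevenue = 12_500.0
let monthlyActiveUsers = 48_000.0
let dailyActiveUsers = 6_200.0
let totalInstalls = 48_000.0

// 6) ARPU: revenue in the period divided by active users in the period
let arpu = monthlyRevenue / monthlyActiveUsers          // ≈ $0.26 per user

// 8) Baseline retention: DAU as a share of total installs
let retention = dailyActiveUsers / totalInstalls * 100  // ≈ 12.9%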

If you’re looking to improve these metrics through A/B testing your app, you should be aware of a couple additional measures that are specific to this optimization strategy.

Observed Improvement is basically the percentage change in the metric you’re measuring. It can be calculated as:

(New Value – Old Value) / Old Value × 100

Chance to Beat Original is a slightly more complex calculation measuring the statistical significance of the observed improvement. It models the distribution of the difference in observed values to calculate the level of confidence that the improvement is real, rather than random noise that will wash out over time.
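
One common way to compute it is a normal approximation for the difference between two observed conversion rates; this sketch illustrates the idea, though individual tools may use more sophisticated methods:

import Foundation

// Chance that variation B beats original A, using a normal approximation
// of the difference between two observed conversion rates.
func chanceToBeatOriginal(conversionsA: Int, trialsA: Int,
                          conversionsB: Int, trialsB: Int) -> Double {
    let pA = Double(conversionsA) / Double(trialsA)
    let pB = Double(conversionsB) / Double(trialsB)
    let varA = pA * (1 - pA) / Double(trialsA)
    let varB = pB * (1 - pB) / Double(trialsB)
    let z = (pB - pA) / sqrt(varA + varB)  // z-score of the difference
    return 0.5 * (1 + erf(z / sqrt(2)))    // standard normal CDF
}

// e.g. 120/1000 vs 150/1000 conversions yields ≈ 0.975:
// about a 97.5% chance that B genuinely beats A
let confidence = chanceToBeatOriginal(conversionsA: 120, trialsA: 1000,
                                      conversionsB: 150, trialsB: 1000)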

23 10 / 2013



How many gas stations are there in the United States? Why are manhole covers circular? What’s 13 multiplied by 14? What environmental factors should we take note of when experimenting with new product features?

When I tell people that these were the questions I received at an interview for my first job out of college in 2009, they are less amused by the questions themselves than by the fact that they were not part of the standardized and time-tested recruitment process of a multinational corporation or strategy consulting firm.

Far from a Microsoft or McKinsey recruiter, those questions came from the COO of a small but successful Montreal-based startup called iReel. I soon learned that the majority of my new colleagues had been imported from one of the most trafficked adult video tube sites in the world – no wonder they were so damn good at SEO!

Beyond SEO, the degree to which we leveraged data in every part of the business – from traffic acquisition to conversion rate optimization – was light-years ahead of almost every digital marketing campaign that I’ve seen in practice since. Having never even looked at a Google Analytics report before my first day on the job at iReel, I thought it would be helpful to share how I was trained in a start-up environment to become fluent in the quantitative methods that make digital campaigns and businesses work better.

During the first two weeks of work, three other new hires and I (the new ‘Conversion Team’) were given a crash course in the basic algebra of digital marketing. Morning exercises like this became routine:

[Image: a sample morning exercise in digital marketing math]
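
To give a flavor of the genre (numbers invented for illustration, not iReel’s actual figures): an affiliate sends you 20,000 clicks; 2.5% of visitors start a trial, and 30% of trials convert to a $29.95/month subscription. What’s your earnings-per-click, and the most you can pay per trial while staying profitable in month one?

20,000 clicks × 2.5% = 500 trials
500 trials × 30% = 150 subscribers
150 × $29.95 ≈ $4,493 in first-month revenue
EPC ≈ $4,493 / 20,000 ≈ $0.22 per click
Break-even CPA ≈ $4,493 / 500 ≈ $8.99 per trial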

Within the first month, we were introduced to the concept of A/B testing and conversion rate optimization (CRO). In the offline world, a business’ conversion rate is referred to as sell-through, and is often the single highest leverage point in a commerce-based business. Running our first A/B and multivariate tests forced our newly formed Conversion Team to recognize the significance of user data in driving product and design decisions.

Here’s why: our monthly bonus was tied to the percentage increase that we could affect in conversion rates.

‘Fantastic!’ we thought when planning our first experiment,

‘Our landing page design right now is really ugly, and the copy is not even grammatically correct’ we told each other,

‘We are going to design a beautiful new header banner, clean up this copy, and get a huge bonus at the end of the month’…

Our instincts underperformed by over 20%. Needless to say, we had a disappointingly bonus-less first month.

But in the months to come, we built up our CRO practice to a level on par with or better than many of the world’s experts.

Before making our own testing tool for not only landing pages but entire conversion funnels, we leveraged Google’s free Website Optimizer tool (which has since been rolled into Google Analytics as Content Experiments) and ran dozens of multivariate landing page experiments to increase the rate at which visitors became customers by nearly 140%:


A multivariate experiment run using Google Website Optimizer, before it was integrated into Google Analytics.

Finally, we’d gotten that nice monthly bonus! Not only that, but the suspense of waiting to see whether our new copy or creative would outperform, and then crush the existing design was a great team-building experience. Big win.


iReel’s landing page after months of tweaking. Not the prettiest, but the most profitable!

Pretty quickly, we became enthusiastic about tweaking copy, banners, buttons and layouts. So much so that we had to start putting variations into a ‘testing pipeline’ because of concerns that we did not have enough traffic to determine a winning combination with confidence in a reasonable amount of time. The classic multivariate experimental design had become a bottleneck.

Our next move came from an area of business science that is often overlooked in the world of startups: Six Sigma. We decided to implement a fractional factorial experiment design, also known as a Taguchi design, that allowed us to extrapolate from significant data gathered on a subset of multivariate combinations to predict which combination of the full factorial set would perform best. Details on this method are really the subject of another post, but in the meantime I highly recommend reading more about Taguchi arrays and their application to marketing.
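
To make that concrete with a toy example: three on-page elements (headline, banner, button), each with two versions, give a full factorial of 2^3 = 8 combinations. A Taguchi L4 orthogonal array covers the main effects in just four runs:

Run   Headline   Banner   Button
1     A          A        A
2     A          B        B
3     B          A        B
4     B          B        A

Every pair of columns contains each pairing of levels exactly once, which is what lets you estimate each element’s individual effect from half the traffic a full factorial would need.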

Our intense use of data didn’t stop (or start) at conversions. At iReel we drove a big chunk of our traffic from affiliate marketing sites focusing on Hollywood movies and feature films. We even had a team of four people dedicated to recruiting new affiliate marketing partners and managing existing ones. Like our competitors, we paid affiliates on an industry-standard Cost Per Action (CPA) basis.

Unlike our competitors, we optimized our ability to convert affiliate impressions into clicks by focusing on click-through rates. We developed a dynamic display ad unit – which we termed ‘SmartAds’ – that would rotate different creatives from a predefined pool. Initially, each creative would be displayed in equal proportion: for a pool of 10 creatives, each would be shown 1 out of 10 times. The SmartAd would then track the click-through rate of the various creatives in rotation and feed that data into a genetic algorithm that would assign a new weight to each creative, for each affiliate.
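
The production system used a genetic algorithm, but the core reweighting idea can be sketched far more simply, with weights proportional to observed click-through rate (a simplified stand-in, not the actual SmartAds code):

import Foundation

// Weight each creative in proportion to its observed click-through rate.
// (The real SmartAds system used a genetic algorithm; this proportional
// scheme just illustrates the idea. Creative ids are assumed unique.)
struct Creative {
    let id: String
    var impressions: Int
    var clicks: Int
    var ctr: Double {
        impressions > 0 ? Double(clicks) / Double(impressions) : 0
    }
}

func updatedWeights(for creatives: [Creative]) -> [String: Double] {
    guard !creatives.isEmpty else { return [:] }
    let totalCTR = creatives.reduce(0.0) { $0 + $1.ctr }
    guard totalCTR > 0 else {
        // No clicks yet: keep the initial uniform rotation, 1 in N each
        let uniform = 1.0 / Double(creatives.count)
        return Dictionary(uniqueKeysWithValues: creatives.map { ($0.id, uniform) })
    }
    return Dictionary(uniqueKeysWithValues: creatives.map { ($0.id, $0.ctr / totalCTR) })
}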

Together, the data-driven and automated marketing systems we put in place enabled iReel to coax affiliates away from our competitors despite paying a lower CPA. All thanks to our clever use of data. 

A quick example:

A particularly aggressive competitor would offer an $8 CPA to attract affiliates away from our $4 CPA program.

But the competitor’s ability to drive clicks and convert those into actions was only 1/4 of ours.

Result: Affiliates made 2x more money per month sending us their traffic, despite getting only 50% of the CPA unit revenue.
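
To make the arithmetic explicit: per equal unit of affiliate traffic, our creatives drove 4 conversions for every 1 the competitor’s did, so an affiliate earned 4 × $4 = $16 with us versus 1 × $8 = $8 with them.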

When it comes to tech startups, Montreal doesn’t exactly get the kind of recognition that Silicon Valley, Boston or New York City do. But business at iReel was nonetheless driven by business sense, and there was no excuse to ignore data if it could be leveraged to make more money.

It was such an awesome experience that I decided to build my own company, Splitforce, to bring the power of A/B and multivariate testing to mobile apps and games. We’ve already made a difference on our early users’ bottom lines. I’ll post detailed examples of how in the coming weeks, so stay tuned on Facebook or Twitter.

How have you built or experienced a culture of data-driven business? Join the discussion on YC HackerNews.

18 10 / 2013



We had the pleasure of being invited to present at the Coca-Cola Mobile Innovation Workshop in Shanghai, China the last week of September. Zac took the floor alongside Jon Li of Vibin to present to a group of Coke execs and digital strategists from Ogilvy on the importance of data-driven decisions and emergence of mobile analytics software solutions.
 
Check out Splitforce’s presentation for the Coca-Cola Mobile Innovation Workshop.
 
Many thanks to Barney Loehnis at Ogilvy, as well as Juan Adlercreutz and Olivia Chang at Coca-Cola for their invitation to participate and interest in Splitforce. We hope to see you again next year!

01 7 / 2013



Last Wednesday, Splitforce was announced as the winner of the MobileMonday China Start-up Challenge 2013 at the GSMA Mobile Asia Expo in Shanghai. Splitforce had previously been selected as one of the top five start-ups from a group of 13 that presented during a qualifying round organized by MobileMonday Shanghai the previous Monday.

Splitforce Co-Founder Zac Aghion presented to an audience of over 50 expo attendees and a panel of four mobile industry judges, including William Bao Bean (Managing Director, Singtel Innov8) and Kunal Sinha (Chief Knowledge Officer, Ogilvy & Mather). Mr. Aghion explained the importance of A/B testing for conversion optimization and defined a market opportunity to provide A/B testing software for mobile apps. Fielding questions from the judges, Zac went on to explain the product in more detail – from the easily-integrated mobile SDK to test management and analytics dashboards, all hosted in a full-service cloud solution.

The victory was announced by Jonathan Li, a mobile industry veteran turned entrepreneur. Li noted that Splitforce was selected as the winner thanks to an innovative product offering A/B testing for mobile apps and its ability to provide real incremental value to the mobile app developer community. Following close behind in 2nd place was EmilyAPP, a series of mobile-format tour guides for outbound Chinese tourists.

Splitforce allows mobile application developers to test multiple versions of their app simultaneously and track which one is performing the best. 

Drive Revenue. Increase Engagement. Improve ROI. Learn more at www.Splitforce.com

24 6 / 2013



As a proud participant of Chinaccelerator’s Batch IV, Splitforce will be hitting the road Thursday to go meet and greet the masses in Beijing, Shanghai and Hangzhou with Geeks on a Train. A quick schedule for our groupies:

Thursday, June 13 – Overnight train around the Bohai Bay from Dalian to Beijing. No short supply of beers or Hubert. 

Friday, June 14 – Splitforce Founders Meetup. Strictly VIP.

Saturday, June 15 – 10x10 Beijing. Join us for pitches, drinks and shenanigans.

Monday, June 17 – Shanda Capital. Games. Money. Fun.

Tuesday, June 18 – Another midnight ride, this time down Souf to Shangers.

Wednesday, June 19/20 – Xinchejian. China’s trendiest hackerspace. People Squared. Shanghai Co-Working Space.

Friday, June 21 – One-day sprint out to Hangzhou to see all of Alibaba’s computers and stuff.

Saturday, June 22 – 10x10 Shanghai, making waves out in Yangpu District. Come check us out.

Feel free to give us a shout at hello@splitforce.com if you’re going to be crossing our paths or following us religiously. We’re always down to meet up for a coffee, beer or chat about mobile app optimization, A/B testing and the future.

LINKS

The masses - http://www.theepochtimes.com/n2/china-news/beijing-subway-seizes-up-as-traffic-taken-off-road-1460.html

10x10 Beijing - http://www.theepochtimes.com/n2/china-news/beijing-subway-seizes-up-as-traffic-taken-off-road-1460.html 

10x10 Shanghai - http://www.eventbrite.com/event/7030493395/efblike 

the future - http://www.technologyreview.com/news/515666/contact-lens-computer-like-google-glass-without-the-glasses/