Did Uber just run its first test to boost revenue in India?

Santanu Bhattacharya · May 23, 2017

Once a product guy, always a product guy! I just noticed what looks like a neat A/B test in the Uber app, one that made me wonder if Uber is experimenting with ways to boost revenue from its super users in India.

To the uninitiated, A/B testing is a form of experiment where two variants of a feature, called A and B, are shown to statistically similar groups to figure out which one performs better with the targeted users. It is often used for testing web and email marketing campaigns and, increasingly, mobile apps. Here is a simple example: for an otherwise identical design, changing a single word from “Buy” to “Try” more than doubled the conversion rate.

A/B Test example. Changing a single word more than doubles the conversion rate
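To make the mechanics concrete, here is a minimal sketch in Python of how such a test is typically evaluated, using a standard two-proportion z-test. The visitor counts and conversion numbers are made up to roughly match the “more than doubled” example above:

```python
import math

# Hypothetical numbers: 2,000 visitors per variant; "Buy" converts at 1.2%,
# "Try" at 2.6% (roughly the "more than doubled" claim above).
n_a, conv_a = 2000, 24   # variant A: "Buy"
n_b, conv_b = 2000, 52   # variant B: "Try"

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)                    # pooled rate
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se                                        # two-proportion z-test
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.4f}")
# With these numbers, p is well below 0.05, so the lift is unlikely to be noise.
```

The statistics matter for the same reason they will matter later in this post: a difference between groups only counts if it is bigger than what random variation alone would produce.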

My user profile:

Before I narrate Uber’s test, let me describe what type of Uber user I am. After moving from the San Francisco Bay Area to Delhi, India, in 2015, I decided not to buy a car. The economics were simple: a mid-grade car with everything included (lease payments, India’s expensive fuel, maintenance, etc.) costs anywhere from Rs 40,000–60,000 per month ($600–900). My monthly Uber spend is rarely over Rs 10,000 ($160), and the convenience of not driving in insane Indian traffic is a big bonus. While $160 may not sound like much in the USA, in India it probably puts me among the top 1–5% of Uber riders.

Why Uber is such a good choice in India


The test I am intrigued by:

Last Thursday, while booking my Uber, I noticed a new feature. Instead of showing me the price and asking me to confirm it before booking the car, I got the following prompt.

Uber prompt for booking a car without viewing fare

I intuitively knew this was an A/B test; in fact, I’ve been wondering for a while what kind of monetisation tests Uber would run as it battles for supremacy in the Indian market, given that its fares are too low to make the business profitable.

Features of the test:

Without the benefit of being an Uber insider, here are my best guesses for key features of this test:

  • Target Audience: Heavy users who spend a significant amount of money and seem to use Uber for most, if not all, of their transportation needs
  • Test: Price sensitivity. If people book their car without checking the price, they are likely to be less sensitive to the price they pay for the ride
  • Business Objective: Revenue and profitability

To a mobile B-to-C product person like me, the test looked very well designed. Notice the text “your total fare would be updated in the trip feed once you connect with the nearest driver”.

1. Since you have to take an extra step to check the fare in the trip feed, it’s likely some riders would not do so. Call this group “highly insensitive to price”
2. People who skipped viewing the fare while booking but did check the price later in the trip feed can be called “somewhat sensitive to price”
3. People who clicked “Cancel” on the prompt and checked the price before booking are “sensitive to price”
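If I were instrumenting this on Uber’s side, the bucketing would be trivial. Here is a purely speculative sketch, with hypothetical field names, showing how those three segments fall out of just two logged signals per ride:

```python
from dataclasses import dataclass

@dataclass
class Ride:
    """Per-ride signals a client app could log (field names are hypothetical)."""
    skipped_fare_at_booking: bool    # tapped "Continue" on the new prompt
    checked_fare_in_trip_feed: bool  # opened the trip feed to see the fare

def price_sensitivity(ride: Ride) -> str:
    """Bucket a rider's behaviour into the three segments described above."""
    if not ride.skipped_fare_at_booking:
        return "sensitive"            # cancelled the prompt, saw fare first
    if ride.checked_fare_in_trip_feed:
        return "somewhat sensitive"   # booked blind, but checked the fare later
    return "highly insensitive"       # booked blind and never looked

print(price_sensitivity(Ride(skipped_fare_at_booking=True,
                             checked_fare_in_trip_feed=False)))
# -> highly insensitive
```

Part of what makes the design elegant is its economy: one prompt plus one trip-feed view is enough to separate all three segments.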

Testing my hypothesis

Though I have limited data (only one user, me, and fewer than 10 rides), the early indicators are that there is indeed some difference in pricing. Below is the average price I paid on trips where I was “sensitive” to price (i.e., clicked “Cancel” on the prompt) versus “highly insensitive” to price (i.e., clicked “Continue” and did not check the price in the trip feed).

It does appear that being highly insensitive to price has its price: about a 23% higher fare (no pun intended)

Notice that the price I paid when I was highly insensitive to pricing was about 23% higher. Some of the difference is likely due to factors such as traffic delays, which, because I travel short distances during office hours, tend to cause a 10–15% fluctuation in fares. To remove such biases, I averaged trips in both directions (home to work and back) and at different times (morning and evening hours). But given the small dataset, I expect this test to carry some bias.
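For the curious, the arithmetic behind that 23% figure is straightforward. The fares below are placeholders, not my actual trip data, chosen only to reproduce the calculation:

```python
from statistics import mean

# Placeholder fares in rupees; each list mixes home->work and work->home
# trips across morning and evening hours, to average out traffic effects.
sensitive_fares   = [205, 190, 215, 198]  # clicked "Cancel", saw fare first
insensitive_fares = [255, 240, 260, 238]  # clicked "Continue", never checked

avg_s, avg_i = mean(sensitive_fares), mean(insensitive_fares)
uplift = (avg_i - avg_s) / avg_s
print(f"sensitive avg: Rs {avg_s:.0f}, insensitive avg: Rs {avg_i:.0f}, "
      f"uplift: {uplift:.0%}")
# -> sensitive avg: Rs 202, insensitive avg: Rs 248, uplift: 23%
```

Note that a 10–15% traffic-driven fluctuation would by itself explain over half of a 23% gap, which is exactly why averaging both directions and both times of day matters, and why this sample is still far too small to be conclusive.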

What Uber could potentially be doing:

Uber has multiple challenges in India, the top two being brutal competition with local player Ola and a cash burn that shows no signs of receding. At the same time, just like in any other market, it knows it has a “core” group of users who are so hooked on the Uber experience that they would not mind paying more to keep it.
The test is potentially a way to discover core Uber users and see how much extra they will pay.

The ultimate question: Why would anyone pay more for the same ride?

For me, at least a few reasons:

  • I save 75–85% of my monthly transportation cost and enjoy the freedom and flexibility it provides. I know Uber can’t burn VC money forever and I don’t want it to fail in the long run.
  • I also expect that in exchange for a higher price, Uber would eventually provide a differentiated service for its core users. While I don’t work for Uber and have no insider news, such a service could take the form of better-rated drivers, newer cars with better amenities, perhaps even no peak pricing. Rival Ola already has a similar offering called OlaSelect, a monthly subscription service.


Epilogue:

It’s fun being on the sidelines and watching the taxi ride industry play itself out in India. As Uber (I rarely use Ola) rolls out more tests, I will keep you in the loop.

For all product managers reading this, let’s have some more fun. Please comment on:

  • how could you help me collect more data to support or disprove my hypothesis?
  • what else should Uber be doing to improve profitability in India?
  • is Uber running any other tests that you can infer from using the product?
