What is an effective offer on the Starbucks app?

Dawit Hassen
10 min read · Sep 10, 2020

1. Introduction

Once every few days, Starbucks sends out an offer to users of the mobile app. An offer can be merely an advertisement for a drink or an actual offer such as a discount or BOGO (buy one get one free). Some users might not receive any offers during certain weeks. Not all users receive the same offer, and that is the challenge to solve with this data set.
In this project, we combine transaction, demographic, and offer data to determine which demographic groups respond best to which offer type. This data set is a simplified version of the real Starbucks app, because the underlying simulator only has one product, whereas Starbucks actually sells dozens of products.
Every offer has a validity period before the offer expires. As an example, a BOGO offer might be valid for only 5 days. You’ll see in the data set that informational offers have a validity period even though these ads are merely providing information about a product; for example, if an informational offer has 7 days of validity, you can assume the customer is feeling the influence of the offer for 7 days after receiving the advertisement.

2. Business Understanding

The program used to create the data simulates how people make purchasing decisions and how those decisions are influenced by promotional offers.

There are three types of offers that can be sent: buy-one-get-one (BOGO), discount, and informational. In a BOGO offer, a user needs to spend a certain amount to get a reward equal to that threshold amount. In a discount, a user gains a reward equal to a fraction of the amount spent. In an informational offer, there is no reward, but neither is there a required amount that the user is expected to spend. Offers can be delivered via multiple channels.

We are interested in answering the following questions:

  1. What are the users' characteristics and demographics?
  2. Which offer should be sent to a particular customer to encourage them to buy more?
  3. Would a user take up an offer?
  4. Which demographic groups respond best to which offer type?

3. Dataset Description

The data is contained in three files:

  1. portfolio.json — containing offer ids and metadata about each offer (duration, type, etc.)
  2. profile.json — demographic data for each customer
  3. transcript.json — records for “transactions”, “offers received”, “offers viewed”, and “offers completed”

Here is the schema and explanation of each variable in the files:

3.1. portfolio.json

· id (string) — offer id
· offer_type (string) — the type of offer, i.e. BOGO, discount, or informational
· difficulty (int) — the minimum spend required to complete an offer
· reward (int) — the reward given for completing an offer
· duration (int) — time for the offer to be open, in days
· channels (list of strings) — the channels through which the offer is delivered

3.2. profile.json

· age (int) — age of the customer
· became_member_on (int) — the date when the customer created an app account
· gender (str) — gender of the customer (note some entries contain ‘O’ for other rather than M or F)
· id (str) — customer-id
· income (float) — customer’s income

3.3. transcript.json

· event (str) — record description (i.e. transaction, offer received, offer viewed, etc.)
· person (str) — customer id
· time (int) — time in hours since the start of the test; the data begins at time t=0
· value (dict of strings) — either an offer id or a transaction amount, depending on the record
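For reference, here is a minimal loading sketch with pandas. The data/ path and the line-delimited JSON layout are assumptions about how the files are stored locally:

```python
import pandas as pd

# Assumed: the three files sit in ./data/ as line-delimited JSON records.
portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True)
profile = pd.read_json('data/profile.json', orient='records', lines=True)
transcript = pd.read_json('data/transcript.json', orient='records', lines=True)

print(portfolio.shape, profile.shape, transcript.shape)
```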

4. Data Exploration

In order to understand the problem better, we first need to explore the datasets, which includes checking for missing values, visualizing the data distributions, and so on. That way, we can get a better sense of what the dataset looks like and how to select the important features to support the model implementation.

Figure 1. Portfolio dataset

As shown in Figure 1, there are no missing values in the portfolio dataset.

Figure 2. Profile dataset

From the first 5 rows, we can already see some null values in gender and income, while the age column contains some values that don't make sense (e.g., 118).

Figure 3. Profile dataset

As we can see in Figure 3 above, the rows with age=118 are exactly the rows with null gender and income values. Thus, we can drop them during preprocessing, provided they do not make up too large a proportion of our data.
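A minimal cleaning sketch along those lines, assuming the `profile` DataFrame loaded earlier (the datetime parsing of `became_member_on` is an extra convenience step, not something the figures above require):

```python
import pandas as pd

# Assumed: `profile` is the DataFrame loaded from profile.json.
# Rows with age == 118 are the same rows whose gender and income are
# null (Figure 3), so dropping null demographics removes them as well.
clean_profile = profile.dropna(subset=['gender', 'income']).copy()
assert (clean_profile['age'] != 118).all()

# Convenience: parse the integer date (e.g. 20170715) into a datetime.
clean_profile['became_member_on'] = pd.to_datetime(
    clean_profile['became_member_on'].astype(str), format='%Y%m%d'
)
```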

Figure 4. Transcript dataset

As we can see in Figure 4 above, the transcript records customer purchases along with when customers received, viewed, and completed offers. An offer is only successful when a customer both views the offer and meets or exceeds its difficulty within the offer's duration.
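Because the `value` column holds small dictionaries whose keys differ by event type, it helps to flatten it before any analysis. A sketch, assuming the `transcript` DataFrame loaded above; the mixed 'offer id' / 'offer_id' key spelling is how this dataset ships:

```python
import pandas as pd

# Assumed: `transcript` is the DataFrame loaded from transcript.json.
# Offer events store the id under 'offer id' or 'offer_id' (the key
# spelling varies by event type); transactions store an 'amount'.
def unpack_value(value):
    return pd.Series({
        'offer_id': value.get('offer id', value.get('offer_id')),
        'amount': value.get('amount'),
    })

transcript = transcript.join(transcript['value'].apply(unpack_value))
```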

4.1. Visualizing the data distribution

4.1.1. What are user characteristics and demographics?

Figure 5 shows customer income versus gender. As shown in the graph, there are more male customers; this may be because their incomes are higher, as the same chart shows.

Figure 5. Income vs Gender

Figure 6 shows the distribution of customer income. As shown in the graph, the most common incomes fall between 60,000 and 80,000.

Figure 6. Distribution of customer income

Figure 7 shows the age range of the customers. As shown in the graph, the largest group is between 50 and 60 years old.

Figure 7. Age range

4.1.2. Would a user take up an offer?

Interest in measuring customer satisfaction is growing: more and more companies want to know how satisfied their customers are. Customer satisfaction drives business success, and data scientists can provide insight into what customers think.

There are three types of offers:

  1. BOGO(buy one get one free)
  2. Discount
  3. Informational

Figure 8 shows the event counts for the three offer types. As shown in the graph, 'offer received' is the most frequent event for BOGO, discount, and informational offers alike. The lowest count of completed offers is recorded for BOGO, while informational offers only have 'offer received' and 'offer viewed' events.

Figure 8. Offer types

Figure 9 shows the offers received by customers. BOGO has the highest count of offers received.

Figure 9. Offer received

Figure 10 shows the percentage distribution of the offers. BOGO has the largest share of the three offer types.

Figure 10. Offer Distribution

Based on Figures 8, 9, and 10, most of the customers didn't take up the offers.

5. Data Preprocessing

To begin, I explored what kinds of events occur within each offer type. In order to identify the main drivers of an effective offer, I first had to define what an 'effective' offer is within the Starbucks app. To find out what mainly determines whether a sent offer ends in a completed transaction, the preprocessing step also needs to merge the events belonging to each specific offer, so that we can trace which offers were received, viewed, and finally completed with a transaction.

Figure 11. Offer list

As we can see in Figure 11 above, there are four groups of people:

a. People who are influenced and successfully convert — effective offers.
b. People who received and viewed an offer but did not successfully convert — ineffective offers.
c. People who purchase/complete offers regardless of the awareness of any offers.
d. People who received offers but took no action.
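To make group (a) concrete, here is a simplified labeling sketch: an offer instance counts as effective when the customer views it and completes it within its duration. It assumes an `events` frame built from the flattened transcript merged with the portfolio's `duration` column, and it deliberately ignores edge cases such as the same offer being sent twice to one customer:

```python
import pandas as pd

# Assumed: `events` has columns person, event, offer_id, time (hours),
# and duration (days, merged in from portfolio). Transactions have a
# null offer_id and are dropped by the groupby below.
def label_offer(group):
    received = group.loc[group['event'] == 'offer received', 'time'].min()
    viewed = group.loc[group['event'] == 'offer viewed', 'time'].min()
    completed = group.loc[group['event'] == 'offer completed', 'time'].min()
    expiry = received + group['duration'].iloc[0] * 24  # days -> hours
    # Effective: viewed after receipt and completed before expiry.
    effective = (
        pd.notna(viewed) and pd.notna(completed)
        and received <= viewed <= completed <= expiry
    )
    return int(effective)

effective_offers = (
    events.groupby(['person', 'offer_id'])
          .apply(label_offer)
          .rename('effective_offer')
          .reset_index()
)
```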

• Prepare the data set; set the feature variables and target columns.

Figure 12. Feature variables and target columns

• Split the data set into training and test sets

Figure 13. Splitting the data set into training and test sets
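A sketch of these two steps, assuming a merged, preprocessed frame `model_df`; the feature names below are illustrative stand-ins for the actual engineered columns:

```python
from sklearn.model_selection import train_test_split

# Assumed: `model_df` combines demographics, offer attributes, and the
# binary effectiveness label. Feature names here are illustrative.
features = ['age', 'income', 'membership_days', 'gender_M', 'gender_O']
X = model_df[features]
y = model_df['effective_offer']

# Hold out 20% of the rows for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
```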

6. Building Models

6.1 Algorithm

To build the models, I use a Random Forest Classifier (RF), an ensemble learning method for classification, and a Decision Tree Classifier (DT), a predictive model that maps observations about an item to conclusions about its target value. To test the predictions, I split the dataset into a training set and a testing set. We want to use the earlier records as the training set to build the model, and then use that same model on the later records. This is a good test of whether the model predicts customer responses correctly.
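A minimal sketch of fitting the two classifiers, assuming the train/test split from the previous section; the hyperparameters shown are scikit-learn defaults, not the project's tuned values:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Assumed: X_train, X_test, y_train, y_test from the earlier split.
rf = RandomForestClassifier(n_estimators=100, random_state=42)
dt = DecisionTreeClassifier(random_state=42)

rf.fit(X_train, y_train)
dt.fit(X_train, y_train)

print('RF test accuracy:', rf.score(X_test, y_test))
print('DT test accuracy:', dt.score(X_test, y_test))
```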

6.2 Metrics

A note on model evaluation and validation: I use accuracy and F1 score as the model evaluation metrics. The F1 score gives a better sense of model performance than pure accuracy, as it takes both false positives and false negatives into account. With an uneven class distribution, F1 is usually more informative than accuracy. It is worth noting that the F1 score is based on the harmonic mean of precision and recall, and focuses on positive cases. For the Starbucks app, that is fine, as we care more about whether offers are effective and less about why offers are ineffective.
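Both metrics are one call each in scikit-learn. A sketch, assuming a fitted classifier `rf` and the held-out test split from above:

```python
from sklearn.metrics import accuracy_score, f1_score

y_pred = rf.predict(X_test)
print('Accuracy:', accuracy_score(y_test, y_pred))
# f1_score defaults to the positive class (effective_offer = 1) and is
# the harmonic mean of precision and recall on that class.
print('F1 score:', f1_score(y_test, y_pred))
```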

6.3. Building model

After pre-processing the data, the next step is to implement models to figure out which factors most affect whether a customer will respond to an offer. We therefore use the 'offer_responded' flag in the dataset as the target for predicting whether a customer will respond to an offer or not. Since we have 3 offer types, 3 different models are built, one for each offer type. And since we are predicting whether an offer will be effective or not, each model is effectively a binary classification supervised learning model. The results per offer type are:

  1. BOGO offers model: the accuracy of the Random Forest Classifier (RF) model actually ends up slightly outperforming the Decision Tree Classifier (DT) model, but overall the performance of both models is about the same (82.14% vs 81.77% in terms of accuracy). Accuracy above 80% for a first attempt is quite good; I will try to tune the model further to improve it. However, in terms of F1 score, both models are below 80%, with the Random Forest model performing worse than the Decision Tree Classifier (75.91% vs 79.63%).
  2. Discount offers model: this time, the Random Forest Classifier model again performs better than the Decision Tree Classifier in terms of accuracy (87.23% vs 86.72%), while its F1 score is lower (81.43% vs 82.87%). The F1 scores for these models are lower overall than the accuracy scores, which could indicate that both models falsely classify some negative cases (effective_offer = 0). Again, I am not too bothered by this, as I am more concerned with the model predicting positive cases accurately, so I would rather go with the higher-accuracy model whose F1 score for effective_offer = 1 cases is higher, and there our RF classifier performs better (0.9317 vs 0.9280).
  3. Informational offers model: the performance of these models is worse than on the other two datasets, with accuracy below 80% for both models, though the RF model still performs better. The F1 scores are also worse: 67.54% for the RF classifier, below the DT model's 68.66%.

7. Model tuning

This section will attempt to tune the parameters of the initial model to get higher performance. In the tuning section, we will use GridSearch to search for parameters that are likely to get better model performance. I will first try parameter tuning for the 3 RF models, before experimenting with removing or adding features to improve model performance.

Since I will be comparing the models based on the testing score repeatedly, I built a function to find the best RF model results based on refinement depending on the offer type. I decided to do GridSearch to determine what would be the optimal parameters for the model.
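A sketch of that search, assuming the training split from section 6; the parameter grid is an illustrative choice, not the exact grid used in the project:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Illustrative grid; the project's actual search space may differ.
param_grid = {
    'n_estimators': [50, 100, 200],
    'max_depth': [None, 5, 10],
    'min_samples_split': [2, 5, 10],
}
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    scoring='f1',  # optimize the metric we report alongside accuracy
    cv=5,
    n_jobs=-1,
)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```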

  1. BOGO offers model: the accuracy of the RF model increased slightly, from 82.14% to 82.51%, and the F1 score increased from 75.91% to 77.64%. This is a welcome but minimal improvement, which suggests there is not much more that parameter tuning alone can do for this model.
  2. Discount offers model: the accuracy of the model increased slightly, from 87.23% to 87.47%, and the F1 score improved from 81.43% to 82.06%. The good news is that both the accuracy and the F1 score of the RF model are now better than those of the DT model.

Conclusion

Overall, we can see that the top-performing models for predicting the effectiveness of BOGO and discount offers are the second models, with GridSearch used to find the optimal parameters; the best-performing model for informational offers was likewise the one obtained after GridSearch.

My decision to use a separate model for each offer type ended up giving good accuracy for the BOGO and discount models (82.83% for BOGO and 87.35% for discount), with somewhat lower accuracy for informational offers (75.3%). However, I would regard 75% as acceptable in a business setting, since for informational offers there is no direct cost involved in informing users about a product.

From the results of the project, it is feasible to use a machine learning model to predict whether a customer will respond to an offer, and the model also highlights the main factors, such as length of membership, age, and income, that most strongly affect the likelihood of a customer responding to an offer.
