Driving user loyalty through rapid experimentation

by Brenda Lienh and Rachel Furst

Experimentation is all about trying new things and seeing what works. When you pair up the right audience with optimal conditions, you can achieve some amazing results. So, what happens when you bring two experience-focused, experimentation-obsessed, type-A product managers together at a trendy Los Angeles coffee spot called The Butcher, Baker and Cappuccino Maker? Magic.

Or…an extremely detailed blog post about how we have sought to change the way our teams work and use experimentation to reach our goals.

Us in Los Angeles, fully caffeinated and nerding out for 2+ hrs.

To provide some context, Rachel leads the Engagement team and Brenda leads the Membership Growth team at Insider. We both realized that our two teams shared some common goals, especially as the company has continued to scale and grow, and we share a keen interest in understanding how we could better serve our users and get them to come back to Insider more often. That fateful meeting has ultimately benefited the company overall: more return users means more eyeballs and engagement on-site, and it increases the potential for those same users to enter our membership ecosystem by signing up for a newsletter, downloading our app, creating an account, and/or becoming a subscriber.

Engagement features are often nebulous in concept, large in scope, and time-consuming to build, and their results can be hard to measure. To address these potential pitfalls while increasing output, we landed on a process of experimentation. Experimentation is the best way to test ideas quickly and iteratively. It allows teams to succeed quickly, and also to fail quickly, so that they can learn, then pivot or persevere accordingly.

Embracing our shared goal also meant combining our teams, so we formed the joint Loyalty Team. To ensure that the teams were truly integrated, the combined group included product and engineering resources and utilized a new way of working based on a “pod” concept. Pods were created to focus on specific areas, which we call “loyalty zones”.

The breakdown of our pod structure.

The program itself effectively became an experiment: would fundamentally changing the way we work together help us better achieve our company goals?

Profoundly changing how a team works together is difficult. Changing up the way of working for multiple teams is even harder. To make sure that our two teams would work efficiently together, we had to make sure that everyone was crystal clear on what the plans and goals were.

To help get the parties on the same page, we focused on several key areas, including:

Loyalty Zones: There are many ways that our “Engagement Experimentation” could have gone. By going through the brainstorming exercise ourselves, we were able to identify key zones of focus that created new opportunities and were measurable. Creating Loyalty Zones helped us narrow down the KPIs and led to more effective and productive brainstorming sessions.

Defining KPIs and Proxy Metrics: This step required coordinating with company leadership as well as the Product and Engineering teams to ensure that everyone was clear on which metrics we would be aiming to move and why. It was especially important in this step to confer with Product Intelligence to make sure that the metrics we were choosing made sense and were measurable. The end product from this exercise was a central dashboard where we could measure top-line metrics and compare them to a baseline.
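As a rough illustration of the kind of baseline comparison that dashboard enabled, here's a minimal sketch in Python (the metric names and numbers are made up for this example; this is not our actual dashboard code):

```python
# Compare top-line metrics against a baseline and report the lift.
# Metric names and values here are illustrative, not real Insider data.
baseline = {"return_visitor_rate": 0.18, "weekly_newsletter_signups": 3200}
current = {"return_visitor_rate": 0.21, "weekly_newsletter_signups": 3650}

for metric, base_value in baseline.items():
    lift = (current[metric] - base_value) / base_value * 100
    print(f"{metric}: {base_value} -> {current[metric]} ({lift:+.1f}% vs. baseline)")
```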

Expectation setting: We knew that doing something new and different would be difficult. We had idealistic expectations of how each sprint would go, but we knew the process would not be smooth right off the bat. To make that clear, we used what is now the infamous “chunky smoothie” analogy: our experimentation program (much like our experiments themselves) was like making a smoothie. At the beginning, it would be like when your blender first starts blending your whole chunks of frozen fruit — clunky and a little messy. But after a little bit of time and adjustment, we’d end up with a nice, smooth smoothie.


Explaining the end goal while acknowledging that it was idealistic and would take work to get there helped our team understand that things would not be perfect out of the gate, and that we would sometimes need to make adjustments. It gave us cover and understanding when we needed to do things differently (like adjusting the way we refined tickets, changing up our meeting schedules, or shifting our focus areas) and empowered the team to speak up to improve our process.

Brainstorming: Once we had defined our zones of focus, KPIs, and proxy metrics, we could move into brainstorming. We held two sessions: one with our partners on the Business and Editorial side, and one with the Product & Tech teams. For each, we set up the brainstorm by laying out the guardrails, metrics, and testing plan. The guardrails were simple: ideas had to be for the App, Newsletter, or Homepage, and had to translate into small, iterative experiments that could be completed within a single sprint.

By doing this, we ensured that folks knew what limitations they had so that we spent time discussing experiment ideas that were feasible and viable for us.

This gave us our first set of ideas to validate through analyses and user tests to understand whether they had potential. Jumpstarting experimentation with a good backlog of ideas was important for keeping the fast-paced, iterative cycle going.

Getting buy-in: We got buy-in by doing a “roadshow” of our experiment program plan to stakeholders and partners from across the organization.

Three key factors played into our success.

You might think that once we had buy-in from the right stakeholders, we'd be ready to dive right into experimentation. In reality, the real work was about to begin. If we wanted to minimize the chunky phase of our smoothie, we had to plan carefully and think ahead.

The first piece of planning we needed to do was timing the work cadence of each group in our experimentation cycle: Product, UXD, Data, and Engineering. We found that keeping up with a demanding experimentation cycle (where we were releasing and winding down experiments every sprint) required Product to be thinking two steps ahead of Engineering, and one sprint ahead of UXD and Data. Here's an example of what that might look like: while Engineering builds the experiment shipping this sprint, UXD and Data are designing and instrumenting the experiment for the next sprint, and Product is already vetting and scoping candidates for the sprint after that.

Knowing we'd need to have experiments lined up for three sprints at the start of experimentation made us realize we had to dig into the backlog of our brainstorm ideas and vet them more closely. Could an idea be built within a single sprint? Could we measure its impact against our KPIs? Did it fit within our guardrails and loyalty zones? These were the kinds of questions we had to ask ourselves to make smarter decisions about the first experiments we tried.

The preparation didn’t end there. (We know, we know, you’re thinking: you had to do this much work before you even started to experiment?! Well, as Rachel’s mom likes to say: “chance favors a prepared person.”) If we were going to reap the full benefits of experimentation, we knew we had to plan as much as we could in advance, and remove all of the roadblocks we could foresee.

One such roadblock involved our design team. To avoid slowing them down, we came up with a list of potential design blockers up front, such as the removal of ads, guardrails around pop-ups, and awareness of overcrowding pages with CTAs. Admittedly, we didn't quite catch all of these before our experimentation process began (and as a result ended up slowing down or stressing out our design team; sorry, guys!), but we did learn the importance of getting ahead of these potential blockers.

Another key roadblock we had to mitigate was stakeholder misalignment. You can get ahead of this by communicating your experiment ideas, along with preliminary designs and launch schedules, so stakeholders know what to expect and how to plan for it. We did this by sharing a slide deck with stakeholders covering all of our planned upcoming experiments, their associated hypotheses, and proposed designs.

But we weren’t perfect at this, of course. For one experiment, we notified stakeholders too late. They had campaigns that conflicted with our proposed launch date. Not wanting to delay our launch, we had to compromise on the test traffic size and duration. It was not the end of the world, but a reminder that stakeholder buy-in was critical for the full success of our experiments.


Now that you have a sense of all the background prep that went into starting up experimentation, you might be wondering: what actual work did the team need to account for while actively experimenting?

Here’s where we introduce the “build, measure, learn” cycle.

The BML cycle is a core tenet of lean product development. As part of our experimentation on the Loyalty Team, we followed the BML methodology because its principles enable teams to fail and learn fast.

At the onset of the build phase, it’s key to start with an MVP, or minimum viable product, which is the simplest form of an idea that demonstrates its functionality. The reason to start with an MVP is to save time and effort, a development team’s most precious resources. If the idea fails, you can rest assured that you didn’t spend more than you needed to prove the idea. If the idea proves to be a winner, then you can invest more time into refining the idea and optimizing its value.

The way you know you’ve designed an MVP is when you’ve looked at every aspect of the product you’re building and asked: is this essential or an enhancement? Once you’ve cut out all of the enhancements, you know you’ve reached an MVP.

For example, in one of our newsletter experiments, we wanted a newsletter sign-up CTA to stop appearing once a user had signed up. The test's goal, though, was to validate whether the new CTA would attract sign-ups at all. So, rather than build the enhancement of hiding the CTA for subscribers up front, we narrowed the scope of the experiment to testing only the new CTA, knowing that if it succeeded we'd enhance the experience afterward, saving us costs up front.
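To make the essential-versus-enhancement distinction concrete, here's a minimal sketch in Python (the field and function names are hypothetical, not our production code):

```python
def should_show_newsletter_cta(user: dict, enhanced: bool = False) -> bool:
    """Decide whether to render the new newsletter sign-up CTA."""
    if not enhanced:
        # MVP scope: always show the new CTA, so we can first validate
        # whether it attracts sign-ups at all.
        return True
    # Enhancement, built only after the MVP proves out: stop showing
    # the CTA to users who have already signed up.
    return not user.get("is_newsletter_subscriber", False)

# MVP behavior vs. enhanced behavior for an existing subscriber:
subscriber = {"is_newsletter_subscriber": True}
print(should_show_newsletter_cta(subscriber))                 # True (MVP)
print(should_show_newsletter_cta(subscriber, enhanced=True))  # False
```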

While the development team is building out the MVP, it's a good time to collaborate with the data team. On the Loyalty Team, we ensured this by setting up a bi-weekly meeting with our Data Team partners: we spent half of each meeting looking at the outcomes of prior tests and the other half on upcoming tests.

Types of tests: When thinking about testing an experiment, it’s important to determine the type of test you want to run, the KPIs you’ll be looking at, and the success benchmarks needed to roll out the new feature. On the Loyalty Team, we did a combination of tests. We ran A/B tests, time period comparisons, and, in some cases, we just rolled out an experiment from the start.
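For the A/B variety, one common implementation detail is deterministic bucketing, where a hash of the user ID decides the variant so each user gets a stable assignment. Here's a minimal sketch in Python (the experiment name is hypothetical; this is not Insider's actual assignment code):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple = ("control", "treatment")) -> str:
    """Deterministically assign a user to an experiment variant.

    Hashing the user ID together with the experiment name gives each user
    a stable assignment per experiment, with a roughly even split.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always lands in the same bucket for a given experiment.
print(assign_variant("user-123", "homepage-newsletter-cta"))
```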

Success benchmarks: Success benchmarks are needed to determine whether an experimental feature is worth rolling out more widely. You might be thinking, "if that feature wasn't there before, wouldn't any engagement be an improvement?" Sure, but when you consider the limited real estate on a web page, and the even more limited window you have to grab a user's attention, you want to make sure you're only presenting the best of the best. Scrapping something that's only moderately effective is often a good idea, because it leaves room for something even better.

How do you determine success benchmarks? By identifying KPIs: essentially, the metrics that demonstrate your users are behaving the way you want them to. For example, for our newsletter tests, we knew that a KPI we wanted to track was the number of sign-ups. For a given test, if we saw sign-ups increase, especially to the same levels as (or beyond) other newsletter CTAs we had implemented, then we knew the test was a winner.

However, determining the test type, success benchmark, and KPIs shouldn’t be done in a vacuum, but in partnership with your data team. They are experts in data, and so it’s necessary to get their input and perspective on the tests you want to run. They will also be instrumental in helping you interpret the data once a test is complete.

Test Duration: This brings us to the last part of the “measure phase” of the BML cycle: determining test duration. The ideal test length is simple; it is the point in time when you’ve reached statistical significance. Since experimentation is best when done quickly, you don’t want to set an arbitrary test duration. Instead, launch the test, monitor its performance and, as soon as you see it winning (or failing!), stop and move on to your next step.
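For the significance check itself, a standard option when comparing conversion rates is a two-proportion z-test. Here's a minimal, self-contained sketch in Python (the sign-up counts are made up for illustration):

```python
from math import erf, sqrt

def two_proportion_pvalue(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))      # normal-CDF tail, both sides

# Illustrative: 120 sign-ups from 4,800 control views vs. 165 from 4,750 treatment views.
p_value = two_proportion_pvalue(120, 4800, 165, 4750)
print(f"p-value: {p_value:.4f}")  # below 0.05 is a common significance threshold
```

As with choosing the test type, lean on your data team here; they may prefer a different test, or a sequential method designed for exactly this kind of continuous monitoring.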

Learning comes as you interpret the results of your experiment. Was it successful? Was it a failure? Either way, you’ve now reached a pivotal point in the cycle: should you pivot or persevere? If the test was successful, think about how you can enhance the feature to make it better. This might mean deviating from the plan you set out at the beginning of experimentation. That’s ok. It’s worth iterating on a winning idea instead of starting from scratch on a new idea. If the idea didn’t work out, or perhaps there are no new enhancements to be made, then move on to the next idea and start the BML cycle again.

So how did we do?

In our first quarter of experimentation, we experienced a lot of success against our goals and learned so much more about the behaviors of our users.

There were also successes beyond our main metrics. While our "pod" team structure won't be applicable to every team, we found it very successful in encouraging cross-team collaboration and knowledge sharing. This experience gave our engineers an opportunity to work and build outside of their normal focus areas while also exposing them to new ways of working!

Overall, experimentation is an effective way to quickly try new concepts while learning a lot about your users and what works best for them. It requires planning, patience and open-mindedness, but when done well, it can be a powerful tool for any organization.

A huge thank you to the business stakeholders and leadership for their collaboration, and of course, the amazing team members below that made this possible and put their faith in us!

Amer Farge, Ariel Jakubowski, Audrey Hazdovac, David Torres, Devon Darrow, Eric Zou, Gabriel Symons, Harry Hope, Ingrid Lederman, Jenna Tart, Joan Gonzalez, Lucas Gati, Luiz Cieslak, Marcus Lyons, Nick Trefz, Nicolas Kivatinetz, Reuben Ingber, Samantha Chu, Samir Yahyazade, Sarah Canavan, Shilpa Bhagat, Shirin Bansal, Sofie Coopersmith, Starr Chen, and Zukhra Murray
