Taylor Rodgers

How to Deliver Real Analysis – And Escape the Reporting Trap for Good

11/19/2019

Many businesses in the analytics space say they want more insights – and they expect their analysts to come up with those insights.

But not many analysts know how to deliver that. Most don’t do anything analysis-related at all. They find themselves trapped in the world of reporting, where they crank out dashboards and KPIs on a regular basis while resenting their executive overlords for telling them to “include more insights!”

I call this the “reporting trap.” And businesses and BI professionals alike want to escape it. Only they don’t know how.

How to Escape the Reporting Trap – With Tim Wilson

A couple of weeks ago, I interviewed Tim Wilson, Senior Analytics Director at Search Discovery (and co-host of the Digital Analytics Power Hour podcast), for a book I’m writing. We got into a detailed conversation about processes, and he provided a solution to one of the biggest puzzles facing BI professionals – how to escape the reporting trap.
The first thing he does is find out if his project is a reporting project or an analysis project. (He uses the terms performance measurement and hypothesis validation.)

“Performance measurement tends to be recurring,” said Tim. “There needs to be a daily, weekly, or monthly report. There aren’t really insights tied to it. Hypothesis validation is inherently not on a schedule. You don’t get weekly insights about your website, because you’re not changing your website every week.”

In other words, if it sounds recurring, then it’s a measurement. If it sounds like a one-off question (e.g., “could our new campaign be better?”), it’s a hypothesis.

Turn Questions Into Testable Hypotheses

The challenging part is taking those one-off questions and turning them into hypotheses to test – because real, testable hypotheses are key to good analysis projects.

Using the example of a new marketing campaign, Tim suggested that analysts probe for assumptions the stakeholders have. “What’s their speculation for why this might not be working? Does the creative suck? Are the fonts too small?”

He would then reframe those assumptions into a “We believe X” and “If we are right, then Y” format, which turns each assumption into a hypothesis to test.

Here’s an example of that:
We believe there are specific landing pages whose visitors are more likely to convert to a lead.

If we are right, we will experiment with shifting paid media (search and social) to point to those landing pages (with appropriate CTAs).
Tim Wilson said he would then build a “hypothesis library,” where he stores the various hypotheses that come up during these discussions.

Even for a single stakeholder, this hypothesis library should grow quickly, and by the time the campaign launches, analysts should have ample hypotheses to test.
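Tim didn’t prescribe a particular tool for the library, so here is a minimal sketch of what one entry might look like as a structured record, written in Python purely for illustration. The Hypothesis class and its field names are my own assumptions, not his.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    we_believe: str            # the stakeholder's assumption, stated as a belief
    if_right: str              # the action or test to run if the belief holds
    stakeholder: str           # who raised it during requirements discussions
    status: str = "untested"   # untested / in progress / validated / rejected

# A "hypothesis library" can start as nothing fancier than a list of these
# records, appended to after every requirements discussion.
library = [
    Hypothesis(
        we_believe=("There are specific landing pages whose visitors are "
                    "more likely to convert to a lead."),
        if_right=("Shift paid media (search and social) to point to those "
                  "landing pages, with appropriate CTAs."),
        stakeholder="Paid media lead",
    ),
]

A shared spreadsheet works just as well; the point is that every hypothesis gets captured in the same “We believe... / If we are right...” shape, so nothing is lost between launch and the analysis work.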

And here’s the cool thing that really shocked me: reporting projects should be kept to a minimum, and analysis projects should become the norm after the campaign launch.

“[By the point of the launch], the analyst's time on performance measurement is minimal. They immediately jump over to the other path – the hypothesis validation.”

The hypothesis library turns into a task queue. The analysts will evaluate each hypothesis based on level of difficulty and how important it is to the stakeholder.

“That is the core of prioritizing the analyst’s time,” he said. “Some hypotheses require an A/B test. Some hypotheses require a data scientist. Some hypotheses require standing up a survey. So it’s not just a ‘here’s two fields to prioritize the hypothesis and now work from the top of the list down.’ It’s a little messier than that.”
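To make that prioritization concrete, here is a rough sketch that ranks a small backlog by importance and difficulty while keeping the required method visible. The scores, field names, and the simple importance-minus-difficulty ordering are assumptions for illustration, not Tim’s actual process.

# Treat the hypothesis library as a task queue: each entry carries a rough
# importance score (value to the stakeholder), a difficulty score, and the
# method it would take to validate it. All scores below are made up.
backlog = [
    {"hypothesis": "Specific landing pages convert better",
     "importance": 5, "difficulty": 2, "method": "analysis of existing traffic"},
    {"hypothesis": "The creative is underperforming",
     "importance": 4, "difficulty": 4, "method": "A/B test"},
    {"hypothesis": "Visitors drop off because the form is too long",
     "importance": 3, "difficulty": 5, "method": "survey"},
]

# A naive first pass: surface high-importance, low-difficulty work first.
# As Tim notes, the method column matters too (an A/B test or a survey has
# its own lead time and owners), so the ranked list is a starting point,
# not a finished plan.
for item in sorted(backlog, key=lambda h: h["difficulty"] - h["importance"]):
    print(f"{item['hypothesis']:<50} -> {item['method']}")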

This Approach Still Requires Proper Discovery / Requirements

After speaking with Tim, I realized that this approach requires changing our mindsets during the requirements stage. Instead of approaching requirements believing “we need to determine the KPIs,” go in believing “we need to determine KPIs and hypotheses to test.”

But one thing remains the same – requirement discussions are still a necessity. Whether it’s analysis or reporting, you simply cannot deliver high quality work without requirements.

For that, you need a good requirements document. If you don't know what that looks like, I developed a template for analysis projects that uses the "We believe... / If we are right..." format for hypothesis generation.

Visit this link to get a free copy.